I could also mention that it is possible to script the creation of Darwin Core Archives and then use the GBIF Registry API to handle the connections with GBIF. Symbiota, PlutoF and some others are doing this successfully.
It does require some initial coordination with our Product Team on how to set up the registration process, and potentially with our Informatics Team as well.
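For your technical team, a rough sketch of what such a workflow could look like in Python is below, assuming the v1 registry endpoints documented at https://www.gbif.org/developer/registry. Every key, credential, file name, and URL in it is a placeholder, and the actual registration details (which account, organization, and installation records to use) are exactly what would need to be agreed with the teams above.

#!/usr/bin/env python3
"""Sketch: build a Darwin Core Archive and register it with the GBIF registry.

Assumptions (placeholders, not working values):
  - a GBIF account with editor rights on the publishing organization
  - an already-registered publishing organization and installation
  - the archive is hosted at a URL GBIF's crawlers can reach
"""
import zipfile

import requests

API = "https://api.gbif.org/v1"
AUTH = ("your_gbif_user", "your_password")   # placeholder credentials
ORG_KEY = "<publishing-organization-uuid>"   # placeholder UUID
INSTALLATION_KEY = "<installation-uuid>"     # placeholder UUID


def build_archive(archive_path: str) -> None:
    """Zip the three pieces of a Darwin Core Archive."""
    with zipfile.ZipFile(archive_path, "w", zipfile.ZIP_DEFLATED) as z:
        z.write("occurrence.txt")  # the data file(s)
        z.write("meta.xml")        # maps columns to Darwin Core terms
        z.write("eml.xml")         # dataset metadata (EML)


def register_dataset(archive_url: str) -> str:
    """Create the dataset in the registry, then attach the archive endpoint."""
    dataset = {
        "publishingOrganizationKey": ORG_KEY,
        "installationKey": INSTALLATION_KEY,
        "type": "OCCURRENCE",
        "title": "Example large occurrence dataset",  # placeholder title
    }
    r = requests.post(f"{API}/dataset", json=dataset, auth=AUTH)
    r.raise_for_status()
    key = r.json()  # the registry returns the new dataset's UUID

    # Point GBIF at the hosted archive so it can be crawled and indexed.
    endpoint = {"type": "DWC_ARCHIVE", "url": archive_url}
    r = requests.post(f"{API}/dataset/{key}/endpoint", json=endpoint, auth=AUTH)
    r.raise_for_status()
    return key


if __name__ == "__main__":
    build_archive("dataset.zip")
    # Host dataset.zip somewhere crawlable, then register it:
    print(register_dataset("https://example.org/dwca/dataset.zip"))

This sidesteps the IPT's upload limits entirely: the archive can be as large as your web server will serve, and GBIF crawls it from the registered endpoint rather than receiving it as an upload.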
Best,
Laura
Laura Anne Russell
Programme Officer for Participation and Engagement
Global Biodiversity Information Facility (GBIF) Secretariat
larussell@gbif.org (email)
laura.anne.russell (Skype)
@pagodarose (Twitter)
#CiteTheDOI @GBIF
+45 35 33 35 51 (office, direct line)
GBIF
Universitetsparken 15
DK-2100 Copenhagen Ø
Denmark
From: IPT <ipt-bounces@lists.gbif.org> on behalf of "Simpson, Annie" <asimpson@usgs.gov>
Date: Tuesday, 7 July 2020 at 16.48
To: "ipt@lists.gbif.org" <ipt@lists.gbif.org>
Subject: [IPT] How does one upload large datasets to GBIF?
Colleagues:
What is the easiest or most popular way to send large datasets to GBIF, ones that are too large for the IPT software (I think that is more than 100 MB zipped, 10+ million records)? Does one modify their IPT instance? How? Or is there another preferred process?
We currently have IPT Version 2.3.6-r3985b6a installed and plan to upgrade to 2.4.0 soon.
A technical answer is what I seek (on behalf of our technical team).
Again, my apologies if the answer to my question is easily found and I'm just not finding it.
Annie Simpson, BISON product owner
(she/her/hers)
BioFoundational Data Team
Science Analytics & Synthesis Program
U.S. Geological Survey
12201 Sunrise Valley Dr. Mailstop 302
Reston VA 20192
asimpson@usgs.gov
+1 703-648-4281
bison.usgs.gov