Hi all,
I'm trying to get all the years of occurrence for a specific organism.
I tried paginating with the limit and offset parameters. For example:
http://api.gbif.org/v1/occurrence/search?offset=1&limit=30
But I noticed that each successive page has duplicate results. For example:
for the scientific name Aaptos aaptos, I tried limit=30 and increased the
offset parameter as follows:
offset=0&limit=30 and offset=1&limit=30
The two queries have 29 common results.
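A minimal way to reproduce the overlap, assuming jq is available for pulling
out the record keys (the scientificName filter is just how I scope the search):
curl -Ss 'http://api.gbif.org/v1/occurrence/search?scientificName=Aaptos%20aaptos&offset=0&limit=30' | jq '.results[].key' > page_a.txt
curl -Ss 'http://api.gbif.org/v1/occurrence/search?scientificName=Aaptos%20aaptos&offset=1&limit=30' | jq '.results[].key' > page_b.txt
comm -12 <(sort page_a.txt) <(sort page_b.txt) | wc -l   # number of keys shared by the two pages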
Is this supposed to happen or am I doing something wrong?
Thanks,
--
Akshat Pant
*Graduate Student | University of Maryland*
Master of Information Management
Hi,
I am trying to use the asynchronous download service.
I used the following curl query:
curl -i --user akshat26:**** -H "Content-Type: application/json" -X POST -d
@query.json 'https://api.gbif-uat.org/v1/occurrence/download/request'
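For reference, my understanding is that query.json needs the standard
download-request body, roughly like the following (the notification address
and the taxon key here are just placeholders):
{
  "creator": "akshat26",
  "notificationAddresses": ["someone@example.org"],
  "sendNotification": true,
  "predicate": {
    "type": "equals",
    "key": "TAXON_KEY",
    "value": "2435099"
  }
}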
The request itself seems to succeed:
HTTP/1.1 201 Created
Date: Fri, 20 Apr 2018 23:33:25 GMT
Content-Type: application/json
Location:
http://api.gbif-uat.org/occurrence/download/request/0000049-180416113528795
Access-Control-Allow-Origin: *
Access-Control-Allow-Methods: HEAD, GET, POST, DELETE, PUT
Connection: close
Server: Jetty(9.2.z-SNAPSHOT)
but the download link doesn't seem to be working when I then check it with:
curl -Ss http://api.gbif.org/v1/occurrence/download/0000049-180416113528795
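For reference, my understanding of the intended flow is to poll the returned
key until the download is ready and then fetch the ZIP itself, something like
(using the same key and the gbif-uat host that created it):
# poll the metadata; "status" should eventually become SUCCEEDED
curl -Ss 'https://api.gbif-uat.org/v1/occurrence/download/0000049-180416113528795' | jq '.status'
# once it has succeeded, fetch the archive
curl -Ss -L -o download.zip 'https://api.gbif-uat.org/v1/occurrence/download/request/0000049-180416113528795'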
I hope someone can help
--
Akshat Pant
*Graduate Student | University of Maryland*
Master of Information Management
Hi all,
Dimiano (in CC) has identified a potential problem with the species API;
see https://github.com/ropensci/rgbif/issues/299 (in particular the
/species/search route). The gist is that we think duplicates are returned
when paging through that route for a given search.
This comment has curl-based examples so you can replicate the issue:
https://github.com/ropensci/rgbif/issues/299#issuecomment-378775164
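In short, the pattern is along these lines (the query term here is just a
placeholder; the real examples are in the linked comment):
curl -Ss 'https://api.gbif.org/v1/species/search?q=Poa&limit=20&offset=0'
curl -Ss 'https://api.gbif.org/v1/species/search?q=Poa&limit=20&offset=20'
# compare the "key" fields in .results across the two responses to check for repeats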
Is it a possible GBIF problem, or are we doing something wrong?
Thanks, Scott
Regarding the archive available for:
https://www.gbif.org/dataset/d7dddbf4-2cf0-4f39-9b2a-bb099caae36c
I attempted to download the archive using a command-line tool, but cannot, because all I get back is:
"We’re sorry, but GBIF doesn’t work properly without JavaScript enabled."
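Roughly, the command-line attempt was along these lines, with only the
JavaScript application shell (containing that message) coming back:
curl -sL 'https://www.gbif.org/dataset/d7dddbf4-2cf0-4f39-9b2a-bb099caae36c'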
I started the download from my browser instead, and the estimated download time keeps bouncing between 8 and 12 hours.
Two questions:
1. Is there a location on the Internet where I can download that file in a reasonable timeframe?
2. Is there a URL that will function properly to download the file without requiring JavaScript?
Is there any interest in developing mirrors or a CDN to host some of these things within the community?
Thanks,
Dan Stoner
iDigBio / ACIS Laboratory
University of Florida
Hi,
Is there any further information on what a "shortname" is? It is used in the
API route
/species/root/{uuid|shortname}
described at https://www.gbif.org/developer/species
I assume "uuid|shortname" means a UUID or a shortname, so perhaps a shortname
is a short name for a dataset?
Thanks, Scott
Dear GBIF users,
In recent months, usage of the GBIF.org website and APIs has increased
significantly. Many more users are downloading data from GBIF.org, and
it is no longer practical for us to keep all download files indefinitely.
Starting from yesterday [1], new download files made through GBIF.org
will be marked for deletion six months in the future. Users can request
that their download files be kept for a longer period through
www.GBIF.org [2]. If a download has been cited in research, we will
keep it for as long as possible – notify us of this through the website.
DOIs are still generated, and the download metadata (the filter used,
datasets involved and so on) will be kept forever. Only the ZIP file
itself will be deleted.
Email reminders will be sent before download files are deleted.
For the moment, almost all downloads from before 2018-02-12 will be
kept. A few users have multiple very large downloads; these account for
the majority of the storage space used and in many cases appear to have
been made in error. I will send individual emails to these users asking if
they still need to retain these downloads.
Thank you,
Matthew Blissett
[1] Starting from 2018-02-12 14:05 UTC, to be precise.
[2] To request a download file be kept for longer, go to the download
page, perhaps via https://www.gbif.org/user/download, and click
"Postpone Deletion". Only downloads with a future deletion date will
show the option to postpone.
Hey,
Is there a way of downloading specific occurrences based on ID/key?
E.g. for http://api.gbif.org/v1/occurrence/1038345215 I tried this, but it doesn’t seem to work:
"predicate":
{
"type":"equals",
"key":"OCCURRENCE_ID",
"value":"1038345215"
}
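For what it's worth, the record's GBIF key and its occurrenceID are separate
fields, which can be checked with something like (assuming jq):
curl -Ss 'http://api.gbif.org/v1/occurrence/1038345215' | jq '{key, occurrenceID}'
I'm not sure which of the two the OCCURRENCE_ID predicate is matched against.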
Thanks,
Daniel
Hi all,
I'm trying to get all the years of occurrence for a specific organism,
but I only get 20 results for each organism I search for.
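For reference, the query I'm running is along these lines (Aaptos aaptos is
just the organism I'm testing with):
curl -Ss 'http://api.gbif.org/v1/occurrence/search?scientificName=Aaptos%20aaptos'
# the response reports a much larger "count" but only includes the first 20 records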
How do I get all the results?
Thanks,
Akshat