<html>
<head>
<meta content="text/html; charset=windows-1252"
http-equiv="Content-Type">
</head>
<body bgcolor="#FFFFFF" text="#000000">
<p>Hi Clint,</p>
<p>You're first to report a problem, though trying to download the
file (64-bit Linux, wget) within the GBIF network also failed for
me.</p>
<p>I've copied it by a different method to here:
      <a class="moz-txt-link-freetext" href="http://download.gbif.org/2016/12/0039949-160910150852091.zip">http://download.gbif.org/2016/12/0039949-160910150852091.zip</a>
      -- it's on a plain Apache server, so wget's "--continue" option
      should work if necessary.<br>
    </p>
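<p>In case it helps, the mechanism "--continue" relies on is just an
      HTTP Range request: send the byte offset you already have, and a
      server that honours it replies 206 with the remainder. A minimal
      Python sketch of the same idea (the function name is my own, and
      it falls back to a full re-download if the server ignores the
      Range header):</p>

```python
import os
import urllib.request


def resume_download(url, dest):
    """Fetch `url` into `dest`, continuing from any bytes already on disk."""
    offset = os.path.getsize(dest) if os.path.exists(dest) else 0
    req = urllib.request.Request(url)
    if offset:
        # Ask the server to skip the bytes we already have.
        req.add_header("Range", "bytes=%d-" % offset)
    with urllib.request.urlopen(req) as resp:
        # 206 Partial Content: the server honoured the Range header, so
        # append. A plain 200 means it sent the whole file again, so
        # start over rather than corrupt the local copy.
        mode = "ab" if resp.status == 206 else "wb"
        with open(dest, mode) as out:
            while True:
                chunk = resp.read(64 * 1024)
                if not chunk:
                    break
                out.write(chunk)
    return os.path.getsize(dest)
```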
<p>The MD5 checksum is e976523c9e6c7ec0cd9d3cb30030020b and the size
is exactly 43,184,530,448 bytes.<br>
</p>
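<p>Once the file is down, both values can be checked with a short
      Python sketch like the one below (function name mine; it hashes
      in chunks so the 43 GB file never has to fit in memory):</p>

```python
import hashlib
import os

EXPECTED_MD5 = "e976523c9e6c7ec0cd9d3cb30030020b"
EXPECTED_SIZE = 43_184_530_448


def verify(path, expected_md5, expected_size):
    """Return True if `path` has the expected byte size and MD5 digest."""
    if os.path.getsize(path) != expected_size:
        return False
    md5 = hashlib.md5()
    with open(path, "rb") as f:
        # Read 1 MiB at a time to keep memory use flat.
        for chunk in iter(lambda: f.read(1 << 20), b""):
            md5.update(chunk)
    return md5.hexdigest() == expected_md5
```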
If downloading that doesn't work, I could split the file into
chunks. We'll also look into why the download failed [1].<br>
<br>
Cheers,<br>
<br>
Matt Blissett<br>
<br>
[1] <a class="moz-txt-link-freetext" href="http://dev.gbif.org/issues/browse/POR-3199">http://dev.gbif.org/issues/browse/POR-3199</a><br>
<br>
<br>
<div class="moz-cite-prefix">On 07/12/2016 14.53, Coggins, Clint
wrote:<br>
</div>
<blockquote
cite="mid:CAE_DExa-BMNe4S4gJ-KcpP9svKLfYHFTw8us=8b6WE9HmpX1PA@mail.gmail.com"
type="cite">
<meta http-equiv="Content-Type" content="text/html;
charset=windows-1252">
<div dir="ltr">I'm trying to download all the occurrence data with
COUNTRY=US
<div><br>
<div><a moz-do-not-send="true"
href="http://www.gbif.org/occurrence/search?COUNTRY=US">http://www.gbif.org/occurrence/search?COUNTRY=US</a><br
clear="all">
<div><br>
</div>
<div>I've requested a download file, which is here</div>
<div><a moz-do-not-send="true"
href="http://www.gbif.org/occurrence/download/0039949-160910150852091">http://www.gbif.org/occurrence/download/0039949-160910150852091</a><br>
</div>
<div><br>
</div>
<div>However, I've been having a lot of trouble getting the
                download to complete since it is so large (43.2 GB). I've
                tried various browsers and also wget on both Linux and
                Mac. It typically fails with a network error in the
                browser. wget displayed strange behavior in that it
                claimed the download was successful after downloading
                4.1 GB. This was on 64-bit Linux with ext4, so I don't
                think there was a filesystem limitation.</div>
<div><br>
</div>
<div>Any ideas on how to download this data? Does it make
sense to write a script to use the API to request the
datasets one by one to split it up?</div>
<div><br>
</div>
<div>Thanks for any help</div>
<div><br>
</div>
-- <br>
<div class="gmail_signature">
<div dir="ltr">Clint Coggins
<div><br>
</div>
</div>
</div>
</div>
</div>
</div>
<br>
<fieldset class="mimeAttachmentHeader"></fieldset>
<br>
<pre wrap="">_______________________________________________
API-users mailing list
<a class="moz-txt-link-abbreviated" href="mailto:API-users@lists.gbif.org">API-users@lists.gbif.org</a>
<a class="moz-txt-link-freetext" href="http://lists.gbif.org/mailman/listinfo/api-users">http://lists.gbif.org/mailman/listinfo/api-users</a>
</pre>
</blockquote>
<br>
</body>
</html>