path: root/python/sandcrawler
Commit message (Author, Date; Files changed, Lines -/+)
...
* block isiarticles.com from future PDF crawls (Bryan Newbold, 2022-04-20; 1 file, -0/+2)
* ingest: drive.google.com ingest support (Bryan Newbold, 2022-04-04; 1 file, -0/+8)
* filesets: fix archive.org path naming (Bryan Newbold, 2022-03-29; 1 file, -7/+8)
* bugfix: sha1/md5 typo (Bryan Newbold, 2022-03-23; 1 file, -1/+1)

  Caught this prepping to ingest into fatcat. Derp!

* file ingest: don't 'backoff' on spn2 backoff error (Bryan Newbold, 2022-03-22; 2 files, -0/+8)

  The intent is to get through the daily ingest requests faster, so we can
  loop and retry if needed. A 200-second delay, usually resulting in a Kafka
  topic reshuffle, really slows things down. This will presumably result in a
  bunch of spn2-backoff status requests, but we can just retry those.
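The change described above presumably amounts to returning a retryable status instead of sleeping. A minimal sketch, assuming hypothetical function and status names (not the actual sandcrawler code):

```python
def handle_spn2_error(spn2_status: str) -> dict:
    """Map an SPNv2 error to an ingest result instead of sleeping.

    Previously the worker slept ~200 seconds on a backoff error, which
    stalled the Kafka consumer and could trigger a topic rebalance.
    Returning a 'spn2-backoff' status immediately lets a later pass retry.
    """
    if "backoff" in spn2_status:
        return {"status": "spn2-backoff", "retry": True}
    return {"status": "spn2-error", "error": spn2_status}
```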
* small lint/typo/fmt fixes (Bryan Newbold, 2022-02-24; 3 files, -5/+5)
* another bad PDF sha1 (Bryan Newbold, 2022-02-23; 1 file, -0/+1)
* ingest: fix mistakenly commented except block (?) (Bryan Newbold, 2022-02-18; 1 file, -4/+3)
* ingest: handle more fileset failure modes (Bryan Newbold, 2022-02-18; 2 files, -3/+30)
* yet another bad PDF sha1 (Bryan Newbold, 2022-02-08; 1 file, -0/+1)
* sandcrawler: additional extracts, mostly OJS (Bryan Newbold, 2022-01-13; 1 file, -1/+23)
* filesets: more figshare URL patterns (Bryan Newbold, 2022-01-13; 1 file, -0/+13)
* fileset ingest: better verification of resources (Bryan Newbold, 2022-01-13; 1 file, -7/+23)
* ingest: PDF pattern for integrityresjournals.org (Bryan Newbold, 2022-01-13; 1 file, -0/+8)
* null-body -> empty-blob (Bryan Newbold, 2022-01-13; 3 files, -4/+8)
* spn: handle blocked-url (etc) better (Bryan Newbold, 2022-01-11; 1 file, -0/+10)
* filesets: handle weird figshare link-only case better (Bryan Newbold, 2021-12-16; 1 file, -1/+4)
* lint ('not in') (Bryan Newbold, 2021-12-15; 1 file, -2/+2)
* more fileset ingest tweaks (Bryan Newbold, 2021-12-15; 2 files, -0/+7)
* fileset ingest: more requests timeouts, sessions (Bryan Newbold, 2021-12-15; 3 files, -37/+68)
* fileset ingest: create tmp subdirectories if needed (Bryan Newbold, 2021-12-15; 1 file, -0/+5)
* fileset ingest: configure IA session from env (Bryan Newbold, 2021-12-15; 1 file, -1/+6)

  Note that this doesn't currently work for `upload()`, and as a work-around
  I created `~/.config/ia.ini` manually on the worker VM.
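Configuring an archive.org session from the environment can be sketched as building an `internetarchive`-style config dict from environment variables; the variable names `IA_ACCESS_KEY` and `IA_SECRET_KEY` here are assumptions for illustration, not necessarily what sandcrawler actually reads:

```python
import os

def ia_config_from_env() -> dict:
    """Build an `internetarchive`-library-style config dict from env vars.

    The resulting dict can be passed to internetarchive.get_session(config=...)
    instead of relying on an on-disk ~/.config/ia.ini file.
    Env var names are hypothetical.
    """
    return {
        "s3": {
            "access": os.environ.get("IA_ACCESS_KEY", ""),
            "secret": os.environ.get("IA_SECRET_KEY", ""),
        }
    }
```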
* fileset ingest: actually use spn2 CLI flag (Bryan Newbold, 2021-12-11; 2 files, -3/+4)
* grobid: set a maximum file size (256 MByte) (Bryan Newbold, 2021-12-07; 1 file, -0/+8)
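The size cap above presumably amounts to a guard before sending a blob to GROBID; a sketch with illustrative function and status names (only the 256 MByte limit comes from the commit message):

```python
from typing import Optional

MAX_GROBID_BLOB_SIZE = 256 * 1024 * 1024  # 256 MByte, per the commit message

def check_blob_size(blob: bytes) -> Optional[dict]:
    """Return an error result if the blob is too large to send to GROBID,
    or None if it is OK to process."""
    if len(blob) > MAX_GROBID_BLOB_SIZE:
        return {
            "status": "blob-too-large",
            "error_msg": f"blob is {len(blob)} bytes, max is {MAX_GROBID_BLOB_SIZE}",
        }
    return None
```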
* codespell typos in python (comments) (Bryan Newbold, 2021-11-24; 4 files, -4/+4)
* html_meta: actual typo in code (CSS selector) caught by codespell (Bryan Newbold, 2021-11-24; 1 file, -1/+1)
* make fmt (Bryan Newbold, 2021-11-16; 1 file, -1/+1)
* SPNv2: make 'resources' optional (Bryan Newbold, 2021-11-16; 1 file, -1/+1)

  This field was always present previously. A recent change to the SPNv2 API
  borked it a bit; in theory it should still be present on new captures, but
  I'm not seeing it for some, so pushing this workaround. It seems like we
  don't actually use this field anyway, at least in the ingest pipeline.
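Treating the field as optional is essentially a one-line defensive default; a hypothetical sketch (the real sandcrawler parsing code may differ):

```python
def parse_spn2_resources(api_response: dict) -> list:
    """'resources' used to always be present in SPNv2 API responses; after a
    recent API change it is sometimes missing, so default to an empty list."""
    return api_response.get("resources") or []
```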
* grobid: handle XML parsing errors, and have them recorded in sandcrawler-db (Bryan Newbold, 2021-11-12; 1 file, -1/+5)
* ingest_file: more efficient GROBID metadata copy (Bryan Newbold, 2021-11-12; 1 file, -3/+3)
* ingest: start re-processing GROBID with newer version (Bryan Newbold, 2021-11-10; 1 file, -2/+6)
* simple persist worker/tool to backfill grobid_refs (Bryan Newbold, 2021-11-10; 1 file, -0/+40)
* grobid: extract more metadata in document TEI-XML (Bryan Newbold, 2021-11-10; 1 file, -0/+5)
* grobid: update 'TODO' comment based on review (Bryan Newbold, 2021-11-04; 1 file, -3/+0)
* crossref grobid refs: another error case (ReadTimeout) (Bryan Newbold, 2021-11-04; 2 files, -5/+11)

  With this last exception handled, was able to get through millions of rows
  of references, with only a few dozen errors (mostly invalid XML).

* db (postgrest): actually use an HTTP session (Bryan Newbold, 2021-11-04; 1 file, -12/+24)

  Not as important for GET as for POST, I think, but still best practice.

* grobid: use requests session (Bryan Newbold, 2021-11-04; 1 file, -3/+4)

  This should fix an embarrassing bug with exhausting local ports:

      requests.exceptions.ConnectionError: HTTPConnectionPool(host='wbgrp-svc096.us.archive.org', port=8070):
      Max retries exceeded with url: /api/processCitationList (Caused by
      NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7f8dfc24e250>:
      Failed to establish a new connection: [Errno 99] Cannot assign requested address'))
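The fix here is to create one `requests.Session` and reuse it for every POST, so urllib3 pools and reuses TCP connections instead of opening a new one per call (each of which ties up an ephemeral local port in TIME_WAIT, eventually producing the `[Errno 99]` above). A minimal sketch with a hypothetical client class; only the `/api/processCitationList` endpoint appears in the error message above:

```python
import requests

class GrobidClient:
    def __init__(self, host_url: str = "http://localhost:8070"):
        self.host_url = host_url
        # One session per client: connections are pooled and reused,
        # instead of a fresh TCP connection for every requests.post() call.
        self.http_session = requests.Session()

    def process_citation_list(self, citations: list) -> requests.Response:
        return self.http_session.post(
            self.host_url + "/api/processCitationList",
            data={"citations": citations},
            timeout=60.0,
        )
```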
* grobid crossref refs: try to handle HTTP 5xx and XML parse errors (Bryan Newbold, 2021-11-04; 2 files, -5/+33)
* grobid: handle weird whitespace unstructured from crossref (Bryan Newbold, 2021-11-04; 1 file, -1/+10)

  See also: https://github.com/kermitt2/grobid/issues/849
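Per the linked GROBID issue, unstructured citation strings containing newlines or runs of whitespace can be mis-parsed; a plausible normalization (illustrative, not the exact sandcrawler code) is to collapse all whitespace runs to single spaces before submission:

```python
def clean_unstructured(raw: str) -> str:
    """Collapse newlines, tabs, and runs of spaces into single spaces
    before passing an unstructured citation string to GROBID."""
    return " ".join(raw.split())
```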
* crossref persist: make GROBID ref parsing an option (not default) (Bryan Newbold, 2021-11-04; 1 file, -7/+16)
* glue, utils, and worker code for crossref and grobid_refs (Bryan Newbold, 2021-11-04; 2 files, -3/+151)
* iterated GROBID citation cleaning and processing (Bryan Newbold, 2021-11-04; 1 file, -27/+45)

  Switched to using just 'key'/'id' for downstream matching.

* grobid citations: first pass at cleaning unstructured (Bryan Newbold, 2021-11-04; 1 file, -2/+34)
* initial crossref-refs via GROBID helper routine (Bryan Newbold, 2021-11-04; 1 file, -4/+121)
* pdftrio client: use HTTP session for POSTs (Bryan Newbold, 2021-11-03; 1 file, -1/+1)
* workers: use HTTP session for archive.org fetches (Bryan Newbold, 2021-11-03; 1 file, -3/+3)
* IA (wayback): actually use an HTTP session for replay fetches (Bryan Newbold, 2021-11-03; 1 file, -2/+3)

  I am embarrassed this wasn't actually the case already! Looks like I had
  even instantiated a session but wasn't using it. Hopefully this change,
  which adds extra retries and better backoff behavior, will improve
  sandcrawler ingest throughput.

* remove grobid2json helper file, replace with grobid_tei_xml (Bryan Newbold, 2021-10-27; 2 files, -4/+5)
* small type annotation things from additional packages (Bryan Newbold, 2021-10-27; 2 files, -5/+14)
* make fmt (black 21.9b0) (Bryan Newbold, 2021-10-27; 18 files, -1840/+2332)