path: root/python/sandcrawler/ingest.py
Commit log (message, author, date; files changed, lines -deleted/+added):
* xml: catch parse error (Bryan Newbold, 2020-11-19; 1 file, -3/+8)
* ingest: small html_bibli typo (Bryan Newbold, 2020-11-08; 1 file, -1/+1)
* move some PDF URL extraction into declarative format (Bryan Newbold, 2020-11-08; 1 file, -9/+7)
* ingest: default to html_biblio for PDF URL extraction (Bryan Newbold, 2020-11-08; 1 file, -24/+17)
* ingest: shorten scope+platform keys; use html_biblio extraction for PDFs (Bryan Newbold, 2020-11-08; 1 file, -15/+35)
* ingest html: return better status based on sniffed scope (Bryan Newbold, 2020-11-08; 1 file, -9/+31)
* html: start improving scope detection (Bryan Newbold, 2020-11-08; 1 file, -1/+1)
* ingest: retain html_biblio through hops; all ingest types (Bryan Newbold, 2020-11-08; 1 file, -1/+13)
* ingest tool: flag for HTML quick mode (CDX-only) (Bryan Newbold, 2020-11-08; 1 file, -1/+2)
* html: try to detect and mark XHTML (vs. HTML or XML) (Bryan Newbold, 2020-11-08; 1 file, -2/+2)
* html: handle no-capture for sub-resources (Bryan Newbold, 2020-11-08; 1 file, -1/+5)
* ingest: fix null-body case (Bryan Newbold, 2020-11-08; 1 file, -0/+4)
    Broke this in an earlier refactor.
* html: catch and report exceptions at process_hit() stage (Bryan Newbold, 2020-11-06; 1 file, -4/+27)
* html: pdf and html extract similar to XML (Bryan Newbold, 2020-11-06; 1 file, -2/+25)
    Note that the primary PDF URL extraction path is a separate code path.
* html: refactors/tweaks from testing (Bryan Newbold, 2020-11-06; 1 file, -4/+5)
* html: actually publish HTML TEI-XML to body; fix dataflow through ingest a bit (Bryan Newbold, 2020-11-04; 1 file, -5/+25)
* initial implementation of HTML ingest in existing worker (Bryan Newbold, 2020-11-04; 1 file, -5/+50)
* small fixes from local testing for XML ingest (Bryan Newbold, 2020-11-03; 1 file, -1/+1)
* xml: re-encode XML docs into UTF-8 for persisting (Bryan Newbold, 2020-11-03; 1 file, -1/+3)
* ingest: handle publishing XML docs to kafka (Bryan Newbold, 2020-11-03; 1 file, -3/+21)
* basic support for XML ingest in worker (Bryan Newbold, 2020-11-03; 1 file, -23/+40)
* ingest: cleanups, typing, start generalizing to xml and html (Bryan Newbold, 2020-11-03; 1 file, -122/+118)
* ingest: tweak debug printing alignment (Bryan Newbold, 2020-11-03; 1 file, -3/+3)
* ingest: add more IA domains (Bryan Newbold, 2020-11-03; 1 file, -0/+2)
* ingest: skip JSTOR DOI prefixes (Bryan Newbold, 2020-10-23; 1 file, -0/+3)
* ingest: fix WaybackContentError typo (Bryan Newbold, 2020-10-21; 1 file, -1/+1)
* ingest: add a check for blocked-cookie before trying PDF url extraction (Bryan Newbold, 2020-10-21; 1 file, -0/+11)
* differentiate wayback-error from wayback-content-error (Bryan Newbold, 2020-10-21; 1 file, -1/+5)
    The motivation here is to distinguish errors due to the content stored in wayback (eg, in WARCs) from operational errors (eg, the wayback machine is down, or network failures/disruption).
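The error split described in that commit can be sketched as two exception classes plus a status mapper. The class names come from the commit subjects; the class bodies and the `classify()` helper below are illustrative assumptions, not the actual sandcrawler code.

```python
# Sketch of the wayback-error vs. wayback-content-error split.
# Class names follow the commit subjects; everything else is assumed.

class WaybackError(Exception):
    """Operational error: wayback machine down, network failure/disruption."""

class WaybackContentError(Exception):
    """Problem with the content stored in wayback (eg, in WARCs)."""

def classify(exc: Exception) -> str:
    # Map each exception class to a distinct ingest status string,
    # so operational failures can be retried while content problems
    # are recorded as permanent.
    if isinstance(exc, WaybackContentError):
        return "wayback-content-error"
    if isinstance(exc, WaybackError):
        return "wayback-error"
    return "other-error"
```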
* ingest: add a cdx-error slowdown delay (Bryan Newbold, 2020-10-19; 1 file, -0/+3)
* ingest: fix old_failure datetime (Bryan Newbold, 2020-10-19; 1 file, -1/+1)
* ingest: try SPNv2 for no-capture and old failures (Bryan Newbold, 2020-10-19; 1 file, -1/+5)
* ingest: disable soft404 and non-hit SPNv2 retries (Bryan Newbold, 2020-10-19; 1 file, -4/+5)
    This might have made sense at some point, but I had forgotten about this code path and it makes no sense now. It has been resulting in a great many extraneous SPN requests.
* store no-capture URLs in terminal_url (Bryan Newbold, 2020-10-12; 1 file, -2/+2)
* ingest: small bugfix to print pdfextract status on SUCCESS (Bryan Newbold, 2020-09-17; 1 file, -1/+1)
* ingest: treat text/xml as XHTML in pdf ingest (Bryan Newbold, 2020-09-14; 1 file, -1/+1)
* additional loginwall patterns (Bryan Newbold, 2020-08-11; 1 file, -0/+2)
* ingest: actually use force_get flag with SPN (Bryan Newbold, 2020-08-11; 1 file, -0/+13)
    The code path was there, but the flag wasn't actually being set for our most popular daily domains yet. Hopefully this will make a big difference in SPN throughput.
* check for simple URL patterns that are usually paywalls or loginwalls (Bryan Newbold, 2020-08-11; 1 file, -0/+11)
* ingest: check for URL blocklist and cookie URL patterns on every hop (Bryan Newbold, 2020-08-11; 1 file, -0/+13)
* refactor: force_get -> force_simple_get (Bryan Newbold, 2020-08-11; 1 file, -3/+3)
    For clarity. The SPNv2 API hasn't changed; just changing the variable/parameter name.
* add hkvalidate.perfdrive.com to domain blocklist (Bryan Newbold, 2020-08-08; 1 file, -0/+3)
* pdfextract support in ingest worker (Bryan Newbold, 2020-06-25; 1 file, -1/+35)
* workers: refactor to pass key to process() (Bryan Newbold, 2020-06-17; 1 file, -2/+2)
* ingest: don't 'want' non-PDF ingest (Bryan Newbold, 2020-04-30; 1 file, -0/+5)
* timeout message implementation for GROBID and ingest workers (Bryan Newbold, 2020-04-27; 1 file, -0/+9)
* ingest: block another large domain (and DOI prefix) (Bryan Newbold, 2020-03-27; 1 file, -0/+2)
* ingest: clean_url() in more places (Bryan Newbold, 2020-03-23; 1 file, -0/+1)
    Some 'cdx-error' results were due to URLs with ':' after the hostname or trailing newline ("\n") characters in the URL. This attempts to work around this category of error.
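A minimal sketch of a `clean_url()` covering exactly the two failure modes that commit message names (a stray ':' after the hostname, and trailing newline characters). This is a hypothetical stand-in built on the stdlib, not the real sandcrawler implementation.

```python
# Hypothetical clean_url() illustrating the fixes described above:
# trailing "\n" characters and a bare ':' after the hostname.
from urllib.parse import urlsplit, urlunsplit

def clean_url(url: str) -> str:
    url = url.strip()                   # drop trailing "\n" and whitespace
    parts = urlsplit(url)
    netloc = parts.netloc.rstrip(':')   # "example.com:" -> "example.com"
    return urlunsplit((parts.scheme, netloc, parts.path,
                       parts.query, parts.fragment))

print(clean_url("http://example.com:/paper.pdf\n"))
# -> http://example.com/paper.pdf
```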
* implement (unused) force_get flag for SPN2 (Bryan Newbold, 2020-03-18; 1 file, -1/+15)
    I hoped this feature would make it possible to crawl journals.lww.com PDFs, because the token URLs work with `wget`, but it still doesn't seem to work. Maybe because of user agent? Anyway, this feature might be useful for crawling efficiency, so adding to master.
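Passing that flag through to a Save Page Now v2 request can be sketched as below. The `force_get` parameter name is taken from the commit messages (which note the SPNv2 API parameter itself is unchanged); the helper function and the `capture_all` parameter are assumptions for illustration, not sandcrawler's actual request-building code.

```python
# Sketch of building SPNv2 POST parameters with an optional
# force_get flag (a plain GET capture instead of browser-based).
# Helper name and exact parameter set are assumptions.

def spn2_params(url: str, force_simple_get: bool = False) -> dict:
    params = {
        "url": url,
        "capture_all": 1,  # assumed: also capture embedded resources
    }
    if force_simple_get:
        params["force_get"] = 1
    return params
```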
* url cleaning (canonicalization) for ingest base_url (Bryan Newbold, 2020-03-10; 1 file, -2/+6)
    As mentioned in a comment, this first version does not re-write the URL in the `base_url` field. If we did so, then ingest_request rows would not SQL JOIN to ingest_file_result rows, which we wouldn't want. In the future, the behaviour should maybe be to refuse to process URLs that aren't clean (eg, if base_url != clean_url(base_url)) and return a 'bad-url' status or something. Then we would only accept clean URLs in both tables, and clear out all old/bad URLs with a cleanup script.
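The future behaviour that commit message proposes (refuse unclean URLs rather than rewriting them, preserving the JOIN between ingest_request and ingest_file_result) could look roughly like this. Both function names here are hypothetical stand-ins; `canonicalize()` abbreviates the document's `clean_url()` for a self-contained example.

```python
# Sketch of the proposed "refuse unclean URLs" check: return a
# 'bad-url' result instead of silently rewriting base_url.
# canonicalize() is a trivial stand-in for the real clean_url().

def canonicalize(url: str) -> str:
    return url.strip()

def check_base_url(request: dict):
    base_url = request["base_url"]
    if base_url != canonicalize(base_url):
        # refuse to process; base_url stays untouched so existing
        # rows still JOIN correctly
        return {"status": "bad-url", "base_url": base_url}
    return None  # clean: proceed with ingest
```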
* ingest: make content-decoding more robust (Bryan Newbold, 2020-03-03; 1 file, -1/+2)