path: root/python
Commit message | Author | Age | Files | Lines
* differentiate wayback-error from wayback-content-error | Bryan Newbold | 2020-10-21 | 7 | -18/+22
* persist PDF extraction in ingest pipeline | Bryan Newbold | 2020-10-20 | 1 | -4/+16
* ingest: add a cdx-error slowdown delay | Bryan Newbold | 2020-10-19 | 1 | -0/+3
* SPN CDX delay now seems reasonable; increase to 40sec to catch most | Bryan Newbold | 2020-10-19 | 1 | -1/+1
* ingest: fix old_failure datetime | Bryan Newbold | 2020-10-19 | 1 | -1/+1
* CDX: when retrying, do so every 3 seconds up to limit | Bryan Newbold | 2020-10-19 | 1 | -5/+9
* ingest: try SPNv2 for no-capture and old failures | Bryan Newbold | 2020-10-19 | 1 | -1/+5
* SPN: more verbose status logging | Bryan Newbold | 2020-10-19 | 1 | -0/+4
* ingest: disable soft404 and non-hit SPNv2 retries | Bryan Newbold | 2020-10-19 | 1 | -4/+5
* CDX: revert post-SPN CDX lookup retry to 10 seconds | Bryan Newbold | 2020-10-19 | 1 | -1/+1
* ingest: catch wayback-fail-after-SPN as separate status | Bryan Newbold | 2020-10-19 | 1 | -4/+17
* SPN: better log line when starting a request | Bryan Newbold | 2020-10-19 | 1 | -0/+1
* SPN: look for non-200 CDX responses | Bryan Newbold | 2020-10-19 | 1 | -1/+1
* SPN: better check for partial URLs returned | Bryan Newbold | 2020-10-19 | 1 | -2/+2
* CDX fetch: more permissive fuzzy/normalization check | Bryan Newbold | 2020-10-19 | 1 | -3/+9
* ingest: experimentally reduce CDX API retry delay | Bryan Newbold | 2020-10-17 | 1 | -1/+1
* ingest: handle cookieAbsent and partial SPNv2 URL response cases better | Bryan Newbold | 2020-10-17 | 1 | -0/+31
* and another sha1 | Bryan Newbold | 2020-10-13 | 1 | -0/+1
* another day, another bad PDF sha1 | Bryan Newbold | 2020-10-13 | 1 | -0/+1
* store no-capture URLs in terminal_url | Bryan Newbold | 2020-10-12 | 2 | -3/+3
* another bad PDF sha1 | Bryan Newbold | 2020-10-11 | 1 | -0/+1
* yet more bad sha1 PDFs to skip | Bryan Newbold | 2020-10-10 | 1 | -0/+20
* ingest: small bugfix to print pdfextract status on SUCCESS | Bryan Newbold | 2020-09-17 | 1 | -1/+1
* more bad PDF sha1 | Bryan Newbold | 2020-09-17 | 1 | -0/+2
* yet another broken PDF (sha1) | Bryan Newbold | 2020-09-16 | 1 | -0/+1
* html: handle JMIR URL pattern | Bryan Newbold | 2020-09-15 | 1 | -0/+6
* skip citation_pdf_url if it is a link loop | Bryan Newbold | 2020-09-14 | 1 | -2/+8
* html parse: add another generic fulltext pattern | Bryan Newbold | 2020-09-14 | 1 | -1/+10
* ingest: treat text/xml as XHTML in pdf ingest | Bryan Newbold | 2020-09-14 | 1 | -1/+1
* more bad SHA1 PDF | Bryan Newbold | 2020-09-02 | 1 | -0/+2
* another bad PDF sha1 | Bryan Newbold | 2020-09-01 | 1 | -0/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-24 | 1 | -0/+1
* html: handle embed with mangled 'src' attribute | Bryan Newbold | 2020-08-24 | 1 | -1/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-17 | 1 | -0/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-15 | 1 | -0/+1
* more bad sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+1
* yet more bad PDF sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+2
* more bad SHA1 | Bryan Newbold | 2020-08-13 | 1 | -0/+2
* yet another PDF sha1 | Bryan Newbold | 2020-08-12 | 1 | -0/+1
* another bad sha1; maybe the last for this batch? | Bryan Newbold | 2020-08-12 | 1 | -0/+1
* more bad sha1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* additional loginwall patterns | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* more SHA1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* Revert "ingest: reduce CDX retry_sleep to 3.0 sec (after SPN)" | Bryan Newbold | 2020-08-11 | 1 | -1/+1
* ingest: reduce CDX retry_sleep to 3.0 sec (after SPN) | Bryan Newbold | 2020-08-11 | 1 | -1/+1
* ingest: actually use force_get flag with SPN | Bryan Newbold | 2020-08-11 | 1 | -0/+13
* check for simple URL patterns that are usually paywalls or loginwalls | Bryan Newbold | 2020-08-11 | 2 | -0/+29
* ingest: check for URL blocklist and cookie URL patterns on every hop | Bryan Newbold | 2020-08-11 | 1 | -0/+13
* refactor: force_get -> force_simple_get | Bryan Newbold | 2020-08-11 | 2 | -8/+8
* html: extract eprints PDF url (eg, ub.uni-heidelberg.de) | Bryan Newbold | 2020-08-11 | 1 | -0/+2