path: root/python
Commit message | Author | Age | Files | Lines
...
* html parse: add another generic fulltext pattern | Bryan Newbold | 2020-09-14 | 1 | -1/+10
* ingest: treat text/xml as XHTML in pdf ingest | Bryan Newbold | 2020-09-14 | 1 | -1/+1
* more bad SHA1 PDF | Bryan Newbold | 2020-09-02 | 1 | -0/+2
* another bad PDF sha1 | Bryan Newbold | 2020-09-01 | 1 | -0/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-24 | 1 | -0/+1
* html: handle embed with mangled 'src' attribute | Bryan Newbold | 2020-08-24 | 1 | -1/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-17 | 1 | -0/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-15 | 1 | -0/+1
* more bad sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+1
* yet more bad PDF sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+2
* more bad SHA1 | Bryan Newbold | 2020-08-13 | 1 | -0/+2
* yet another PDF sha1 | Bryan Newbold | 2020-08-12 | 1 | -0/+1
* another bad sha1; maybe the last for this batch? | Bryan Newbold | 2020-08-12 | 1 | -0/+1
* more bad sha1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* additional loginwall patterns | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* more SHA1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* Revert "ingest: reduce CDX retry_sleep to 3.0 sec (after SPN)" | Bryan Newbold | 2020-08-11 | 1 | -1/+1
* ingest: reduce CDX retry_sleep to 3.0 sec (after SPN) | Bryan Newbold | 2020-08-11 | 1 | -1/+1
* ingest: actually use force_get flag with SPN | Bryan Newbold | 2020-08-11 | 1 | -0/+13
* check for simple URL patterns that are usually paywalls or loginwalls | Bryan Newbold | 2020-08-11 | 2 | -0/+29
* ingest: check for URL blocklist and cookie URL patterns on every hop | Bryan Newbold | 2020-08-11 | 1 | -0/+13
* refactor: force_get -> force_simple_get | Bryan Newbold | 2020-08-11 | 2 | -8/+8
* html: extract eprints PDF url (eg, ub.uni-heidelberg.de) | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* extract PDF urls for e-periodica.ch | Bryan Newbold | 2020-08-10 | 1 | -0/+6
* more bad sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+2
* another bad PDF sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+1
* add hkvalidate.perfdrive.com to domain blocklist | Bryan Newbold | 2020-08-08 | 1 | -0/+3
* fix tests passing str as HTML | Bryan Newbold | 2020-08-08 | 1 | -3/+3
* add more HTML extraction tricks | Bryan Newbold | 2020-08-08 | 1 | -2/+29
* rwth-aachen.de HTML extract, and a generic URL guess method | Bryan Newbold | 2020-08-08 | 1 | -0/+15
* another PDF hash to skip | Bryan Newbold | 2020-08-08 | 1 | -0/+1
* another sha1 | Bryan Newbold | 2020-08-07 | 1 | -0/+1
* another sha1 | Bryan Newbold | 2020-08-06 | 1 | -0/+1
* and more bad sha1 | Bryan Newbold | 2020-08-06 | 1 | -0/+3
* more pdfextract skip sha1hex | Bryan Newbold | 2020-08-06 | 1 | -9/+12
* more bad PDF sha1; print sha1 before poppler extract | Bryan Newbold | 2020-08-05 | 1 | -0/+7
* spn2: skip js behavior (experiment) | Bryan Newbold | 2020-08-05 | 1 | -0/+1
* SPN2: ensure not fetching outlinks | Bryan Newbold | 2020-08-05 | 1 | -0/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-04 | 1 | -0/+1
* another PDF sha1hex | Bryan Newbold | 2020-07-27 | 1 | -0/+1
* yet another 'bad' PDF sha1hex | Bryan Newbold | 2020-07-27 | 1 | -0/+1
* use new SPNv2 'skip_first_archive' param | Bryan Newbold | 2020-07-22 | 1 | -0/+1
* add more slow PDF hashes | Bryan Newbold | 2020-07-05 | 1 | -0/+2
* add another bad PDF sha1hex | Bryan Newbold | 2020-07-02 | 1 | -0/+1
* another bad PDF SHA-1 | Bryan Newbold | 2020-06-30 | 1 | -0/+1
* hack to unblock thumbnail processing pipeline | Bryan Newbold | 2020-06-29 | 1 | -0/+16
* customize timeout per worker; 120sec for pdf-extract | Bryan Newbold | 2020-06-29 | 3 | -2/+4
* handle empty fetched blob | Bryan Newbold | 2020-06-27 | 1 | -1/+6
* CDX KeyError as WaybackError from fetch worker | Bryan Newbold | 2020-06-26 | 1 | -1/+1
* handle None 'metadata' field correctly | Bryan Newbold | 2020-06-26 | 1 | -1/+1
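Several commits above add URL-pattern checks ("check for simple URL patterns that are usually paywalls or loginwalls", "additional loginwall patterns", "check for URL blocklist and cookie URL patterns on every hop"). A minimal sketch of what such a check might look like; this is an illustrative assumption, not the actual sandcrawler code, and all pattern strings here are hypothetical examples:

```python
# Hypothetical illustration of a paywall/loginwall URL check: match a
# candidate URL against a small list of substring patterns. The pattern
# strings below are examples, not the project's real blocklist.
BLOCKED_URL_PATTERNS = [
    "/cookieAbsent",      # "cookies required" bounce page
    "/accounts/login",    # generic login redirect
    "sso.redirect",       # single-sign-on bounce
]

def looks_like_loginwall(url: str) -> bool:
    """Return True if the URL contains any known paywall/loginwall pattern."""
    lowered = url.lower()
    return any(pat.lower() in lowered for pat in BLOCKED_URL_PATTERNS)
```

Per the "on every hop" commit, a check like this would presumably run against each URL in a redirect chain during ingest, so a crawl can be aborted early instead of archiving a login page as if it were fulltext.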