path: root/python
Commit history (message, author, date, files changed, lines +/-), newest first:
...
* ingest: check for URL blocklist and cookie URL patterns on every hop (Bryan Newbold, 2020-08-11; 1 file, +13/-0)
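The per-hop check described in the commit above might look roughly like the following sketch. The list contents and function names here are hypothetical (though hkvalidate.perfdrive.com does appear elsewhere in this log); the real lists and statuses live in sandcrawler's ingest code.

```python
from typing import Optional
from urllib.parse import urlparse

# Hypothetical example entries; the actual blocklists are maintained in the repo.
DOMAIN_BLOCKLIST = ["hkvalidate.perfdrive.com"]
COOKIE_URL_PATTERNS = ["/cookieAbsent", "cookieSet=1"]

def check_url_blocklist(url: str) -> Optional[str]:
    """Return a failure status if the URL hits a blocklist, else None."""
    domain = urlparse(url).netloc.lower()
    for block in DOMAIN_BLOCKLIST:
        if block in domain:
            return "skip-domain"
    for pattern in COOKIE_URL_PATTERNS:
        if pattern in url:
            return "blocked-cookie"
    return None

def check_hops(hops: list) -> Optional[str]:
    """Apply the checks to every hop of a redirect chain, not just the first."""
    for hop in hops:
        status = check_url_blocklist(hop)
        if status:
            return status
    return None
```

The point of the commit is the second function: before this change, only the initial URL was checked, so a redirect into a blocked domain or cookie wall slipped through.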
* refactor: force_get -> force_simple_get (Bryan Newbold, 2020-08-11; 2 files, +8/-8)
  For clarity. The SPNv2 API hasn't changed, just changing the variable/parameter name.
* html: extract eprints PDF url (eg, ub.uni-heidelberg.de) (Bryan Newbold, 2020-08-11; 1 file, +2/-0)
* extract PDF urls for e-periodica.ch (Bryan Newbold, 2020-08-10; 1 file, +6/-0)
* more bad sha1 (Bryan Newbold, 2020-08-10; 1 file, +2/-0)
* another bad PDF sha1 (Bryan Newbold, 2020-08-10; 1 file, +1/-0)
* add hkvalidate.perfdrive.com to domain blocklist (Bryan Newbold, 2020-08-08; 1 file, +3/-0)
* fix tests passing str as HTML (Bryan Newbold, 2020-08-08; 1 file, +3/-3)
* add more HTML extraction tricks (Bryan Newbold, 2020-08-08; 1 file, +29/-2)
* rwth-aachen.de HTML extract, and a generic URL guess method (Bryan Newbold, 2020-08-08; 1 file, +15/-0)
* another PDF hash to skip (Bryan Newbold, 2020-08-08; 1 file, +1/-0)
* another sha1 (Bryan Newbold, 2020-08-07; 1 file, +1/-0)
* another sha1 (Bryan Newbold, 2020-08-06; 1 file, +1/-0)
* and more bad sha1 (Bryan Newbold, 2020-08-06; 1 file, +3/-0)
* more pdfextract skip sha1hex (Bryan Newbold, 2020-08-06; 1 file, +12/-9)
* more bad PDF sha1; print sha1 before poppler extract (Bryan Newbold, 2020-08-05; 1 file, +7/-0)
* spn2: skip js behavior (experiment) (Bryan Newbold, 2020-08-05; 1 file, +1/-0)
  Hoping this will increase crawling throughput with little-to-no impact on fidelity.
* SPN2: ensure not fetching outlinks (Bryan Newbold, 2020-08-05; 1 file, +1/-0)
* another bad PDF sha1 (Bryan Newbold, 2020-08-04; 1 file, +1/-0)
* another PDF sha1hex (Bryan Newbold, 2020-07-27; 1 file, +1/-0)
* yet another 'bad' PDF sha1hex (Bryan Newbold, 2020-07-27; 1 file, +1/-0)
* use new SPNv2 'skip_first_archive' param (Bryan Newbold, 2020-07-22; 1 file, +1/-0)
  For speed and efficiency.
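Several of the SPN2 commits above tweak parameters on the Save Page Now v2 (web.archive.org/save) request. A sketch of the request parameters those commits describe is below; `skip_first_archive` is verbatim from the log, and `capture_outlinks`, `js_behavior_timeout`, and `force_get` follow the public SPNv2 API, but treat the exact set as an assumption rather than a copy of sandcrawler's code.

```python
def spn2_params(url: str, force_simple_get: bool = False) -> dict:
    """Build SPNv2 POST parameters reflecting the behaviors in these commits."""
    params = {
        "url": url,
        "capture_outlinks": 0,     # "ensure not fetching outlinks"
        "js_behavior_timeout": 0,  # "skip js behavior (experiment)"
        "skip_first_archive": 1,   # "use new SPNv2 'skip_first_archive' param"
    }
    if force_simple_get:
        # The force_get -> force_simple_get rename was internal only;
        # the API parameter name stayed the same.
        params["force_get"] = 1
    return params
```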
* add more slow PDF hashes (Bryan Newbold, 2020-07-05; 1 file, +2/-0)
* add another bad PDF sha1hex (Bryan Newbold, 2020-07-02; 1 file, +1/-0)
* another bad PDF SHA-1 (Bryan Newbold, 2020-06-30; 1 file, +1/-0)
* hack to unblock thumbnail processing pipeline (Bryan Newbold, 2020-06-29; 1 file, +16/-0)
  Some PDFs taking 10+ minutes to process, causing kafka exceptions and consumer churn. Not sure why kafka json pusher timeouts are not catching these.
* customize timeout per worker; 120sec for pdf-extract (Bryan Newbold, 2020-06-29; 3 files, +4/-2)
  This is a stab-in-the-dark attempt to resolve long timeouts with this worker in prod.
* handle empty fetched blob (Bryan Newbold, 2020-06-27; 1 file, +6/-1)
* CDX KeyError as WaybackError from fetch worker (Bryan Newbold, 2020-06-26; 1 file, +1/-1)
* handle None 'metadata' field correctly (Bryan Newbold, 2020-06-26; 1 file, +1/-1)
* handle non-success case of parsing extract from JSON/dict (Bryan Newbold, 2020-06-26; 1 file, +1/-1)
* report revisit non-200 as a WaybackError (Bryan Newbold, 2020-06-26; 1 file, +7/-7)
* Revert "simpler handling of null PDF text pages" (Bryan Newbold, 2020-06-25; 1 file, +11/-4)
  This reverts commit 254f24ad6566c9d4b5814868911b604802847b58. Attribute was actually internal to text() call, not a None page.
* simpler handling of null PDF text pages (Bryan Newbold, 2020-06-25; 1 file, +4/-11)
* pdfextract: AttributeError with text extraction (Bryan Newbold, 2020-06-25; 1 file, +12/-4)
* catch UnicodeDecodeError in pdfextract (Bryan Newbold, 2020-06-25; 1 file, +10/-1)
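The UnicodeDecodeError fix above amounts to wrapping the per-page text call so a bad byte sequence degrades to empty text instead of crashing the worker. A minimal sketch, assuming a poppler-style `page.text()` method (the method name mirrors the python-poppler binding, but this is not sandcrawler's exact code):

```python
def extract_page_text_safely(page) -> str:
    """Extract text from one PDF page, tolerating undecodable bytes.

    Some PDFs contain text streams that blow up during decoding; returning
    an empty string keeps the rest of the document's pages usable.
    """
    try:
        return page.text()
    except UnicodeDecodeError:
        return ""
```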
* don't nest generic fetch errors under pdf_trio (Bryan Newbold, 2020-06-25; 1 file, +6/-12)
  This came from sloppy refactoring (and missing test coverage).
* pdfextract: handle too-large fulltext (Bryan Newbold, 2020-06-25; 1 file, +17/-0)
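"Too-large fulltext" handling typically means clamping extracted text that would exceed the downstream (e.g. Kafka message size) limit. The cap value, field names, and status string below are hypothetical illustrations of the pattern, not sandcrawler's actual values.

```python
# Hypothetical cap; in practice this would be tied to the Kafka max message size.
MAX_BODY_SIZE = 1_000_000

def clamp_fulltext(result: dict) -> dict:
    """Drop over-large extracted text and record a status, so the record
    still serializes and the pipeline keeps moving."""
    text = result.get("text") or ""
    if len(text.encode("utf-8")) > MAX_BODY_SIZE:
        return dict(result, text=None, status="text-too-large")
    return result
```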
* another bad/non PDF test; catch correct error (Bryan Newbold, 2020-06-25; 2 files, +6/-1)
  This test doesn't actually catch the error. I'm not sure why type checks don't discover the "LockedDocumentError not part of poppler" issue though.
* pdfextract: catch poppler.LockedDocumentError (Bryan Newbold, 2020-06-25; 1 file, +1/-1)
* pdfextract support in ingest worker (Bryan Newbold, 2020-06-25; 3 files, +66/-1)
* poppler: correct RGBA buffer endian-ness (Bryan Newbold, 2020-06-25; 2 files, +2/-2)
* pdfextract_tool fixes from prod usage (Bryan Newbold, 2020-06-25; 2 files, +6/-3)
* fix tests for page0_height/width (Bryan Newbold, 2020-06-25; 1 file, +2/-2)
* pdfextract: fix pdf_extra key names (Bryan Newbold, 2020-06-25; 1 file, +2/-2)
* ensure pdf_meta isn't passed an empty dict() (Bryan Newbold, 2020-06-25; 1 file, +4/-1)
* args.kafka_env refactor didn't happen (yet) (Bryan Newbold, 2020-06-25; 1 file, +2/-2)
* s3-only mode persist workers use different consumer group (Bryan Newbold, 2020-06-25; 1 file, +8/-2)
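The consumer-group change above keeps the s3-only persist workers from sharing Kafka offsets with the full (s3 + database) workers, since the two modes must consume the same topic independently. The group names below are hypothetical; the real ones live in `sandcrawler_worker.py`.

```python
def consumer_group_name(base: str = "persist-grobid", s3_only: bool = False) -> str:
    """Pick a Kafka consumer group: a distinct group per mode means each
    mode tracks its own offsets on the shared topic."""
    return f"{base}-s3only" if s3_only else base
```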
* changes from prod (Bryan Newbold, 2020-06-25; 2 files, +18/-4)
* sandcrawler_worker: remove duplicate run_pdf_extract() (Bryan Newbold, 2020-06-25; 1 file, +0/-29)