* Revert "ingest: reduce CDX retry_sleep to 3.0 sec (after SPN)"Bryan Newbold2020-08-111-1/+1
| | | | | | | This reverts commit 92bf9bc28ac0eacab2e06fa3b25b52f0882804c2. In practice, in prod, this resulted in much larger spn2-cdx-lookup-failure error rates.
* ingest: reduce CDX retry_sleep to 3.0 sec (after SPN)Bryan Newbold2020-08-111-1/+1
| | | | | | | | As we are moving towards just retrying entire ingest requests, we should probably just make this zero. But until then we should give SPN CDX a small chance to sync before giving up. This change expected to improve overall throughput.
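For context on the two commits above: the retry_sleep value controls how long the ingest worker waits between CDX lookups after an SPNv2 capture, polling until the new capture shows up in the index. A minimal sketch of that pattern, with hypothetical helper names rather than the actual sandcrawler code:

    import time

    def cdx_lookup_with_retry(cdx_lookup, url, retries=3, retry_sleep=3.0):
        """Poll a CDX lookup until the fresh capture appears in the index.

        cdx_lookup is any callable returning a capture record or None;
        retry_sleep is the delay (in seconds) being tuned in the commits above.
        """
        for attempt in range(retries + 1):
            record = cdx_lookup(url)
            if record is not None:
                return record
            if attempt < retries:
                # Give the SPN-side CDX index a moment to sync before retrying.
                time.sleep(retry_sleep)
        # Callers treat None as an spn2-cdx-lookup-failure.
        return None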
* ingest: actually use force_get flag with SPN (Bryan Newbold, 2020-08-11; 1 file, -0/+13)
  The code path was there, but wasn't actually setting the flag for our most popular daily domains yet. Hopefully this will make a big difference in SPN throughput.
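The flag in question is a parameter on the SPNv2 save request, switched on for domains where a plain GET capture is good enough. A rough sketch of how such a request is shaped; the domain list and helper are illustrative, and the parameter details are assumptions about the SPNv2 API rather than confirmed here:

    import requests

    # Illustrative examples only; the real "simple GET" domain list is sandcrawler config.
    SIMPLE_GET_DOMAINS = {"www.sciencedirect.com", "link.springer.com"}

    def spn2_save(url, access_key, secret_key, force_simple_get=False):
        """Submit a capture request to SPNv2, optionally forcing a simple GET capture."""
        data = {"url": url, "capture_all": 0}
        if force_simple_get:
            # Ask SPN to use a plain HTTP GET rather than a full browser-based capture.
            data["force_get"] = 1
        resp = requests.post(
            "https://web.archive.org/save",
            data=data,
            headers={
                "Accept": "application/json",
                "Authorization": f"LOW {access_key}:{secret_key}",
            },
        )
        resp.raise_for_status()
        return resp.json()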
* check for simple URL patterns that are usually paywalls or loginwalls (Bryan Newbold, 2020-08-11; 2 files, -0/+29)
* ingest: check for URL blocklist and cookie URL patterns on every hop (Bryan Newbold, 2020-08-11; 1 file, -0/+13)
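Both of the two commits above boil down to simple pattern matching against each URL seen during an ingest, terminating early on known-bad patterns. A sketch of the general idea; the pattern lists below are examples, not the real blocklists, and the status strings are illustrative:

    # Example patterns only; the real lists live in the ingest worker configuration.
    DOMAIN_BLOCKLIST = ["hkvalidate.perfdrive.com/"]
    COOKIE_URL_PATTERNS = ["/cookieAbsent", "cookieSet=1"]
    LOGINWALL_PATTERNS = ["/login?", "/accounts/login", "showcaptcha"]

    def check_url_hop(url):
        """Return a terminal ingest status for blocked URLs, or None to keep crawling."""
        if any(domain in url for domain in DOMAIN_BLOCKLIST):
            return "skip-url-blocklist"
        if any(pattern in url for pattern in COOKIE_URL_PATTERNS):
            return "blocked-cookie"
        if any(pattern in url for pattern in LOGINWALL_PATTERNS):
            return "blocked-wall"
        return None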
* refactor: force_get -> force_simple_get (Bryan Newbold, 2020-08-11; 2 files, -8/+8)
  For clarity. The SPNv2 API hasn't changed, just the variable/parameter name.
* html: extract eprints PDF url (eg, ub.uni-heidelberg.de) (Bryan Newbold, 2020-08-11; 1 file, -0/+2)
* extract PDF urls for e-periodica.ch (Bryan Newbold, 2020-08-10; 1 file, -0/+6)
* more bad sha1 (Bryan Newbold, 2020-08-10; 1 file, -0/+2)
* another bad PDF sha1 (Bryan Newbold, 2020-08-10; 1 file, -0/+1)
* add hkvalidate.perfdrive.com to domain blocklist (Bryan Newbold, 2020-08-08; 1 file, -0/+3)
* fix tests passing str as HTML (Bryan Newbold, 2020-08-08; 1 file, -3/+3)
* add more HTML extraction tricks (Bryan Newbold, 2020-08-08; 1 file, -2/+29)
* rwth-aachen.de HTML extract, and a generic URL guess method (Bryan Newbold, 2020-08-08; 1 file, -0/+15)
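The "generic URL guess method" mentioned above refers to deriving likely fulltext PDF URLs from a landing-page URL by rewriting common path patterns. A hedged sketch of that technique; the specific rewrite rules below are illustrative, not the actual ones added:

    def guess_pdf_urls(landing_url):
        """Guess plausible PDF URLs from a landing-page URL via common rewrites."""
        guesses = []
        if "/article/view/" in landing_url:
            # OJS-style journals often serve the PDF from a parallel "download" path.
            guesses.append(landing_url.replace("/article/view/", "/article/download/"))
        if "/abstract" in landing_url:
            guesses.append(landing_url.replace("/abstract", "/pdf"))
        if landing_url.endswith("/html"):
            guesses.append(landing_url[: -len("/html")] + "/pdf")
        return guesses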
* another PDF hash to skip (Bryan Newbold, 2020-08-08; 1 file, -0/+1)
* another sha1 (Bryan Newbold, 2020-08-07; 1 file, -0/+1)
* another sha1 (Bryan Newbold, 2020-08-06; 1 file, -0/+1)
* and more bad sha1 (Bryan Newbold, 2020-08-06; 1 file, -0/+3)
* more pdfextract skip sha1hex (Bryan Newbold, 2020-08-06; 1 file, -9/+12)
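The recurring "bad sha1" commits in this log all extend a skip list keyed on the PDF's SHA-1 hash, checked before the blob is handed to the extractor so that known-problematic files don't hang or crash the worker. Roughly, as a sketch with illustrative constant and status names (actual hash entries omitted):

    import hashlib

    # Hex SHA-1 digests of PDFs known to hang or crash extraction (real entries omitted).
    BAD_PDF_SHA1HEX = set()

    def precheck_pdf(blob):
        """Hash the blob and short-circuit extraction for known-bad documents."""
        sha1hex = hashlib.sha1(blob).hexdigest()
        if sha1hex in BAD_PDF_SHA1HEX:
            return {"sha1hex": sha1hex, "status": "skip-pdf", "error_msg": "known-bad PDF"}
        return {"sha1hex": sha1hex, "status": "ok"}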
* grobid+pdftext missing catch-up commands (Bryan Newbold, 2020-08-05; 5 files, -10/+150)
* commit stats from a couple weeks back (Bryan Newbold, 2020-08-05; 1 file, -0/+347)
* sql stats commands updates (Bryan Newbold, 2020-08-05; 1 file, -2/+2)
* MAG ingest follow-up notes (Bryan Newbold, 2020-08-05; 1 file, -0/+194)
* more bad PDF sha1; print sha1 before poppler extract (Bryan Newbold, 2020-08-05; 1 file, -0/+7)
* spn2: skip js behavior (experiment) (Bryan Newbold, 2020-08-05; 1 file, -0/+1)
  Hoping this will increase crawling throughput with little-to-no impact on fidelity.
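Building on the spn2_save() sketch earlier, the throughput-oriented SPNv2 tweaks in this commit and the one below amount to extra request parameters. The parameter names here are assumptions about what the one-line changes add, shown for illustration only:

    # Additional SPNv2 request parameters (names are assumptions, for illustration only).
    spn2_extra_params = {
        "js_behavior_timeout": 0,  # skip in-page JavaScript behaviors (this commit)
        "capture_outlinks": 0,     # make sure outlinks are not fetched (next commit)
    }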
* SPN2: ensure not fetching outlinks (Bryan Newbold, 2020-08-05; 1 file, -0/+1)
* another bad PDF sha1 (Bryan Newbold, 2020-08-04; 1 file, -0/+1)
* another PDF sha1hex (Bryan Newbold, 2020-07-27; 1 file, -0/+1)
* yet another 'bad' PDF sha1hex (Bryan Newbold, 2020-07-27; 1 file, -0/+1)
* use new SPNv2 'skip_first_archive' param (Bryan Newbold, 2020-07-22; 1 file, -0/+1)
  For speed and efficiency.
* MAG 2020-07 ingest notes (Bryan Newbold, 2020-07-08; 1 file, -0/+159)
* add more slow PDF hashes (Bryan Newbold, 2020-07-05; 1 file, -0/+2)
* add another bad PDF sha1hex (Bryan Newbold, 2020-07-02; 1 file, -0/+1)
* seaweedfs proposal: fix typos and wording (Martin Czygan, 2020-07-01; 1 file, -9/+11)
* another bad PDF SHA-1 (Bryan Newbold, 2020-06-30; 1 file, -0/+1)
* hack to unblock thumbnail processing pipeline (Bryan Newbold, 2020-06-29; 1 file, -0/+16)
  Some PDFs are taking 10+ minutes to process, causing kafka exceptions and consumer churn. Not sure why the kafka json pusher timeouts are not catching these.
* customize timeout per worker; 120sec for pdf-extract (Bryan Newbold, 2020-06-29; 3 files, -2/+4)
  This is a stab-in-the-dark attempt to resolve long timeouts with this worker in prod.
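The per-worker timeout from the commit above is essentially a constructor parameter that the PDF extraction worker overrides, so a single stuck PDF cannot stall the Kafka consumer indefinitely. A minimal sketch of that shape, assuming a SIGALRM-style guard; the class names and defaults are illustrative, not the actual sandcrawler worker classes:

    import signal

    class WorkerTimeoutError(Exception):
        pass

    class BaseWorker:
        """Illustrative worker base class with a configurable processing timeout."""

        def __init__(self, timeout_sec=30):
            self.timeout_sec = timeout_sec

        def process(self, record):
            raise NotImplementedError

        def push_record_timeout(self, record):
            """Run process() under SIGALRM so one stuck record can't stall the consumer.

            SIGALRM only works in the main thread on Unix; that is assumed here.
            """
            def handler(signum, frame):
                raise WorkerTimeoutError(f"processing exceeded {self.timeout_sec} seconds")

            signal.signal(signal.SIGALRM, handler)
            signal.alarm(self.timeout_sec)
            try:
                return self.process(record)
            finally:
                signal.alarm(0)

    class PdfExtractWorker(BaseWorker):
        def __init__(self):
            # PDF text extraction can be slow; give it the longer 120-second budget.
            super().__init__(timeout_sec=120)

        def process(self, record):
            return {"status": "success"}  # placeholder for actual extraction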
* handle empty fetched blob (Bryan Newbold, 2020-06-27; 1 file, -1/+6)
* CDX KeyError as WaybackError from fetch worker (Bryan Newbold, 2020-06-26; 1 file, -1/+1)
* handle None 'metadata' field correctly (Bryan Newbold, 2020-06-26; 1 file, -1/+1)
* handle non-success case of parsing extract from JSON/dict (Bryan Newbold, 2020-06-26; 1 file, -1/+1)
* report revisit non-200 as a WaybackError (Bryan Newbold, 2020-06-26; 1 file, -7/+7)
* Revert "simpler handling of null PDF text pages" (Bryan Newbold, 2020-06-25; 1 file, -4/+11)
  This reverts commit 254f24ad6566c9d4b5814868911b604802847b58. The attribute error was actually internal to the text() call, not a None page.
* simpler handling of null PDF text pages (Bryan Newbold, 2020-06-25; 1 file, -11/+4)
* pdfextract: AttributeError with text extraction (Bryan Newbold, 2020-06-25; 1 file, -4/+12)
* catch UnicodeDecodeError in pdfextract (Bryan Newbold, 2020-06-25; 1 file, -1/+10)
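The AttributeError and UnicodeDecodeError fixes above follow the same pattern: wrap the per-page text extraction in error handling and turn failures into an error status instead of a crashed worker. A rough sketch, assuming a python-poppler-style document object; the status strings and exact calls are illustrative, not the actual extraction code:

    def extract_fulltext(doc):
        """Concatenate per-page text, converting per-page failures into an error status.

        doc is assumed to expose a page count (doc.pages) and create_page(i)
        returning objects with a .text() method, loosely following the
        python-poppler bindings; this is a sketch only.
        """
        full_text = ""
        for i in range(doc.pages):
            try:
                page = doc.create_page(i)
                full_text += page.text()
            except AttributeError as e:
                # Some malformed pages blow up inside the text() call itself
                # (see the revert note above).
                return {"status": "parse-error", "error_msg": str(e)}
            except UnicodeDecodeError as e:
                # Occasionally extracted text can't be decoded cleanly.
                return {"status": "parse-error", "error_msg": str(e)}
        return {"status": "success", "text": full_text}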
* don't nest generic fetch errors under pdf_trio (Bryan Newbold, 2020-06-25; 1 file, -12/+6)
  This came from sloppy refactoring (and missing test coverage).
* pdfextract: handle too-large fulltext (Bryan Newbold, 2020-06-25; 1 file, -0/+17)
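The too-large fulltext guard caps the size of extracted text so that oversized documents don't blow up downstream messages (for example on Kafka). A sketch of the check; the byte limit and status strings are assumptions:

    # Assumed cap on extracted text size; the actual limit may differ.
    MAX_FULLTEXT_BYTES = 1_000_000

    def clamp_fulltext(text):
        """Return the text with a status, truncating when it exceeds the cap."""
        encoded = text.encode("utf-8")
        if len(encoded) <= MAX_FULLTEXT_BYTES:
            return {"status": "success", "text": text}
        # Keep a safely-decoded truncated prefix rather than failing the whole record.
        truncated = encoded[:MAX_FULLTEXT_BYTES].decode("utf-8", errors="ignore")
        return {"status": "text-too-large", "text": truncated}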
* another bad/non-PDF test; catch correct error (Bryan Newbold, 2020-06-25; 2 files, -1/+6)
  This test doesn't actually catch the error. I'm not sure why type checks don't discover the "LockedDocumentError not part of poppler" issue, though.
* pdfextract: catch poppler.LockedDocumentError (Bryan Newbold, 2020-06-25; 1 file, -1/+1)