Commit message | Author | Age | Files | Lines
* ingest: reduce CDX retry_sleep to 3.0 sec (after SPN) | Bryan Newbold | 2020-08-11 | 1 | -1/+1
  As we are moving towards just retrying entire ingest requests, we should probably just make this zero. But until then, we should give the SPN CDX a small chance to sync before giving up. This change is expected to improve overall throughput.
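
In effect this is a short poll of the CDX API after an SPNv2 capture, since CDX indexing can lag briefly behind the capture itself. A minimal sketch of the pattern; the names (`lookup_cdx`, `CDX_RETRY_SLEEP`, `CDX_RETRIES`) are illustrative, not sandcrawler's actual identifiers:

```python
import time

CDX_RETRY_SLEEP = 3.0  # seconds, per this commit (reduced from a larger value)
CDX_RETRIES = 3        # illustrative retry count

def fetch_cdx_after_spn(lookup_cdx, url, datetime):
    """Poll the CDX API after an SPN capture; CDX indexing may lag briefly."""
    for _ in range(CDX_RETRIES):
        row = lookup_cdx(url, datetime)
        if row is not None:
            return row
        time.sleep(CDX_RETRY_SLEEP)
    return None
```
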
* ingest: actually use force_get flag with SPN | Bryan Newbold | 2020-08-11 | 1 | -0/+13
  The code path was there, but the flag wasn't actually being set for our most popular daily domains yet. Hopefully this will make a big difference in SPN throughput.
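
A hedged sketch of what setting that flag could look like: match request URLs against domains known to need a plain GET instead of a full browser capture. The helper name and the placeholder domain are assumptions (the force_simple_get naming follows the rename commit further down):

```python
# Domains that work better with a simple GET via SPNv2; placeholder entry only
FORCE_SIMPLE_GET_DOMAINS = [
    "www.example-publisher.com",
]

def should_force_simple_get(url: str) -> bool:
    """Return True if SPNv2 should do a plain GET instead of a browser capture."""
    return any(domain in url for domain in FORCE_SIMPLE_GET_DOMAINS)
```
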
* check for simple URL patterns that are usually paywalls or loginwalls | Bryan Newbold | 2020-08-11 | 2 | -0/+29
* ingest: check for URL blocklist and cookie URL patterns on every hop | Bryan Newbold | 2020-08-11 | 1 | -0/+13
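
The two commits above describe checking each redirect hop against a hard URL blocklist and against URL patterns that usually indicate a cookie or login wall. A sketch under assumed names; the blocklist entry is grounded in the hkvalidate.perfdrive.com commit further down this log, while the cookie patterns and status strings are illustrative:

```python
URL_BLOCKLIST = [
    "://hkvalidate.perfdrive.com/",
]
COOKIE_WALL_PATTERNS = [
    "/cookieAbsent",   # assumed example of a cookie-wall URL pattern
    "cookieSet=1",     # assumed
]

def check_hop_url(url: str):
    """Check a single redirect hop; return a terminal status, or None to continue."""
    if any(pat in url for pat in URL_BLOCKLIST):
        return "blocked-url"      # status names are assumptions
    if any(pat in url for pat in COOKIE_WALL_PATTERNS):
        return "blocked-cookie"
    return None
```
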
* refactor: force_get -> force_simple_get | Bryan Newbold | 2020-08-11 | 2 | -8/+8
  For clarity. The SPNv2 API hasn't changed; only the variable/parameter name is changing.
* html: extract eprints PDF url (eg, ub.uni-heidelberg.de) | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* extract PDF urls for e-periodica.ch | Bryan Newbold | 2020-08-10 | 1 | -0/+6
* more bad sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+2
* another bad PDF sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+1
* add hkvalidate.perfdrive.com to domain blocklist | Bryan Newbold | 2020-08-08 | 1 | -0/+3
* fix tests passing str as HTML | Bryan Newbold | 2020-08-08 | 1 | -3/+3
* add more HTML extraction tricks | Bryan Newbold | 2020-08-08 | 1 | -2/+29
* rwth-aachen.de HTML extract, and a generic URL guess method | Bryan Newbold | 2020-08-08 | 1 | -0/+15
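
The "generic URL guess method" mentioned above derives candidate fulltext PDF URLs from a landing-page URL. A rough sketch of the idea; the transformation rules shown are common platform conventions (e.g. OJS view/download), not necessarily the ones sandcrawler uses:

```python
def guess_pdf_urls(landing_url: str) -> list:
    """Generate candidate fulltext PDF URLs from a landing-page URL."""
    candidates = []
    # OJS-style: /article/view/123 often has a sibling /article/download/123
    if "/article/view/" in landing_url:
        candidates.append(landing_url.replace("/article/view/", "/article/download/"))
    # some platforms mirror an /html page at /pdf
    if landing_url.endswith("/html"):
        candidates.append(landing_url[: -len("html")] + "pdf")
    return candidates
```
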
* another PDF hash to skip | Bryan Newbold | 2020-08-08 | 1 | -0/+1
* another sha1 | Bryan Newbold | 2020-08-07 | 1 | -0/+1
* another sha1 | Bryan Newbold | 2020-08-06 | 1 | -0/+1
* and more bad sha1 | Bryan Newbold | 2020-08-06 | 1 | -0/+3
* more pdfextract skip sha1hex | Bryan Newbold | 2020-08-06 | 1 | -9/+12
* grobid+pdftext missing catch-up commands | Bryan Newbold | 2020-08-05 | 5 | -10/+150
* commit stats from a couple weeks back | Bryan Newbold | 2020-08-05 | 1 | -0/+347
* sql stats commands updates | Bryan Newbold | 2020-08-05 | 1 | -2/+2
* MAG ingest follow-up notes | Bryan Newbold | 2020-08-05 | 1 | -0/+194
* more bad PDF sha1; print sha1 before poppler extract | Bryan Newbold | 2020-08-05 | 1 | -0/+7
* spn2: skip js behavior (experiment) | Bryan Newbold | 2020-08-05 | 1 | -0/+1
  Hoping this will increase crawling throughput with little-to-no impact on fidelity.
* SPN2: ensure not fetching outlinks | Bryan Newbold | 2020-08-05 | 1 | -0/+1
* another bad PDF sha1 | Bryan Newbold | 2020-08-04 | 1 | -0/+1
* another PDF sha1hex | Bryan Newbold | 2020-07-27 | 1 | -0/+1
* yet another 'bad' PDF sha1hex | Bryan Newbold | 2020-07-27 | 1 | -0/+1
* use new SPNv2 'skip_first_archive' param | Bryan Newbold | 2020-07-22 | 1 | -0/+1
  For speed and efficiency.
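
Several of the SPN2 commits above tune SPNv2 request parameters. 'skip_first_archive' comes straight from this log; treating 'capture_outlinks' and 'js_behavior_timeout' as the form fields behind the outlinks and JS-behavior commits is an assumption. A rough capture request might look like this (endpoint per the public SPNv2 docs, credentials are placeholders):

```python
import requests

SPN_V2_ENDPOINT = "https://web.archive.org/save"

resp = requests.post(
    SPN_V2_ENDPOINT,
    headers={
        "Accept": "application/json",
        "Authorization": "LOW <access-key>:<secret>",  # placeholder credentials
    },
    data={
        "url": "https://example.com/paper/123",
        "capture_outlinks": 0,      # ensure outlinks are not fetched
        "js_behavior_timeout": 0,   # experiment: skip JS behaviors entirely
        "skip_first_archive": 1,    # skip prior-capture lookup, for speed
    },
)
resp.raise_for_status()
print(resp.json())
```
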
* MAG 2020-07 ingest notes | Bryan Newbold | 2020-07-08 | 1 | -0/+159
* add more slow PDF hashes | Bryan Newbold | 2020-07-05 | 1 | -0/+2
* add another bad PDF sha1hex | Bryan Newbold | 2020-07-02 | 1 | -0/+1
* seaweedfs proposal: fix typos and wording | Martin Czygan | 2020-07-01 | 1 | -9/+11
* another bad PDF SHA-1 | Bryan Newbold | 2020-06-30 | 1 | -0/+1
* hack to unblock thumbnail processing pipeline | Bryan Newbold | 2020-06-29 | 1 | -0/+16
  Some PDFs take 10+ minutes to process, causing Kafka exceptions and consumer churn. Not sure why the Kafka JSON pusher timeouts are not catching these.
* customize timeout per worker; 120sec for pdf-extract | Bryan Newbold | 2020-06-29 | 3 | -2/+4
  This is a stab-in-the-dark attempt to resolve long timeouts with this worker in prod.
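
One common way to implement the per-worker timeout described above is a SIGALRM-based guard around record processing (Unix, main thread only); whether sandcrawler does it exactly this way is an assumption, but the 120-second figure matches the commit message:

```python
import signal

class ProcessTimeoutError(Exception):
    pass

def run_with_timeout(func, record, timeout_seconds=120):
    """Run func(record), raising ProcessTimeoutError if it exceeds the limit."""
    def _handler(signum, frame):
        raise ProcessTimeoutError(f"processing took more than {timeout_seconds}s")
    old_handler = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(timeout_seconds)
    try:
        return func(record)
    finally:
        signal.alarm(0)                            # always clear the pending alarm
        signal.signal(signal.SIGALRM, old_handler)  # restore previous handler
```
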
* handle empty fetched blob | Bryan Newbold | 2020-06-27 | 1 | -1/+6
* CDX KeyError as WaybackError from fetch worker | Bryan Newbold | 2020-06-26 | 1 | -1/+1
* handle None 'metadata' field correctly | Bryan Newbold | 2020-06-26 | 1 | -1/+1
* handle non-success case of parsing extract from JSON/dict | Bryan Newbold | 2020-06-26 | 1 | -1/+1
* report revisit non-200 as a WaybackError | Bryan Newbold | 2020-06-26 | 1 | -7/+7
* Revert "simpler handling of null PDF text pages" | Bryan Newbold | 2020-06-25 | 1 | -4/+11
  This reverts commit 254f24ad6566c9d4b5814868911b604802847b58. The AttributeError was actually raised from inside the text() call, not from a None page.
* simpler handling of null PDF text pages | Bryan Newbold | 2020-06-25 | 1 | -11/+4
* pdfextract: AttributeError with text extraction | Bryan Newbold | 2020-06-25 | 1 | -4/+12
* catch UnicodeDecodeError in pdfextract | Bryan Newbold | 2020-06-25 | 1 | -1/+10
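
The pdfextract commits above harden page text extraction against exceptions raised from inside poppler. A simplified sketch of that defensive wrapping; `page` stands in for a poppler page object, and returning None as the fallback is illustrative:

```python
def extract_page_text(page):
    """Extract text from one poppler page, tolerating known failure modes."""
    try:
        return page.text()
    except AttributeError:
        # per the revert above, this can be raised from inside the text() call
        return None
    except UnicodeDecodeError:
        # some malformed PDFs yield text that fails to decode cleanly
        return None
```
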
* don't nest generic fetch errors under pdf_trio | Bryan Newbold | 2020-06-25 | 1 | -12/+6
  This came from sloppy refactoring (and missing test coverage).
* pdfextract: handle too-large fulltext | Bryan Newbold | 2020-06-25 | 1 | -0/+17
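
A sketch of a "too-large fulltext" guard: if the extracted text would exceed some maximum, record an error status instead of shipping an oversized message downstream (e.g. to Kafka). The size limit and status string here are assumptions:

```python
MAX_FULLTEXT_BYTES = 1_000_000  # illustrative limit, not sandcrawler's actual value

def check_fulltext_size(text: str):
    """Return an error dict if the fulltext is too large to pass downstream."""
    if len(text.encode("utf-8")) > MAX_FULLTEXT_BYTES:
        return {"status": "text-too-large", "error_msg": "fulltext exceeded limit"}
    return None
```
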
* another bad/non PDF test; catch correct error | Bryan Newbold | 2020-06-25 | 2 | -1/+6
  This test doesn't actually catch the error. I'm not sure why type checks don't discover the "LockedDocumentError not part of poppler" issue, though.
* pdfextract: catch poppler.LockedDocumentError | Bryan Newbold | 2020-06-25 | 1 | -1/+1
* commented special modes for dump_unextracted_pdf.sql | Bryan Newbold | 2020-06-25 | 1 | -1/+4