Commit message | Author | Age | Files | Lines
---|---|---|---|---
...
* more bad sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+1
* yet more bad PDF sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+2
* more bad SHA1 | Bryan Newbold | 2020-08-13 | 1 | -0/+2
* yet another PDF sha1 | Bryan Newbold | 2020-08-12 | 1 | -0/+1
* another bad sha1; maybe the last for this batch? | Bryan Newbold | 2020-08-12 | 1 | -0/+1
* more bad sha1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* additional loginwall patterns | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* more SHA1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* Revert "ingest: reduce CDX retry_sleep to 3.0 sec (after SPN)" | Bryan Newbold | 2020-08-11 | 1 | -1/+1
    This reverts commit 92bf9bc28ac0eacab2e06fa3b25b52f0882804c2. In practice, in prod, this resulted in much larger spn2-cdx-lookup-failure error rates.
* ingest: reduce CDX retry_sleep to 3.0 sec (after SPN) | Bryan Newbold | 2020-08-11 | 1 | -1/+1
    As we are moving towards just retrying entire ingest requests, we should probably just make this zero. But until then we should give SPN CDX a small chance to sync before giving up. This change is expected to improve overall throughput.
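A minimal sketch of the pattern these two commits tune, assuming hypothetical `cdx_client.lookup_best()` and `retry_sleep` names (not the actual sandcrawler code): after an SPNv2 capture, the CDX index can lag behind the WARC write, so the lookup retries with a sleep before giving up.

```python
import time

def cdx_lookup_after_spn(cdx_client, url, retries=3, retry_sleep=3.0):
    # After an SPNv2 capture, the CDX index may lag behind the WARC write;
    # give it a few chances to catch up before declaring failure.
    for _ in range(retries):
        record = cdx_client.lookup_best(url)  # hypothetical client method
        if record is not None:
            return record
        time.sleep(retry_sleep)
    # callers would map this to a spn2-cdx-lookup-failure status
    raise KeyError(f"CDX record not found after SPN capture: {url}")
```

The revert above suggests 3.0 seconds was too aggressive in production: captures often took longer to appear in CDX, so lookup failures spiked.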
* ingest: actually use force_get flag with SPN | Bryan Newbold | 2020-08-11 | 1 | -0/+13
    The code path was there, but wasn't actually being set for our most popular daily domains yet. Hopefully this will make a big difference in SPN throughput.
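A hedged sketch of the flag plumbing, with an invented domain list; `force_get` and `capture_outlinks` are SPNv2 API parameter names (see the refactor commit below), but everything else here is illustrative:

```python
# Domains where a plain HTTP GET is preferable to full browser rendering
# (hypothetical list, for illustration only).
SPN_SIMPLE_GET_DOMAINS = ["arxiv.org", "europepmc.org"]

def spn_save_params(url: str) -> dict:
    params = {"url": url, "capture_outlinks": 0}
    if any(domain in url for domain in SPN_SIMPLE_GET_DOMAINS):
        # SPNv2 API parameter: request a simple GET instead of a
        # headless-browser capture, which is much faster.
        params["force_get"] = 1
    return params
```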
* check for simple URL patterns that are usually paywalls or loginwalls | Bryan Newbold | 2020-08-11 | 2 | -0/+29
* ingest: check for URL blocklist and cookie URL patterns on every hop | Bryan Newbold | 2020-08-11 | 1 | -0/+13
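Taken together, these two commits mean every URL in a redirect chain, not just the first, is screened against static patterns before any further fetching. A minimal sketch, with invented pattern lists and status strings (the `hkvalidate.perfdrive.com` entry comes from the blocklist commit below):

```python
from typing import Optional

# Illustrative pattern lists; the real ones live in the ingest worker code.
BLOCKLIST_PATTERNS = ["hkvalidate.perfdrive.com"]
COOKIE_PATTERNS = ["/cookieAbsent", "cookieSet=1"]
LOGINWALL_PATTERNS = ["/login?", "/accounts/login", "showLogin"]

def check_hop_url(url: str) -> Optional[str]:
    """Return a terminal ingest status for this hop, or None to keep going."""
    if any(p in url for p in BLOCKLIST_PATTERNS):
        return "blocked-url"
    if any(p in url for p in COOKIE_PATTERNS):
        return "blocked-cookie"
    if any(p in url for p in LOGINWALL_PATTERNS):
        return "blocked-wall"
    return None
```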
* refactor: force_get -> force_simple_get | Bryan Newbold | 2020-08-11 | 2 | -8/+8
    For clarity. The SPNv2 API hasn't changed; only the local variable/parameter name has.
* html: extract eprints PDF url (eg, ub.uni-heidelberg.de) | Bryan Newbold | 2020-08-11 | 1 | -0/+2
* extract PDF urls for e-periodica.ch | Bryan Newbold | 2020-08-10 | 1 | -0/+6
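Both commits add site-specific rules for pulling a fulltext PDF link out of landing-page HTML. A generic sketch of the technique, assuming BeautifulSoup; the selectors are illustrative, not the actual sandcrawler rules:

```python
from typing import Optional
from bs4 import BeautifulSoup

def find_pdf_url(html: str) -> Optional[str]:
    soup = BeautifulSoup(html, "html.parser")
    # Common scholarly convention: a <meta> tag pointing at the fulltext PDF.
    meta = soup.find("meta", attrs={"name": "citation_pdf_url"})
    if meta and meta.get("content"):
        return meta["content"]
    # Fall back to site-specific patterns; EPrints repositories, for example,
    # often link the document file directly from the abstract page.
    link = soup.find("a", href=lambda h: h and h.endswith(".pdf"))
    return link["href"] if link else None
```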
* more bad sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+2
* another bad PDF sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+1
* add hkvalidate.perfdrive.com to domain blocklist | Bryan Newbold | 2020-08-08 | 1 | -0/+3
* fix tests passing str as HTML | Bryan Newbold | 2020-08-08 | 1 | -3/+3
* add more HTML extraction tricks | Bryan Newbold | 2020-08-08 | 1 | -2/+29
* rwth-aachen.de HTML extract, and a generic URL guess method | Bryan Newbold | 2020-08-08 | 1 | -0/+15
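The "generic URL guess method" presumably derives candidate PDF URLs from the landing-page URL itself when the HTML yields nothing. A hedged sketch of that idea; the specific transformations are invented for illustration:

```python
from typing import List

def guess_pdf_urls(landing_url: str) -> List[str]:
    """Generate candidate fulltext URLs from a landing-page URL."""
    candidates = []
    # OJS-style repositories often expose /article/view/<id> alongside a
    # parallel /article/download/<id> path.
    if "/article/view/" in landing_url:
        candidates.append(
            landing_url.replace("/article/view/", "/article/download/"))
    # Some sites serve the PDF at the same path with a .pdf suffix.
    if not landing_url.endswith(".pdf"):
        candidates.append(landing_url.rstrip("/") + ".pdf")
    return candidates
```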
* another PDF hash to skip | Bryan Newbold | 2020-08-08 | 1 | -0/+1
* another sha1 | Bryan Newbold | 2020-08-07 | 1 | -0/+1
* another sha1 | Bryan Newbold | 2020-08-06 | 1 | -0/+1
* and more bad sha1 | Bryan Newbold | 2020-08-06 | 1 | -0/+3
* more pdfextract skip sha1hex | Bryan Newbold | 2020-08-06 | 1 | -9/+12
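The recurring "bad sha1" commits throughout this log all feed a hardcoded skip list: PDFs known to hang or crash the extractor are excluded by their SHA-1 hex digest before any parsing happens. A minimal sketch, with a placeholder digest and an invented status string; the "print sha1 before poppler extract" commit below adds the logging step shown here:

```python
# SHA-1 hex digests of PDFs known to stall or crash poppler extraction
# (placeholder value; the real list is maintained in the worker source).
BAD_PDF_SHA1HEX = {
    "0000000000000000000000000000000000000000",
}

def process_pdf(sha1hex: str, blob: bytes) -> dict:
    if sha1hex in BAD_PDF_SHA1HEX:
        # Don't even hand the blob to poppler; return a terminal status.
        return {"status": "skip-pdf", "sha1hex": sha1hex}
    # Log before extracting, so a hang is attributable to a specific file.
    print(f"extracting sha1hex={sha1hex}")
    # ... poppler extraction would happen here ...
    return {"status": "success", "sha1hex": sha1hex}
```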
* grobid+pdftext missing catch-up commands | Bryan Newbold | 2020-08-05 | 5 | -10/+150
* commit stats from a couple weeks back | Bryan Newbold | 2020-08-05 | 1 | -0/+347
* sql stats commands updates | Bryan Newbold | 2020-08-05 | 1 | -2/+2
* MAG ingest follow-up notes | Bryan Newbold | 2020-08-05 | 1 | -0/+194
* more bad PDF sha1; print sha1 before poppler extract | Bryan Newbold | 2020-08-05 | 1 | -0/+7
* spn2: skip js behavior (experiment) | Bryan Newbold | 2020-08-05 | 1 | -0/+1
    Hoping this will increase crawling throughput with little-to-no impact on fidelity.
* SPN2: ensure not fetching outlinks | Bryan Newbold | 2020-08-05 | 1 | -0/+1
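These one-line changes, and the `skip_first_archive` commit further down, are all tweaks to the parameter dict sent to the SPNv2 save API. A sketch of a plausible request, assuming the `requests` library; the parameter names follow the public SPNv2 API, but this particular combination is a guess reconstructed from the commit messages:

```python
import requests

SPN2_ENDPOINT = "https://web.archive.org/save"

def spn2_save(url: str, access_key: str, secret_key: str) -> dict:
    params = {
        "url": url,
        "capture_outlinks": 0,     # "ensure not fetching outlinks"
        "js_behavior_timeout": 0,  # "skip js behavior (experiment)"
        "skip_first_archive": 1,   # from the 2020-07-22 commit below
    }
    resp = requests.post(
        SPN2_ENDPOINT,
        data=params,
        headers={
            "Accept": "application/json",
            "Authorization": f"LOW {access_key}:{secret_key}",
        },
    )
    resp.raise_for_status()
    return resp.json()  # includes a job id to poll for capture status
```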
* another bad PDF sha1 | Bryan Newbold | 2020-08-04 | 1 | -0/+1
* another PDF sha1hex | Bryan Newbold | 2020-07-27 | 1 | -0/+1
* yet another 'bad' PDF sha1hex | Bryan Newbold | 2020-07-27 | 1 | -0/+1
* use new SPNv2 'skip_first_archive' param | Bryan Newbold | 2020-07-22 | 1 | -0/+1
    For speed and efficiency.
* MAG 2020-07 ingest notes | Bryan Newbold | 2020-07-08 | 1 | -0/+159
* add more slow PDF hashes | Bryan Newbold | 2020-07-05 | 1 | -0/+2
* add another bad PDF sha1hex | Bryan Newbold | 2020-07-02 | 1 | -0/+1
* seaweedfs proposal: fix typos and wording | Martin Czygan | 2020-07-01 | 1 | -9/+11
* another bad PDF SHA-1 | Bryan Newbold | 2020-06-30 | 1 | -0/+1
* hack to unblock thumbnail processing pipeline | Bryan Newbold | 2020-06-29 | 1 | -0/+16
    Some PDFs are taking 10+ minutes to process, causing Kafka exceptions and consumer churn. Not sure why the Kafka JSON pusher timeouts are not catching these.
* customize timeout per worker; 120sec for pdf-extract | Bryan Newbold | 2020-06-29 | 3 | -2/+4
    This is a stab-in-the-dark attempt to resolve long timeouts with this worker in prod.
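One way to implement a per-worker timeout like this is a SIGALRM guard around each record, so a wedged PDF can't stall the consumer loop indefinitely. A sketch under that assumption (sandcrawler's actual mechanism may differ; `worker.process` is a hypothetical name):

```python
import signal

class ProcessTimeout(Exception):
    pass

def _on_alarm(signum, frame):
    raise ProcessTimeout()

def process_with_timeout(worker, record, timeout_sec=120):
    # 120 seconds for pdf-extract, per the commit above; other workers
    # would keep a smaller default. SIGALRM only works in the main thread.
    signal.signal(signal.SIGALRM, _on_alarm)
    signal.alarm(timeout_sec)
    try:
        return worker.process(record)
    finally:
        signal.alarm(0)  # always clear the pending alarm
```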
* handle empty fetched blob | Bryan Newbold | 2020-06-27 | 1 | -1/+6
* CDX KeyError as WaybackError from fetch worker | Bryan Newbold | 2020-06-26 | 1 | -1/+1
* handle None 'metadata' field correctly | Bryan Newbold | 2020-06-26 | 1 | -1/+1
* handle non-success case of parsing extract from JSON/dict | Bryan Newbold | 2020-06-26 | 1 | -1/+1
* report revisit non-200 as a WaybackError | Bryan Newbold | 2020-06-26 | 1 | -7/+7
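This last cluster hardens the wayback fetch path: empty blobs, missing CDX keys, and non-200 revisit records are all surfaced as explicit errors instead of crashing the worker. A combined sketch, with hypothetical `WaybackError`, client, and method names:

```python
class WaybackError(Exception):
    """Infrastructure-level wayback/CDX failure, distinct from bad input."""

def fetch_blob(cdx_client, wayback_client, url, datetime):
    try:
        row = cdx_client.lookup(url, datetime)  # hypothetical client method
    except KeyError as e:
        # A missing CDX entry is an infrastructure problem, not bad input.
        raise WaybackError(f"CDX lookup failed: {e}")
    if row.get("status_code") != 200:
        # Also covers revisit records that resolve to a non-200 original.
        raise WaybackError(f"non-200 status for {url}: {row.get('status_code')}")
    blob = wayback_client.fetch_body(row)  # hypothetical client method
    if not blob:
        raise WaybackError(f"fetched empty blob: {url}")
    return blob
```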