Commit message | Author | Age | Files | Lines
---|---|---|---|---
... | ||||||
SPN CDX delay now seems reasonable; increase to 40sec to catch most | Bryan Newbold | 2020-10-19 | 1 | -1/+1
ingest: fix old_failure datetime | Bryan Newbold | 2020-10-19 | 1 | -1/+1
CDX: when retrying, do so every 3 seconds up to limit | Bryan Newbold | 2020-10-19 | 1 | -5/+9
ingest: try SPNv2 for no-capture and old failures | Bryan Newbold | 2020-10-19 | 1 | -1/+5
SPN: more verbose status logging | Bryan Newbold | 2020-10-19 | 1 | -0/+4
ingest: disable soft404 and non-hit SPNv2 retries | Bryan Newbold | 2020-10-19 | 1 | -4/+5
  This might have made sense at some point, but I had forgotten about this code path and it makes no sense now. It has been resulting in many extraneous SPN requests.
CDX: revert post-SPN CDX lookup retry to 10 seconds | Bryan Newbold | 2020-10-19 | 1 | -1/+1
  Hoping to have many fewer SPN requests and issues, so willing to wait longer for each.
ingest: catch wayback-fail-after-SPN as separate status | Bryan Newbold | 2020-10-19 | 1 | -4/+17
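A sketch of what catching that case separately could look like. The status string, exception type, and client method here are assumptions based on the commit subject, not sandcrawler's actual names:

```python
class WaybackError(Exception):
    pass

def fetch_body_after_spn(wayback_client, url: str, timestamp: str) -> dict:
    """After a successful SPN capture, reading the body back out of
    wayback can still fail; catch that and report it as its own status
    instead of lumping it in with generic SPN failures."""
    try:
        body = wayback_client.fetch_body(url, timestamp)  # hypothetical client method
    except WaybackError as e:
        return {"status": "spn2-wayback-error", "error_message": str(e)[:250]}
    return {"status": "success", "body": body}
```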
SPN: better log line when starting a request | Bryan Newbold | 2020-10-19 | 1 | -0/+1
SPN: look for non-200 CDX responses | Bryan Newbold | 2020-10-19 | 1 | -1/+1
  Suspect that this has been the source of many `spn2-cdx-lookup-failure` errors.
SPN: better check for partial URLs returned | Bryan Newbold | 2020-10-19 | 1 | -2/+2
CDX fetch: more permissive fuzzy/normalization check | Bryan Newbold | 2020-10-19 | 1 | -3/+9
  This might be the source of some `spn2-cdx-lookup-failure`. Wayback/CDX does this check via full-on SURT, with many more changes, and potentially we should be doing that here as well.
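As an illustration, a permissive comparison along these lines might look like the following sketch; the function name and the exact normalization steps are assumptions, not the actual sandcrawler code:

```python
def urls_fuzzy_match(a: str, b: str) -> bool:
    """Treat scheme, "www." prefix, and trailing-slash differences as
    equivalent, so a CDX row for a close variant of the requested URL
    still counts as a hit. Full SURT canonicalization (what wayback/CDX
    uses internally) applies many more rules than this."""
    def norm(url: str) -> str:
        url = url.lower()
        for prefix in ("http://", "https://"):
            if url.startswith(prefix):
                url = url[len(prefix):]
        if url.startswith("www."):
            url = url[4:]
        return url.rstrip("/")
    return norm(a) == norm(b)
```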
ingest: experimentally reduce CDX API retry delay | Bryan Newbold | 2020-10-17 | 1 | -1/+1
  This code path is only working about 1/7 times in production. Going to try a much shorter retry delay and see if we get any success with that. Also considering disabling this attempt altogether and relying on retries after hours/days.
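The retry delays being tuned across these commits (3 seconds, 10 seconds, up to a roughly 40 second budget) all apply to the same pattern: polling the CDX API until a fresh SPN capture becomes visible. A minimal sketch of that loop, with hypothetical function and parameter names:

```python
import time

import requests

CDX_API = "https://web.archive.org/cdx/search/cdx"

def cdx_lookup_after_spn(url: str, retry_sleep: float = 3.0, max_wait: float = 40.0):
    """Poll CDX until the fresh capture shows up, or give up after
    max_wait seconds (the caller would then record a
    spn2-cdx-lookup-failure status)."""
    waited = 0.0
    while True:
        resp = requests.get(CDX_API, params={"url": url, "output": "json"})
        resp.raise_for_status()
        rows = resp.json()
        captures = rows[1:]  # first row of CDX JSON output is the field header
        if captures:
            # field 1 is the 14-digit timestamp; return the newest capture
            return max(captures, key=lambda row: row[1])
        if waited >= max_wait:
            return None
        time.sleep(retry_sleep)
        waited += retry_sleep
```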
ingest: handle cookieAbsent and partial SPNv2 URL response cases better | Bryan Newbold | 2020-10-17 | 1 | -0/+31
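A hedged sketch of the two cases named in that commit; the JSON field names and status strings here are assumptions based on the commit message, not the actual sandcrawler code:

```python
from urllib.parse import urljoin

def interpret_spn2_status(status_json: dict, request_url: str):
    """Map an SPNv2 status response to an (ingest_status, terminal_url)
    pair, handling the cookieAbsent and partial-URL cases."""
    message = status_json.get("message", "")
    if "cookieAbsent" in message:
        # The site demanded a session cookie; treat as blocked rather
        # than as a transient SPN failure.
        return ("blocked-cookie", None)
    terminal = status_json.get("original_url") or request_url
    if terminal.startswith("/"):
        # SPN sometimes returns a partial (path-only) URL; resolve it
        # against the request URL instead of failing the ingest.
        terminal = urljoin(request_url, terminal)
    return ("success", terminal)
```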
and another sha1 | Bryan Newbold | 2020-10-13 | 1 | -0/+1
another day, another bad PDF sha1 | Bryan Newbold | 2020-10-13 | 1 | -0/+1
store no-capture URLs in terminal_url | Bryan Newbold | 2020-10-12 | 2 | -3/+3
another bad PDF sha1 | Bryan Newbold | 2020-10-11 | 1 | -0/+1
yet more bad sha1 PDFs to skip | Bryan Newbold | 2020-10-10 | 1 | -0/+20
ingest: small bugfix to print pdfextract status on SUCCESS | Bryan Newbold | 2020-09-17 | 1 | -1/+1
more bad PDF sha1 | Bryan Newbold | 2020-09-17 | 1 | -0/+2
yet another broken PDF (sha1) | Bryan Newbold | 2020-09-16 | 1 | -0/+1
html: handle JMIR URL pattern | Bryan Newbold | 2020-09-15 | 1 | -0/+6
skip citation_pdf_url if it is a link loop | Bryan Newbold | 2020-09-14 | 1 | -2/+8
  This may help get around link-loop errors for a specific version of OJS.
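The guard itself can be very small; a minimal illustration (function name hypothetical):

```python
def accept_citation_pdf_url(pdf_url: str, page_url: str) -> bool:
    """Some OJS landing pages publish a <meta name="citation_pdf_url">
    that points back at the landing page itself; following it would just
    loop. Only accept the link if it differs from the current page."""
    def norm(u: str) -> str:
        return u.split("#")[0].rstrip("/")
    return norm(pdf_url) != norm(page_url)
```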
html parse: add another generic fulltext pattern | Bryan Newbold | 2020-09-14 | 1 | -1/+10
ingest: treat text/xml as XHTML in pdf ingest | Bryan Newbold | 2020-09-14 | 1 | -1/+1
more bad SHA1 PDF | Bryan Newbold | 2020-09-02 | 1 | -0/+2
another bad PDF sha1 | Bryan Newbold | 2020-09-01 | 1 | -0/+1
another bad PDF sha1 | Bryan Newbold | 2020-08-24 | 1 | -0/+1
html: handle embed with mangled 'src' attribute | Bryan Newbold | 2020-08-24 | 1 | -1/+1
another bad PDF sha1 | Bryan Newbold | 2020-08-17 | 1 | -0/+1
another bad PDF sha1 | Bryan Newbold | 2020-08-15 | 1 | -0/+1
more bad sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+1
yet more bad PDF sha1 | Bryan Newbold | 2020-08-14 | 1 | -0/+2
more bad SHA1 | Bryan Newbold | 2020-08-13 | 1 | -0/+2
yet another PDF sha1 | Bryan Newbold | 2020-08-12 | 1 | -0/+1
another bad sha1; maybe the last for this batch? | Bryan Newbold | 2020-08-12 | 1 | -0/+1
more bad sha1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
additional loginwall patterns | Bryan Newbold | 2020-08-11 | 1 | -0/+2
more SHA1 | Bryan Newbold | 2020-08-11 | 1 | -0/+2
Revert "ingest: reduce CDX retry_sleep to 3.0 sec (after SPN)" | Bryan Newbold | 2020-08-11 | 1 | -1/+1
  This reverts commit 92bf9bc28ac0eacab2e06fa3b25b52f0882804c2. In practice, in prod, this resulted in much larger spn2-cdx-lookup-failure error rates.
ingest: reduce CDX retry_sleep to 3.0 sec (after SPN) | Bryan Newbold | 2020-08-11 | 1 | -1/+1
  As we are moving towards just retrying entire ingest requests, we should probably just make this zero. But until then we should give SPN CDX a small chance to sync before giving up. This change is expected to improve overall throughput.
ingest: actually use force_get flag with SPN | Bryan Newbold | 2020-08-11 | 1 | -0/+13
  The code path was there, but wasn't actually being triggered for our most popular daily domains yet. Hopefully this will make a big difference in SPN throughput.
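A sketch of what triggering force_get per-domain might look like. The SPNv2 API does accept a force_get parameter, but the domain list and request-building code here are purely illustrative:

```python
from urllib.parse import urlparse

# Hypothetical list of domains where a plain GET is sufficient and the
# full headless-browser render just slows SPN down.
FORCE_GET_DOMAINS = {"arxiv.org", "europepmc.org"}

def spn2_request_params(url: str) -> dict:
    """Build SPNv2 capture parameters, setting force_get=1 for domains
    where the simple-GET path is known to work."""
    params = {"url": url}
    host = urlparse(url).netloc.lower()
    if any(host == d or host.endswith("." + d) for d in FORCE_GET_DOMAINS):
        params["force_get"] = 1  # skip the browser-based capture
    return params
```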
check for simple URL patterns that are usually paywalls or loginwalls | Bryan Newbold | 2020-08-11 | 2 | -0/+29
ingest: check for URL blocklist and cookie URL patterns on every hop | Bryan Newbold | 2020-08-11 | 1 | -0/+13
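Together these two commits amount to a per-hop filter over the redirect chain. A minimal sketch, with illustrative pattern lists and status strings:

```python
# Illustrative patterns; the real lists live in the ingest code/config.
URL_BLOCKLIST = ["://localhost/", "://127.0.0.1/"]
COOKIE_URL_PATTERNS = ["/cookieAbsent", "cookieSet=1"]
LOGINWALL_PATTERNS = ["/login?", "/accounts/login"]

def check_hop(url: str):
    """Return a terminal ingest status if this redirect hop should stop
    the crawl, else None. Run on every hop, not just the first URL."""
    for pattern in URL_BLOCKLIST:
        if pattern in url:
            return "skip-url-blocklist"
    for pattern in COOKIE_URL_PATTERNS:
        if pattern in url:
            return "blocked-cookie"
    for pattern in LOGINWALL_PATTERNS:
        if pattern in url:
            return "blocked-wall"
    return None
```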
refactor: force_get -> force_simple_get | Bryan Newbold | 2020-08-11 | 2 | -8/+8
  For clarity. The SPNv2 API hasn't changed; only the variable/parameter name has.
html: extract eprints PDF url (eg, ub.uni-heidelberg.de) | Bryan Newbold | 2020-08-11 | 1 | -0/+2
extract PDF urls for e-periodica.ch | Bryan Newbold | 2020-08-10 | 1 | -0/+6
more bad sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+2
another bad PDF sha1 | Bryan Newbold | 2020-08-10 | 1 | -0/+1