| Commit message | Author | Date | Files | Lines (-/+) |
|---|---|---|---|---|
| new dependencies for HTML metadata parsing | Bryan Newbold | 2020-10-27 | 2 | -926/+184 |
| start HTML metadata extraction code | Bryan Newbold | 2020-10-27 | 6 | -0/+2858 |
| ingest: skip JSTOR DOI prefixes | Bryan Newbold | 2020-10-23 | 1 | -0/+3 |
| Revert "reimplement worker timeout with multiprocessing". This reverts commit 031f51752e79dbdde47bbc95fe6b3600c9ec711a; it didn't actually work when testing, since the Kafka Producer object (and probably other objects) can't be pickled (see the pickling sketch after this table). | Bryan Newbold | 2020-10-22 | 1 | -17/+23 |
| ingest: decrease CDX timeout retries again | Bryan Newbold | 2020-10-22 | 1 | -1/+1 |
| make: coverage report and HTML in one go | Bryan Newbold | 2020-10-22 | 2 | -5/+2 |
| reimplement worker timeout with multiprocessing | Bryan Newbold | 2020-10-22 | 1 | -23/+17 |
| SQL: update weekly/quarterly ingest retry scripts | Bryan Newbold | 2020-10-21 | 5 | -18/+119 |
| sql stats: larger limits (more complete lists) | Bryan Newbold | 2020-10-21 | 1 | -8/+8 |
| ingest: fix WaybackContentError typo | Bryan Newbold | 2020-10-21 | 1 | -1/+1 |
| ingest: add a check for blocked-cookie before trying PDF url extraction | Bryan Newbold | 2020-10-21 | 1 | -0/+11 |
| differentiate wayback-error from wayback-content-error. The motivation here is to distinguish errors due to the current content in wayback (eg, in WARCs) from operational errors (eg, wayback machine is down, or network failures/disruption); see the error-class sketch after this table. | Bryan Newbold | 2020-10-21 | 7 | -18/+22 |
| persist PDF extraction in ingest pipeline. Ooof, didn't realize that this wasn't happening; explains a lot of missing thumbnails in scholar! | Bryan Newbold | 2020-10-20 | 1 | -4/+16 |
| ingest: add a cdx-error slowdown delay | Bryan Newbold | 2020-10-19 | 1 | -0/+3 |
| SPN CDX delay now seems reasonable; increase to 40sec to catch most | Bryan Newbold | 2020-10-19 | 1 | -1/+1 |
| ingest: fix old_failure datetime | Bryan Newbold | 2020-10-19 | 1 | -1/+1 |
| CDX: when retrying, do so every 3 seconds up to limit | Bryan Newbold | 2020-10-19 | 1 | -5/+9 |
| ingest: try SPNv2 for no-capture and old failures | Bryan Newbold | 2020-10-19 | 1 | -1/+5 |
| SPN: more verbose status logging | Bryan Newbold | 2020-10-19 | 1 | -0/+4 |
| ingest: disable soft404 and non-hit SPNv2 retries. This might have made sense at some point, but I had forgotten about this code path and it makes no sense now; it has been resulting in very many extraneous SPN requests. | Bryan Newbold | 2020-10-19 | 1 | -4/+5 |
| CDX: revert post-SPN CDX lookup retry to 10 seconds. Hoping to have many fewer SPN requests and issues, so willing to wait longer for each. | Bryan Newbold | 2020-10-19 | 1 | -1/+1 |
| ingest: catch wayback-fail-after-SPN as separate status | Bryan Newbold | 2020-10-19 | 1 | -4/+17 |
| SPN: better log line when starting a request | Bryan Newbold | 2020-10-19 | 1 | -0/+1 |
| SPN: look for non-200 CDX responses. Suspect that this has been the source of many `spn2-cdx-lookup-failure`. | Bryan Newbold | 2020-10-19 | 1 | -1/+1 |
| SPN: better check for partial URLs returned | Bryan Newbold | 2020-10-19 | 1 | -2/+2 |
| CDX fetch: more permissive fuzzy/normalization check. This might be the source of some `spn2-cdx-lookup-failure`. Wayback/CDX does this check via full-on SURT, with many more changes, and potentially we should be doing that here as well; see the URL-normalization sketch after this table. | Bryan Newbold | 2020-10-19 | 1 | -3/+9 |
| ingest: experimentally reduce CDX API retry delay. This code path is only working about 1 in 7 times in production. Going to try with a much shorter retry delay and see whether we get any success with that; considering also just disabling this attempt altogether and relying on retries after hours/days. | Bryan Newbold | 2020-10-17 | 1 | -1/+1 |
| notes on 2020-09 re-ingest passes | Bryan Newbold | 2020-10-17 | 1 | -0/+197 |
| OA DOIs: partial notes | Bryan Newbold | 2020-10-17 | 1 | -0/+218 |
| notes/status on daily ingest | Bryan Newbold | 2020-10-17 | 1 | -0/+193 |
| update SQL ingest monitoring commands to be past-month by default | Bryan Newbold | 2020-10-17 | 1 | -5/+5 |
| ingest: handle cookieAbsent and partial SPNv2 URL response cases better | Bryan Newbold | 2020-10-17 | 1 | -0/+31 |
| and another sha1 | Bryan Newbold | 2020-10-13 | 1 | -0/+1 |
| another day, another bad PDF sha1 | Bryan Newbold | 2020-10-13 | 1 | -0/+1 |
| store no-capture URLs in terminal_url | Bryan Newbold | 2020-10-12 | 3 | -3/+39 |
| start 2020-10 ingest notes | Bryan Newbold | 2020-10-11 | 1 | -0/+42 |
| update unpaywall 2020-04 notes | Bryan Newbold | 2020-10-11 | 1 | -0/+32 |
| OAI-PMH ingest progress timestamps | Bryan Newbold | 2020-10-11 | 1 | -0/+13 |
| another bad PDF sha1 | Bryan Newbold | 2020-10-11 | 1 | -0/+1 |
| yet more bad sha1 PDFs to skip | Bryan Newbold | 2020-10-10 | 1 | -0/+20 |
| notes on file_meta task (from August) | Bryan Newbold | 2020-10-01 | 1 | -0/+66 |
| dump_file_meta helper | Bryan Newbold | 2020-10-01 | 1 | -0/+12 |
| update README (public) | Bryan Newbold | 2020-10-01 | 1 | -17/+27 |
| ingest: small bugfix to print pdfextract status on SUCCESS | Bryan Newbold | 2020-09-17 | 1 | -1/+1 |
| more bad PDF sha1 | Bryan Newbold | 2020-09-17 | 1 | -0/+2 |
| yet another broken PDF (sha1) | Bryan Newbold | 2020-09-16 | 1 | -0/+1 |
| html: handle JMIR URL pattern | Bryan Newbold | 2020-09-15 | 1 | -0/+6 |
| updated sandcrawler-db stats | Bryan Newbold | 2020-09-15 | 2 | -6/+346 |
| skip citation_pdf_url if it is a link loop. This may help get around link-loop errors for a specific version of OJS; see the link-loop sketch after this table. | Bryan Newbold | 2020-09-14 | 1 | -2/+8 |
| html parse: add another generic fulltext pattern | Bryan Newbold | 2020-09-14 | 1 | -1/+10 |
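
The 2020-10-22 revert above notes that the multiprocessing-based worker timeout could not pickle the Kafka Producer object. The snippet below is a minimal illustration of that constraint, not sandcrawler code: `PushWorker`, `process`, and `record` are hypothetical names, and a plain socket stands in for the producer.

```python
# Illustration only: multiprocessing hands work to another process by pickling
# the callable and its state (always for Pool arguments, and for everything
# under the "spawn" start method). Objects wrapping live network connections,
# such as a Kafka Producer, refuse to pickle, so a timeout wrapper built this
# way fails before it can do anything useful.
import pickle
import socket


class PushWorker:
    """Hypothetical worker that, like the real one, holds a live connection."""

    def __init__(self):
        self.conn = socket.socket()  # stand-in for confluent_kafka.Producer(...)

    def process(self, record):
        return {"status": "success", "record": record}


if __name__ == "__main__":
    worker = PushWorker()
    try:
        pickle.dumps(worker)
    except (TypeError, pickle.PicklingError) as err:
        print("cannot hand this worker to a child process:", err)
```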
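The wayback-error / wayback-content-error split is about separating bad stored content from operational failures. Below is a hedged sketch of that kind of split; only the class and status names follow the commit messages, and the bodies are illustrative rather than the actual sandcrawler definitions.

```python
# Sketch of two error classes and how an ingest step might map them to
# distinct statuses. A content error will not be fixed by retrying later, so
# it should surface separately from transient operational failures.
class WaybackError(Exception):
    """Operational problem: wayback is down, network disruption, timeouts."""


class WaybackContentError(Exception):
    """The content currently stored in wayback/WARCs is itself bad or unexpected."""


def fetch_with_status(fetch_fn, url):
    try:
        return {"status": "success", "body": fetch_fn(url)}
    except WaybackContentError as e:
        return {"status": "wayback-content-error", "error_msg": str(e)}
    except WaybackError as e:
        return {"status": "wayback-error", "error_msg": str(e)}
```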
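The "more permissive fuzzy/normalization check" entry concerns CDX rows whose URL differs slightly from the requested URL even though both refer to the same capture; wayback itself compares full SURTs, which normalize far more aggressively. The helper below, `urls_roughly_match`, is a hypothetical illustration of such a looser comparison, not the actual sandcrawler check.

```python
# Hypothetical looser URL comparison: ignore scheme, "www." prefix, default
# ports, hostname case, and a trailing slash. Real SURT canonicalization does
# considerably more than this.
from urllib.parse import urlsplit


def urls_roughly_match(a: str, b: str) -> bool:
    def simplify(url: str) -> str:
        parts = urlsplit(url)
        host = parts.hostname or ""  # lowercased, port dropped
        if host.startswith("www."):
            host = host[4:]
        path = parts.path.rstrip("/") or "/"
        return host + path + ("?" + parts.query if parts.query else "")

    return simplify(a) == simplify(b)


assert urls_roughly_match(
    "http://www.example.com/article/123",
    "https://example.com:443/article/123/",
)
```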
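The citation_pdf_url link-loop entry skips the `<meta name="citation_pdf_url">` value when it just points back at the landing page itself. A rough sketch of that guard follows; `extract_pdf_url` and its arguments are hypothetical names, and the real sandcrawler HTML parsing differs.

```python
# Hedged sketch of the link-loop guard: if the citation_pdf_url <meta> tag
# points back at the HTML page itself (as one OJS version does), following it
# would just loop, so the extractor ignores it.
from bs4 import BeautifulSoup  # used here for illustration; any HTML parser works


def extract_pdf_url(html_body: bytes, html_url: str):
    soup = BeautifulSoup(html_body, "html.parser")
    meta = soup.find("meta", attrs={"name": "citation_pdf_url"})
    if not meta or not meta.get("content"):
        return None
    pdf_url = meta["content"]
    if pdf_url.rstrip("/") == html_url.rstrip("/"):
        return None  # link loop: the "PDF" URL is just the landing page again
    return pdf_url
```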