Commit message | Author | Age | Files | Lines
---|---|---|---|---
ingest: skip JSTOR DOI prefixes | Bryan Newbold | 2020-10-23 | 1 | -0/+3
ingest: fix WaybackContentError typo | Bryan Newbold | 2020-10-21 | 1 | -1/+1
ingest: add a check for blocked-cookie before trying PDF url extraction | Bryan Newbold | 2020-10-21 | 1 | -0/+11
differentiate wayback-error from wayback-content-error | Bryan Newbold | 2020-10-21 | 1 | -1/+5
*The motivation here is to distinguish errors due to the content stored in wayback (eg, in WARCs) from operational errors (eg, the wayback machine is down, or network failures/disruption). (See sketch 1 below the log.)* | | | |
ingest: add a cdx-error slowdown delay | Bryan Newbold | 2020-10-19 | 1 | -0/+3
ingest: fix old_failure datetime | Bryan Newbold | 2020-10-19 | 1 | -1/+1
ingest: try SPNv2 for no-capture and old failures | Bryan Newbold | 2020-10-19 | 1 | -1/+5
ingest: disable soft404 and non-hit SPNv2 retries | Bryan Newbold | 2020-10-19 | 1 | -4/+5
*This might have made sense at some point, but I had forgotten about this code path and it makes no sense now. It has been resulting in very many extraneous SPN requests.* | | | |
store no-capture URLs in terminal_url | Bryan Newbold | 2020-10-12 | 1 | -2/+2
ingest: small bugfix to print pdfextract status on SUCCESS | Bryan Newbold | 2020-09-17 | 1 | -1/+1
ingest: treat text/xml as XHTML in pdf ingest | Bryan Newbold | 2020-09-14 | 1 | -1/+1
additional loginwall patterns | Bryan Newbold | 2020-08-11 | 1 | -0/+2
ingest: actually use force_get flag with SPN | Bryan Newbold | 2020-08-11 | 1 | -0/+13
*The code path was there, but wasn't actually being triggered for our most popular daily domains yet. Hopefully this will make a big difference in SPN throughput. (See sketch 2 below the log.)* | | | |
check for simple URL patterns that are usually paywalls or loginwalls | Bryan Newbold | 2020-08-11 | 1 | -0/+11
ingest: check for URL blocklist and cookie URL patterns on every hop | Bryan Newbold | 2020-08-11 | 1 | -0/+13
refactor: force_get -> force_simple_get | Bryan Newbold | 2020-08-11 | 1 | -3/+3
*For clarity. The SPNv2 API hasn't changed; only the variable/parameter name has.* | | | |
add hkvalidate.perfdrive.com to domain blocklist | Bryan Newbold | 2020-08-08 | 1 | -0/+3
pdfextract support in ingest worker | Bryan Newbold | 2020-06-25 | 1 | -1/+35
workers: refactor to pass key to process() | Bryan Newbold | 2020-06-17 | 1 | -2/+2
ingest: don't 'want' non-PDF ingest | Bryan Newbold | 2020-04-30 | 1 | -0/+5
timeout message implementation for GROBID and ingest workers *(see sketch 3 below the log)* | Bryan Newbold | 2020-04-27 | 1 | -0/+9
ingest: block another large domain (and DOI prefix) | Bryan Newbold | 2020-03-27 | 1 | -0/+2
ingest: clean_url() in more places | Bryan Newbold | 2020-03-23 | 1 | -0/+1
*Some 'cdx-error' results were due to URLs with ':' after the hostname or trailing newline ("\n") characters in the URL. This attempts to work around this category of error. (See sketch 4 below the log.)* | | | |
implement (unused) force_get flag for SPN2 | Bryan Newbold | 2020-03-18 | 1 | -1/+15
*I hoped this feature would make it possible to crawl journals.lww.com PDFs, because the token URLs work with `wget`, but it still doesn't seem to work. Maybe because of the user agent? Anyways, this feature might be useful for crawling efficiency, so adding it to master. (See sketch 2 below the log.)* | | | |
url cleaning (canonicalization) for ingest base_url | Bryan Newbold | 2020-03-10 | 1 | -2/+6
*As mentioned in a code comment, this first version does not re-write the URL in the `base_url` field. If we did so, then ingest_request rows would not SQL JOIN to ingest_file_result rows, which we wouldn't want. In the future, the behaviour should maybe be to refuse to process URLs that aren't clean (eg, if base_url != clean_url(base_url)) and return a 'bad-url' status or something. Then we would only accept clean URLs in both tables, and clear out all old/bad URLs with a cleanup script. (See sketch 4 below the log.)* | | | |
ingest: make content-decoding more robust | Bryan Newbold | 2020-03-03 | 1 | -1/+2
make gzip content-encoding path more robust | Bryan Newbold | 2020-03-03 | 1 | -1/+10
ingest: crude content-encoding support | Bryan Newbold | 2020-03-02 | 1 | -1/+19
*This should perhaps be handled in the IA wrapper tool directly, instead of in ingest code. Or really, it is possibly a bug in the wayback python library or SPN. (See sketch 5 below the log.)* | | | |
ingest: add force_recrawl flag to skip historical wayback lookup | Bryan Newbold | 2020-03-02 | 1 | -3/+5
remove protocols.io octet-stream hack | Bryan Newbold | 2020-03-02 | 1 | -6/+2
ingest: narrow xhtml filter | Bryan Newbold | 2020-02-25 | 1 | -1/+1
ingest: include better terminal URL/status_code/dt | Bryan Newbold | 2020-02-22 | 1 | -0/+8
*We were getting a lot of "last hit" metadata in these columns.* | | | |
ingest: skip more non-pdf, non-paper domains | Bryan Newbold | 2020-02-22 | 1 | -0/+9
block springer page-one domain | Bryan Newbold | 2020-01-28 | 1 | -0/+3
re-enable figshare and zenodo crawling | Bryan Newbold | 2020-01-21 | 1 | -8/+0
*For daily imports.* | | | |
ingest: check for null-body before file_meta | Bryan Newbold | 2020-01-21 | 1 | -0/+3
*gen_file_metadata raises an assertion error if body is None (or falsy in general).* | | | |
add SKIP log line for skip-url-blocklist path | Bryan Newbold | 2020-01-17 | 1 | -0/+1
ingest: add URL blocklist feature | Bryan Newbold | 2020-01-17 | 1 | -4/+32
*And, temporarily, block zenodo and figshare. (See sketch 6 below the log.)* | | | |
clarify ingest result schema and semantics | Bryan Newbold | 2020-01-15 | 1 | -4/+11
pass through revisit_cdx | Bryan Newbold | 2020-01-15 | 1 | -0/+3
ingest: sketch out more of how 'existing' path would work | Bryan Newbold | 2020-01-14 | 1 | -8/+22
ingest: check existing GROBID; also push results to sink | Bryan Newbold | 2020-01-14 | 1 | -4/+22
filter out archive.org and web.archive.org (until implemented) | Bryan Newbold | 2020-01-14 | 1 | -1/+12
basic FTP ingest support; revisit record resolution | Bryan Newbold | 2020-01-14 | 1 | -1/+1
*Supporting revisits means more wayback hits (fewer crawls), so faster. But this is only partial support; we will also need to work through the sandcrawler db schema, etc. The current status should be safe to merge/use. FTP support works by treating an FTP hit as a 200.* | | | |
better print() output | Bryan Newbold | 2020-01-10 | 1 | -1/+1
fix trivial typo | Bryan Newbold | 2020-01-10 | 1 | -1/+1
hack/workaround for protocols.io octet PDFs | Bryan Newbold | 2020-01-10 | 1 | -2/+4
limit length of error messages | Bryan Newbold | 2020-01-10 | 1 | -4/+4
more general ingest tweaks and affordances | Bryan Newbold | 2020-01-10 | 1 | -10/+24
improve ingest robustness (for legacy requests) | Bryan Newbold | 2020-01-10 | 1 | -6/+12
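
The sketches below expand on a few mechanisms named in the log. All are hedged Python illustrations, not the project's actual code; any identifier not appearing in the log is an assumption.

Sketch 1: the wayback-error vs wayback-content-error split (2020-10-21). The commit separates problems with content stored in wayback (eg, in WARCs) from operational failures (wayback down, network disruption). A minimal sketch; only the names `WaybackError` and `WaybackContentError` come from the log, the `classify` helper is made up:

```python
class WaybackError(Exception):
    """Operational failure: wayback machine down, network failure/disruption.
    Usually transient, so a candidate for retry or slowdown."""

class WaybackContentError(Exception):
    """Problem with the content stored in wayback (eg, a bad WARC record).
    Retrying won't help until the URL is re-crawled."""

def classify(exc: Exception) -> str:
    # map the two exception classes to distinct ingest status strings
    if isinstance(exc, WaybackContentError):
        return "wayback-content-error"
    if isinstance(exc, WaybackError):
        return "wayback-error"
    raise exc

print(classify(WaybackContentError("truncated WARC record")))
# -> wayback-content-error
```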
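Sketch 2: the force_get flag for SPNv2 (2020-03-18, used 2020-08-11). The public Save Page Now v2 API accepts a `force_get` form parameter that asks SPN to do a plain HTTP GET instead of a full browser render, which can help with direct PDF URLs and throughput. Endpoint, auth header, and parameter names follow the public SPNv2 docs; the function shape and `force_simple_get` name (per the 2020-08-11 rename) are sketch-level assumptions:

```python
import requests

SPN2_ENDPOINT = "https://web.archive.org/save"

def spn2_save(url, access_key, secret_key, force_simple_get=False):
    # submit a capture request; force_get=1 requests a simple GET capture
    data = {"url": url, "capture_all": 1}
    if force_simple_get:
        data["force_get"] = 1
    resp = requests.post(
        SPN2_ENDPOINT,
        data=data,
        headers={
            "Accept": "application/json",
            "Authorization": f"LOW {access_key}:{secret_key}",
        },
    )
    resp.raise_for_status()
    return resp.json()  # includes a job id to poll for capture status
```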
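Sketch 3: timeout messages for GROBID and ingest workers (2020-04-27). One common way to bound a single task in a worker loop is a SIGALRM-based timeout; whether the project uses this exact mechanism is not stated in the log, so treat the whole sketch (names, the 300s default) as an assumption. On timeout, the worker can emit a result message with a "timeout" status rather than hanging:

```python
import signal

def run_with_timeout(func, *args, timeout_seconds=300):
    """Run func(*args), raising TimeoutError past timeout_seconds.
    SIGALRM-based: Unix only, main thread only."""
    def _handler(signum, frame):
        raise TimeoutError(f"timeout after {timeout_seconds} seconds")
    old_handler = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(timeout_seconds)
    try:
        return func(*args)
    finally:
        signal.alarm(0)                        # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)

# usage idea: catch TimeoutError around process() and push a
# {"status": "timeout"} result message to the sink
```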
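Sketch 4: clean_url() (2020-03-10, 2020-03-23). The log names two failure modes behind 'cdx-error' results: a stray ':' after the hostname and trailing newline characters. A stdlib-only sketch of a cleaner that handles exactly those two cases; the real implementation may use a full canonicalization library instead:

```python
from urllib.parse import urlsplit, urlunsplit

def clean_url(raw: str) -> str:
    # strip stray whitespace/newlines, then drop an empty-port colon
    # (eg, "http://example.com:/pdf" -> "http://example.com/pdf")
    url = raw.strip()
    parts = urlsplit(url)
    netloc = parts.netloc.rstrip(":")
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))

assert clean_url("http://example.com:/pdf\n") == "http://example.com/pdf"
```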
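Sketch 5: crude content-encoding support (2020-03-02/03). The log suggests some fetched bodies arrive still gzip-compressed (a quirk of the wayback python library or SPN). A sketch of the decode-with-fallback idea; the function name is made up:

```python
import gzip
from typing import Optional

def fix_gzip_body(body: bytes, content_encoding: Optional[str]) -> bytes:
    # decompress if the header claims gzip or the bytes start with the
    # gzip magic number; on failure, fall back to the raw bytes
    if content_encoding == "gzip" or body[:2] == b"\x1f\x8b":
        try:
            return gzip.decompress(body)
        except (OSError, EOFError):
            return body  # claimed gzip but not decodable; keep as-is
    return body
```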
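Sketch 6: URL blocklist and cookie-pattern checks (2020-01-17, run on every hop per 2020-08-11). The statuses 'skip-url-blocklist' and 'blocked-cookie' and the hkvalidate.perfdrive.com entry come from the log; the pattern-list contents and function shape here are illustrative assumptions:

```python
from typing import Optional
from urllib.parse import urlparse

DOMAIN_BLOCKLIST = ["hkvalidate.perfdrive.com"]   # real entry from the log
COOKIE_URL_PATTERNS = ["/cookieAbsent", "cookieSet=1"]  # made-up examples

def check_hop_url(url: str) -> Optional[str]:
    """Return a skip/failure status if this URL should not be fetched,
    else None. A check like this runs on every redirect hop."""
    hostname = urlparse(url).hostname or ""
    if any(hostname == d or hostname.endswith("." + d) for d in DOMAIN_BLOCKLIST):
        return "skip-url-blocklist"
    if any(p in url for p in COOKIE_URL_PATTERNS):
        return "blocked-cookie"
    return None
```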