path: root/python/sandcrawler/ingest.py
Commit log, most recent first; each entry: message (author, date, files changed, lines -removed/+added)
* pdfextract support in ingest worker (Bryan Newbold, 2020-06-25, 1 file, -1/+35)
* workers: refactor to pass key to process() (Bryan Newbold, 2020-06-17, 1 file, -2/+2)
* ingest: don't 'want' non-PDF ingest (Bryan Newbold, 2020-04-30, 1 file, -0/+5)
* timeout message implementation for GROBID and ingest workers (Bryan Newbold, 2020-04-27, 1 file, -0/+9)
* ingest: block another large domain (and DOI prefix) (Bryan Newbold, 2020-03-27, 1 file, -0/+2)
* ingest: clean_url() in more places (Bryan Newbold, 2020-03-23, 1 file, -0/+1)
  Some 'cdx-error' results were due to URLs with ':' after the hostname or trailing newline ("\n") characters in the URL. This attempts to work around this category of error.
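  The cleanup described in that commit could look something like the following minimal sketch. This is hypothetical illustration only (the helper name `clean_url` comes from the commit message, but the real sandcrawler implementation may differ); it handles exactly the two cases mentioned: trailing whitespace/newlines, and a bare ':' after the hostname.

```python
# Hypothetical sketch of the kind of cleanup clean_url() performs;
# not the actual sandcrawler implementation.
from urllib.parse import urlsplit, urlunsplit

def clean_url(url: str) -> str:
    """Strip surrounding whitespace/newlines and drop a bare ':' after the host."""
    url = url.strip()
    parts = urlsplit(url)
    netloc = parts.netloc
    # "example.com:" (a colon with no port number) breaks downstream CDX lookups
    if netloc.endswith(":"):
        netloc = netloc[:-1]
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```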
* implement (unused) force_get flag for SPN2 (Bryan Newbold, 2020-03-18, 1 file, -1/+15)
  I hoped this feature would make it possible to crawl journals.lww.com PDFs, because the token URLs work with `wget`, but it still doesn't seem to work. Maybe because of user agent? Anyways, this feature might be useful for crawling efficiency, so adding to master.
* url cleaning (canonicalization) for ingest base_url (Bryan Newbold, 2020-03-10, 1 file, -2/+6)
  As mentioned in the code comment, this first version does not re-write the URL in the `base_url` field. If we did so, then ingest_request rows would not SQL JOIN to ingest_file_result rows, which we wouldn't want. In the future, the behaviour should maybe be to refuse to process URLs that aren't clean (eg, if base_url != clean_url(base_url)) and return a 'bad-url' status or something. Then we would only accept clean URLs in both tables, and clear out all old/bad URLs with a cleanup script.
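  The future behaviour proposed in that commit message (refuse unclean URLs rather than rewriting them) might be sketched like this. This is the commit's proposal, not its implementation; the stand-in `clean_url` here just strips whitespace, and the result-dict shape is an assumption.

```python
# Sketch of the *proposed* guard: reject requests whose base_url is not
# already clean, instead of rewriting it (which would break the SQL JOIN
# between ingest_request and ingest_file_result rows).

def clean_url(url):
    # stand-in for the real helper: just strips surrounding whitespace
    return url.strip()

def check_base_url(base_url):
    """Return a 'bad-url' result dict for unclean URLs, else None (accept)."""
    if base_url != clean_url(base_url):
        return {"status": "bad-url", "base_url": base_url}
    return None
```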
* ingest: make content-decoding more robust (Bryan Newbold, 2020-03-03, 1 file, -1/+2)
* make gzip content-encoding path more robust (Bryan Newbold, 2020-03-03, 1 file, -1/+10)
* ingest: crude content-encoding support (Bryan Newbold, 2020-03-02, 1 file, -1/+19)
  This perhaps should be handled in IA wrapper tool directly, instead of in ingest code. Or really, possibly a bug in wayback python library or SPN?
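  "Crude content-encoding support" made "more robust" might amount to something like the sketch below: decode gzip-encoded bodies, and fall back to the raw bytes when a server mislabels the encoding. Function name and behavior are assumptions for illustration, not the actual ingest code.

```python
# Hedged sketch of gzip Content-Encoding handling; the real ingest
# worker (or the IA wrapper / wayback library) may do this differently.
import gzip

def decode_body(body: bytes, content_encoding: str = "") -> bytes:
    """Transparently decode a gzip-encoded body; pass everything else through."""
    if content_encoding.lower() == "gzip":
        try:
            return gzip.decompress(body)
        except OSError:
            # robustness: some responses claim gzip but aren't; keep raw bytes
            return body
    return body
```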
* ingest: add force_recrawl flag to skip historical wayback lookup (Bryan Newbold, 2020-03-02, 1 file, -3/+5)
* remove protocols.io octet-stream hack (Bryan Newbold, 2020-03-02, 1 file, -6/+2)
* ingest: narrow xhtml filter (Bryan Newbold, 2020-02-25, 1 file, -1/+1)
* ingest: include better terminal URL/status_code/dt (Bryan Newbold, 2020-02-22, 1 file, -0/+8)
  Was getting a lot of "last hit" metadata for these columns.
* ingest: skip more non-pdf, non-paper domains (Bryan Newbold, 2020-02-22, 1 file, -0/+9)
* block springer page-one domain (Bryan Newbold, 2020-01-28, 1 file, -0/+3)
* re-enable figshare and zenodo crawling (Bryan Newbold, 2020-01-21, 1 file, -8/+0)
  For daily imports.
* ingest: check for null-body before file_meta (Bryan Newbold, 2020-01-21, 1 file, -0/+3)
  gen_file_metadata raises an assert error if body is None (or false-y in general).
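  The guard described in that commit could be sketched as below. `gen_file_metadata` is named in the commit message; this minimal hash-computing version of it, and the surrounding result shape, are assumptions for illustration.

```python
# Sketch: check for a null/empty body *before* calling gen_file_metadata(),
# which asserts on false-y input. Minimal hypothetical versions of both.
import hashlib

def gen_file_metadata(body):
    assert body, "gen_file_metadata requires a non-empty body"
    return {"sha1hex": hashlib.sha1(body).hexdigest(), "size_bytes": len(body)}

def ingest_body(body):
    """Return a 'null-body' error result instead of tripping the assert."""
    if not body:
        return {"status": "null-body"}
    return {"status": "success", "file_meta": gen_file_metadata(body)}
```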
* add SKIP log line for skip-url-blocklist path (Bryan Newbold, 2020-01-17, 1 file, -0/+1)
* ingest: add URL blocklist feature (Bryan Newbold, 2020-01-17, 1 file, -4/+32)
  And, temporarily, block zenodo and figshare.
* clarify ingest result schema and semantics (Bryan Newbold, 2020-01-15, 1 file, -4/+11)
* pass through revisit_cdx (Bryan Newbold, 2020-01-15, 1 file, -0/+3)
* ingest: sketch out more of how 'existing' path would work (Bryan Newbold, 2020-01-14, 1 file, -8/+22)
* ingest: check existing GROBID; also push results to sink (Bryan Newbold, 2020-01-14, 1 file, -4/+22)
* filter out archive.org and web.archive.org (until implemented) (Bryan Newbold, 2020-01-14, 1 file, -1/+12)
* basic FTP ingest support; revisit record resolution (Bryan Newbold, 2020-01-14, 1 file, -1/+1)
  - supporting revisits means more wayback hits (fewer crawls) => faster
  - ... but this is only partial support. Will also need to work through sandcrawler db schema, etc. Current status should be safe to merge/use.
  - ftp support via treating an ftp hit as a 200
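  The last bullet ("ftp support via treating an ftp hit as a 200") suggests a small status-normalization step, sketched below. The function name and exact conditions are hypothetical; only the 200-for-FTP idea comes from the commit message.

```python
# Hypothetical sketch: FTP captures carry no HTTP status code, so a
# successful FTP hit is normalized to 200 for the rest of the pipeline.

def normalize_status(url, status_code):
    """Map a successful FTP fetch (no HTTP status) to 200; else pass through."""
    if url.startswith("ftp://") and status_code is None:
        return 200
    return status_code
```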
* better print() output (Bryan Newbold, 2020-01-10, 1 file, -1/+1)
* fix trivial typo (Bryan Newbold, 2020-01-10, 1 file, -1/+1)
* hack/workaround for protocols.io octet PDFs (Bryan Newbold, 2020-01-10, 1 file, -2/+4)
* limit length of error messages (Bryan Newbold, 2020-01-10, 1 file, -4/+4)
* more general ingest tweaks and affordances (Bryan Newbold, 2020-01-10, 1 file, -10/+24)
* improve ingest robustness (for legacy requests) (Bryan Newbold, 2020-01-10, 1 file, -6/+12)
* support forwarding url types other than pdf_url (Bryan Newbold, 2020-01-09, 1 file, -4/+5)
* refactor ingest to a loop, allowing multiple hops (Bryan Newbold, 2020-01-09, 1 file, -25/+48)
* lots of progress on wayback refactoring (Bryan Newbold, 2020-01-09, 1 file, -11/+15)
  - too much to list
  - canonical flags to control crawling
  - cdx_to_dict helper
* wrap up basic (locally testable) ingest refactor (Bryan Newbold, 2020-01-09, 1 file, -159/+196)
* remove SPNv1 code paths (Bryan Newbold, 2020-01-07, 1 file, -30/+24)
* refactor: use print(..., file=sys.stderr) (Bryan Newbold, 2019-12-18, 1 file, -4/+4)
  Should use logging soon, but this seems more idiomatic in the meanwhile.
* handle wayback fetch redirect loop in ingest code (Bryan Newbold, 2019-11-14, 1 file, -2/+5)
* handle WaybackError during ingest (Bryan Newbold, 2019-11-14, 1 file, -0/+4)
* start of hrmars.com ingest support (Bryan Newbold, 2019-11-14, 1 file, -2/+5)
* treat failure to get terminal capture as a SavePageNowError (Bryan Newbold, 2019-11-13, 1 file, -1/+1)
* handle wayback client return status correctly (Bryan Newbold, 2019-11-13, 1 file, -2/+2)
* allow way more errors in SPN path (Bryan Newbold, 2019-11-13, 1 file, -2/+11)
* fix lint errors (Bryan Newbold, 2019-11-13, 1 file, -1/+1)
* improve ingest worker remote failure behavior (Bryan Newbold, 2019-11-13, 1 file, -5/+12)
* rename FileIngestWorker (Bryan Newbold, 2019-11-13, 1 file, -5/+9)
* more progress on file ingest (Bryan Newbold, 2019-11-13, 1 file, -10/+37)
* much progress on file ingest path (Bryan Newbold, 2019-10-22, 1 file, -0/+150)