path: root/python
Commit log (message, author, date, files changed, lines added/removed):
* ingest: bulk workers don't hit SPNv2 (Bryan Newbold, 2020-02-13; 1 file, +2/-0)
* pdftrio fixes from testing (Bryan Newbold, 2020-02-13; 1 file, +9/-3)
* move pdf_trio results back under key in JSON/Kafka (Bryan Newbold, 2020-02-13; 2 files, +31/-7)
* pdftrio: small fixes from testing (Bryan Newbold, 2020-02-12; 1 file, +2/-2)
* pdftrio basic python code (Bryan Newbold, 2020-02-12; 7 files, +393/-1)
    This is basically just a copy/paste of GROBID code, only simpler!
* add ingestrequest_row2json.py (Bryan Newbold, 2020-02-05; 1 file, +48/-0)
* fix persist bug where ingest_request_source not saved (Bryan Newbold, 2020-02-05; 1 file, +1/-0)
* fix bug where ingest_request extra fields not persisted (Bryan Newbold, 2020-02-05; 1 file, +2/-1)
* handle alternative dt format in WARC headers (Bryan Newbold, 2020-02-05; 1 file, +4/-2)
    If there is a UTC timestamp with a trailing 'Z' indicating the
    timezone, that is valid but increases string length by one.
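The accepted formats differ only by that trailing 'Z'; a minimal sketch of such a parser (the helper name is hypothetical, not the actual sandcrawler code):

```python
from datetime import datetime, timezone

def parse_warc_datetime(dt: str) -> datetime:
    # Accept both "2020-02-05T12:34:56" and the one-character-longer
    # UTC form "2020-02-05T12:34:56Z"; both denote the same instant.
    if dt.endswith("Z"):
        dt = dt[:-1]
    return datetime.strptime(dt, "%Y-%m-%dT%H:%M:%S").replace(tzinfo=timezone.utc)
```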
* decrease SPNv2 polling timeout to 3 minutes (Bryan Newbold, 2020-02-05; 1 file, +2/-2)
* improvements to reliability from prod testing (Bryan Newbold, 2020-02-03; 2 files, +20/-7)
* hack-y backoff ingest attempt (Bryan Newbold, 2020-02-03; 2 files, +26/-3)
    The goal here is to have SPNv2 requests back off when we get
    back-pressure (usually caused by some sessions taking too long). Lack
    of proper back-pressure is making it hard to turn up parallelism.

    This is a hack because we still time out and drop the slow request. A
    better way is probably to have a background thread run while the
    KafkaPusher thread does polling, maybe with timeouts to detect slow
    processing (greater than 30 seconds?) and only pause/resume in that
    case. This would also make taking batches easier. Unlike the existing
    code, however, the parallelism needs to happen at the Pusher level to
    do the polling (Kafka) and "await" (for all worker threads to
    complete) correctly.
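The retry-with-backoff shape described above can be sketched as follows; this is an illustrative stand-in (`submit`, `push_with_backoff`, and the delay values are hypothetical), not the actual commit:

```python
import time

def backoff_delays(base: float = 1.0, cap: float = 60.0):
    """Yield exponentially growing sleep intervals, capped at `cap`."""
    delay = base
    while True:
        yield delay
        delay = min(delay * 2.0, cap)

def push_with_backoff(submit, request, max_tries: int = 5) -> bool:
    """Retry submit(request) with backoff; False when we give up.

    `submit` stands in for the SPNv2 request call: it returns True on
    success and False under back-pressure. Note the hack remains: after
    max_tries the slow request is still dropped.
    """
    delays = backoff_delays()
    for _ in range(max_tries):
        if submit(request):
            return True
        time.sleep(next(delays))
    return False
```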
* grobid petabox: fix fetch body/content (Bryan Newbold, 2020-02-03; 1 file, +1/-1)
* wayback: try to resolve HTTPException due to many HTTP headers (Bryan Newbold, 2020-02-02; 1 file, +9/-1)
    This is within GWB wayback code. Trying two things:
    - bump the default max headers from 100 to 1000 in the (global?)
      http.client module itself; I didn't think through whether we would
      expect this to actually work
    - catch the exception, record it, move on
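A sketch of both workarounds; `_MAXHEADERS` is a real module-level constant in CPython's `http.client` (default 100), while the fetch wrapper and its names are hypothetical:

```python
import http.client

# Workaround 1: http.client refuses to parse responses carrying more
# than _MAXHEADERS headers, raising HTTPException. Bumping the constant
# is process-global, which is why the commit hedges with "(global?)".
http.client._MAXHEADERS = 1000

def fetch_recording_header_errors(do_fetch, url):
    # Workaround 2: catch the exception, record it, move on.
    # `do_fetch` stands in for the wayback fetch call.
    try:
        return do_fetch(url), None
    except http.client.HTTPException as e:
        return None, str(e)
```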
* sandcrawler_worker: ingest worker distinct consumer groups (Bryan Newbold, 2020-01-29; 1 file, +3/-1)
    I'm in the process of resetting these consumer groups, so might as
    well take the opportunity to split by topic and use the new canonical
    naming format.
* grobid worker: catch PetaboxError also (Bryan Newbold, 2020-01-28; 1 file, +2/-2)
* worker kafka setting tweaks (Bryan Newbold, 2020-01-28; 1 file, +4/-2)
    These are all attempts to get kafka workers operating more smoothly.
* make grobid-extract worker batch size 1 (Bryan Newbold, 2020-01-28; 1 file, +1/-0)
    This is part of attempts to fix Kafka errors that look like they
    might be timeouts.
* workers: yes, poll is necessary (Bryan Newbold, 2020-01-28; 1 file, +1/-1)
* grobid worker: always set a key in response (Bryan Newbold, 2020-01-28; 1 file, +25/-4)
    We have key-based compaction enabled for the GROBID output topic.
    This means it is an error to publish to that topic without a key set.
    Hopefully this change will end these errors, which look like:
    KafkaError{code=INVALID_MSG,val=2,str="Broker: Invalid message"}
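A guard like the following would guarantee a key before producing; the helper name and the hash-the-payload fallback are illustrative assumptions, not the actual fix:

```python
import hashlib
import json

def ensure_msg_key(record: dict) -> dict:
    # Compacted topics reject keyless messages (the INVALID_MSG error
    # above), so derive a stable key from the payload when none is set.
    if not record.get("key"):
        payload = json.dumps(record, sort_keys=True).encode("utf-8")
        record["key"] = hashlib.sha1(payload).hexdigest()
    return record
```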
* fix kafka worker partition-specific error (Bryan Newbold, 2020-01-28; 1 file, +1/-1)
* fix WaybackError exception formatting (Bryan Newbold, 2020-01-28; 1 file, +1/-1)
* fix elif syntax error (Bryan Newbold, 2020-01-28; 1 file, +1/-1)
* block springer page-one domain (Bryan Newbold, 2020-01-28; 1 file, +3/-0)
* clarify petabox fetch behavior (Bryan Newbold, 2020-01-28; 1 file, +6/-3)
* re-enable figshare and zenodo crawling (Bryan Newbold, 2020-01-21; 1 file, +0/-8)
    For daily imports.
* persist grobid: actually, status_code is required (Bryan Newbold, 2020-01-21; 2 files, +10/-3)
    Instead of working around it when missing, force it to exist but skip
    it in the database insert section. Disk mode still needs to check if
    blank.
* ingest: check for null-body before file_meta (Bryan Newbold, 2020-01-21; 1 file, +3/-0)
    gen_file_metadata raises an assert error if body is None (or false-y
    in general).
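The guard amounts to converting a falsy body into an explicit ingest error before metadata generation; a minimal sketch (the function and the "null-body" status string are assumptions):

```python
def check_body(body):
    # gen_file_metadata() asserts on a falsy body, so return an explicit
    # error result here instead of letting the AssertionError fire.
    if not body:
        return {"status": "null-body"}
    return {"status": "ok", "size_bytes": len(body)}
```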
* wayback: replay redirects have X-Archive-Redirect-Reason (Bryan Newbold, 2020-01-21; 1 file, +4/-2)
* persist: work around GROBID timeouts with no status_code (Bryan Newbold, 2020-01-21; 2 files, +3/-3)
* grobid: fix error_msg typo; set status_code for timeouts (Bryan Newbold, 2020-01-21; 1 file, +2/-1)
* add 200 second timeout to GROBID requests (Bryan Newbold, 2020-01-17; 1 file, +15/-8)
* add SKIP log line for skip-url-blocklist path (Bryan Newbold, 2020-01-17; 1 file, +1/-0)
* ingest: add URL blocklist feature (Bryan Newbold, 2020-01-17; 2 files, +49/-4)
    And, temporarily, block zenodo and figshare.
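A domain blocklist check can be sketched like this; the domains come from the commit message, but the function name and matching rules are assumptions about the feature, not the actual code:

```python
from urllib.parse import urlparse

# Temporarily blocked per the commit; a real deployment would load these
# from config rather than hard-coding them.
DOMAIN_BLOCKLIST = ["zenodo.org", "figshare.com"]

def url_blocked(url: str) -> bool:
    """True if the URL's host is a blocklisted domain or a subdomain."""
    host = urlparse(url).netloc.lower().split(":")[0]
    return any(host == d or host.endswith("." + d) for d in DOMAIN_BLOCKLIST)
```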
* handle UnicodeDecodeError in the other GET instance (Bryan Newbold, 2020-01-15; 1 file, +2/-0)
* increase SPNv2 polling timeout to 4 minutes (Bryan Newbold, 2020-01-15; 1 file, +3/-1)
* make failed replay fetch an error, not assert error (Bryan Newbold, 2020-01-15; 1 file, +2/-1)
* improve sentry reporting with 'release' git hash (Bryan Newbold, 2020-01-15; 2 files, +5/-2)
* wayback replay: catch UnicodeDecodeError (Bryan Newbold, 2020-01-15; 1 file, +2/-0)
    In prod, ran into a redirect URL like:
    b'/web/20200116043630id_/https://mediarep.org/bitstream/handle/doc/1127/Barth\xe9l\xe9my_2015_Life_and_Technology.pdf;jsessionid=A9EFB2798846F5E14A8473BBFD6AB46C?sequence=1'
    which broke requests.
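The failure mode is non-UTF-8 bytes in a redirect Location; a hedged sketch of a tolerant decode (function name is an assumption, and Latin-1 is chosen here only because it maps every byte):

```python
def decode_location(raw: bytes) -> str:
    # Redirect Location bytes from replay are not always valid UTF-8
    # (e.g. the b'Barth\xe9l\xe9my' bytes above); fall back to Latin-1
    # instead of letting UnicodeDecodeError break the request.
    try:
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        return raw.decode("latin-1")
```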
* persist: fix dupe field copying (Bryan Newbold, 2020-01-15; 1 file, +8/-1)
    In testing, hit: AttributeError: 'str' object has no attribute 'get'
* persist worker: implement updated ingest result semantics (Bryan Newbold, 2020-01-15; 2 files, +17/-12)
* clarify ingest result schema and semantics (Bryan Newbold, 2020-01-15; 3 files, +32/-7)
* pass through revisit_cdx (Bryan Newbold, 2020-01-15; 2 files, +21/-5)
* fix revisit resolution (Bryan Newbold, 2020-01-15; 1 file, +12/-4)
    Returns the *original* CDX record, but keeps the terminal_url and
    terminal_sha1hex info.
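The described behavior, sketched under assumptions: `terminal_url` and `terminal_sha1hex` come from the commit message, but the function shape and other field names are hypothetical.

```python
def resolve_revisit(original_cdx: dict, terminal_url: str, terminal_sha1hex: str) -> dict:
    # Return the *original* CDX record (the capture that actually holds
    # the body), annotated with where the crawl actually terminated.
    resolved = dict(original_cdx)  # don't mutate the input record
    resolved["terminal_url"] = terminal_url
    resolved["terminal_sha1hex"] = terminal_sha1hex
    return resolved
```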
* add postgrest checks to test mocks (Bryan Newbold, 2020-01-14; 1 file, +9/-1)
* tests: don't use localhost as a responses mock host (Bryan Newbold, 2020-01-14; 2 files, +6/-6)
* bulk ingest file request topic support (Bryan Newbold, 2020-01-14; 1 file, +7/-1)
* ingest: sketch out more of how 'existing' path would work (Bryan Newbold, 2020-01-14; 1 file, +22/-8)
* ingest: check existing GROBID; also push results to sink (Bryan Newbold, 2020-01-14; 1 file, +22/-4)
* ingest persist skips 'existing' ingest results (Bryan Newbold, 2020-01-14; 1 file, +3/-0)