path: root/python
Commit message (Author, Date; Files changed, Lines -removed/+added)
* fix coverage command (Bryan Newbold, 2020-06-17; 2 files, -2/+4)
* update grobid2json with type annotations (Bryan Newbold, 2020-06-17; 1 file, -94/+110)
* remove unused common.py (Bryan Newbold, 2020-06-17; 2 files, -139/+0)
* better DeprecationWarning filters (Bryan Newbold, 2020-06-17; 1 file, -3/+4)
* update Makefile from fatcat-scholar tweaks/commands (Bryan Newbold, 2020-06-17; 1 file, -3/+21)
* WIP on pdf_tool.py (Bryan Newbold, 2020-06-17; 1 file, -0/+137)
* add new pdf workers/persisters (Bryan Newbold, 2020-06-17; 4 files, -2/+214)
* pdf: mypy and typo fixes (Bryan Newbold, 2020-06-17; 2 files, -15/+22)
* workers: refactor to pass key to process() (Bryan Newbold, 2020-06-17; 6 files, -20/+28)
* pipenv: correct poppler; update lockfile (Bryan Newbold, 2020-06-16; 2 files, -76/+255)
* pipenv: flake8, pytype, black (Bryan Newbold, 2020-06-16; 1 file, -0/+7)
* pipenv: pillow and poppler (for PDF extraction) (Bryan Newbold, 2020-06-16; 1 file, -0/+2)
* initial work on PDF extraction worker (Bryan Newbold, 2020-06-16; 2 files, -1/+158)

    This worker fetches full PDFs, then extracts thumbnails, raw text, and
    PDF metadata. Similar to the GROBID worker.
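    A minimal sketch of what that extraction step could look like, assuming
    the python-poppler bindings and Pillow added to the Pipfile; the
    function name and thumbnail size are illustrative, not the worker's
    actual interface:

        import poppler
        from PIL import Image

        def extract_pdf(blob: bytes) -> dict:
            # hypothetical helper, not the worker's real API
            pdf = poppler.load_from_data(blob)

            # raw text, concatenated across all pages
            full_text = "\n".join(
                pdf.create_page(i).text() for i in range(pdf.pages)
            )

            # render the first page, then shrink it with Pillow
            renderer = poppler.PageRenderer()
            rendered = renderer.render_page(pdf.create_page(0))
            img = Image.frombuffer(
                "RGBA",
                (rendered.width, rendered.height),
                rendered.data,
                "raw",
                "BGRA",
                0,
                1,
            )
            img.thumbnail((180, 300), Image.BICUBIC)

            return {
                "text": full_text,
                "page_count": pdf.pages,
                "thumbnail": img,
            }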
* pdf_thumbnail script: demonstrate PDF thumbnail generation (Bryan Newbold, 2020-06-16; 1 file, -0/+35)
* refactor worker fetch code into wrapper class (Bryan Newbold, 2020-06-16; 3 files, -141/+111)
* rename KafkaGrobidSink -> KafkaCompressSink (Bryan Newbold, 2020-06-16; 3 files, -3/+3)
* remove deprecated kafka_grobid.py worker (Bryan Newbold, 2020-05-26; 1 file, -331/+0)

    All use of pykafka was refactored to use the confluent library some time
    ago, and all kafka workers have been using the newer sandcrawler-style
    worker for some time.

* pipenv: remove old python3.5 cruft; add mypy (Bryan Newbold, 2020-05-26; 2 files, -185/+196)
* start a python Makefile (Bryan Newbold, 2020-05-19; 1 file, -0/+15)
* handle UnboundLocalError in HTML parsing (Bryan Newbold, 2020-05-19; 1 file, -1/+4)
* first iteration of oai2ingestrequest script (Bryan Newbold, 2020-05-05; 1 file, -0/+137)
* hotfix for html meta extract codepath (Bryan Newbold, 2020-05-03; 1 file, -1/+1)

    Didn't test last commit before pushing; bad Bryan!

* ingest: handle partial citation_pdf_url tag (Bryan Newbold, 2020-05-03; 1 file, -0/+3)

    E.g. https://www.cureus.com/articles/29935-a-nomogram-for-the-rapid-prediction-of-hematocrit-following-blood-loss-and-fluid-shifts-in-neonates-infants-and-adults
    has: <meta name="citation_pdf_url"/>
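    A small sketch of the guard this implies, assuming BeautifulSoup-style
    parsing; the function name is hypothetical:

        from typing import Optional

        from bs4 import BeautifulSoup

        def extract_citation_pdf_url(html_doc: str) -> Optional[str]:
            soup = BeautifulSoup(html_doc, "html.parser")
            meta = soup.find("meta", attrs={"name": "citation_pdf_url"})
            # a partial tag like <meta name="citation_pdf_url"/> has no
            # 'content' attribute at all, so check for it explicitly
            if not meta or not meta.get("content"):
                return None
            return meta["content"]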
* workers: add missing want() dataflow path (Bryan Newbold, 2020-04-30; 1 file, -0/+9)
* ingest: don't 'want' non-PDF ingest (Bryan Newbold, 2020-04-30; 1 file, -0/+5)
* timeouts: don't push through None error messages (Bryan Newbold, 2020-04-29; 1 file, -2/+2)
* timeout message implementation for GROBID and ingest workers (Bryan Newbold, 2020-04-27; 2 files, -0/+18)
* worker timeout wrapper, and use for kafka (Bryan Newbold, 2020-04-27; 1 file, -2/+40)
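    One plausible shape for such a wrapper, using SIGALRM (which only works
    in the main thread of a unix process, matching how these Kafka workers
    run); the helper names are illustrative:

        import signal

        def _alarm_handler(signum, frame):
            raise TimeoutError("timeout processing record")

        def process_with_timeout(worker, record, timeout: int = 300):
            # arm an alarm before calling the (possibly stalled) worker
            signal.signal(signal.SIGALRM, _alarm_handler)
            signal.alarm(timeout)
            try:
                return worker.process(record)
            finally:
                # always disarm, whether we succeeded, timed out, or errored
                signal.alarm(0)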
* fix KeyError in HTML PDF URL extraction (Bryan Newbold, 2020-04-17; 1 file, -1/+1)
* persist: only GROBID updates file_meta, not file-result (Bryan Newbold, 2020-04-16; 1 file, -1/+1)

    The hope here is to reduce deadlocks in production (on aitio). As
    context, we are only doing "updates" until the entire file_meta table is
    filled in with full metadata anyways; updates are wasteful of resources,
    and for most inserts we have seen the file before, so we should be doing
    "DO NOTHING" if the SHA1 is already in the table.
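    In PostgreSQL terms, the distinction is between the two ON CONFLICT
    behaviors; a sketch with simplified column names (the real file_meta
    schema has more fields):

        # GROBID is the authoritative source for file metadata, so its
        # persister may overwrite an existing row:
        GROBID_FILE_META_SQL = """
            INSERT INTO file_meta (sha1hex, mimetype, size_bytes)
            VALUES (%s, %s, %s)
            ON CONFLICT (sha1hex) DO UPDATE
            SET mimetype = EXCLUDED.mimetype,
                size_bytes = EXCLUDED.size_bytes;
        """

        # file-result persisters usually re-see known files, so skipping
        # the row avoids wasted writes (and the deadlocks described above):
        FILE_RESULT_FILE_META_SQL = """
            INSERT INTO file_meta (sha1hex, mimetype, size_bytes)
            VALUES (%s, %s, %s)
            ON CONFLICT (sha1hex) DO NOTHING;
        """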
* batch/multiprocess for ZipfilePusher (Bryan Newbold, 2020-04-16; 2 files, -5/+26)
* pipenv: update to python3.7 (Bryan Newbold, 2020-04-15; 2 files, -197/+202)
* COVID-19 chinese paper ingest (Bryan Newbold, 2020-04-15; 1 file, -0/+83)
* ingest: quick hack to capture CNKI outlinks (Bryan Newbold, 2020-04-13; 1 file, -2/+9)
* html: attempt at CNKI href extraction (Bryan Newbold, 2020-04-13; 1 file, -0/+11)
* unpaywall2ingestrequest: canonicalize URL (Bryan Newbold, 2020-04-07; 1 file, -1/+9)
* ia: set User-Agent for replay fetch from wayback (Bryan Newbold, 2020-03-29; 1 file, -0/+5)

    Did this for all the other "client" helpers, but forgot to for wayback
    replay. Was starting to get "445" errors from wayback.
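    The fix amounts to something like the following, assuming a
    requests.Session; the exact User-Agent string here is made up:

        import requests

        session = requests.Session()
        session.headers.update({
            # hypothetical UA string; the point is to identify the client
            "User-Agent": "Mozilla/5.0 sandcrawler.WaybackReplayClient",
        })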
* ingest: block another large domain (and DOI prefix) (Bryan Newbold, 2020-03-27; 1 file, -0/+2)
* ingest: better spn2 pending error code (Bryan Newbold, 2020-03-27; 1 file, -0/+2)
* ingest: eurosurveillance PDF parser (Bryan Newbold, 2020-03-25; 1 file, -0/+11)
* ia: more conservative use of clean_url() (Bryan Newbold, 2020-03-24; 1 file, -3/+5)

    Fixes AttributeError: 'NoneType' object has no attribute 'strip',
    seen in production on the lookup_resource code path.

* ingest: clean_url() in more places (Bryan Newbold, 2020-03-23; 3 files, -1/+6)

    Some 'cdx-error' results were due to URLs with ':' after the hostname or
    trailing newline ("\n") characters in the URL. This attempts to work
    around this category of error.
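    A sketch of a clean_url() along these lines, assuming the urlcanon
    library (callers still guard against None before calling it, per the
    "more conservative use" commit above):

        import urlcanon

        def clean_url(s: str) -> str:
            # trailing "\n" and other whitespace is one failure mode
            s = s.strip()
            parsed = urlcanon.parse_url(s)
            # drop a stray ':' after the hostname when no port follows it
            if not parsed.port and parsed.colon_before_port:
                parsed.colon_before_port = b""
            return str(urlcanon.whatwg(parsed))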
* skip-db option also for worker (Bryan Newbold, 2020-03-19; 1 file, -0/+4)
* persist grobid: add option to skip S3 upload (Bryan Newbold, 2020-03-19; 2 files, -7/+14)

    Motivation for this is that the current S3 target (minio) is overloaded,
    with too many files on a single partition (80 million+). Going to look
    into seaweedfs and other options, but for now stopping minio persist.
    Data is all stored in kafka anyways.

* ingest: log every URL (from ia code side) (Bryan Newbold, 2020-03-18; 1 file, -0/+1)
* implement (unused) force_get flag for SPN2 (Bryan Newbold, 2020-03-18; 2 files, -4/+19)

    I hoped this feature would make it possible to crawl journals.lww.com
    PDFs, because the token URLs work with `wget`, but it still doesn't seem
    to work. Maybe because of user agent? Anyways, this feature might be
    useful for crawling efficiency, so adding to master.
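    A hedged sketch of how the flag might be passed through to the SPN2
    API; endpoint and parameter handling simplified, authentication omitted:

        import requests

        SPN2_ENDPOINT = "https://web.archive.org/save"

        def spn2_save(session: requests.Session, url: str,
                      force_get: bool = False) -> dict:
            data = {"url": url}
            if force_get:
                # ask SPN2 for a plain GET capture instead of the
                # headless-browser capture
                data["force_get"] = 1
            resp = session.post(SPN2_ENDPOINT, data=data)
            resp.raise_for_status()
            return resp.json()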
* work around local redirect (resource.location) (Bryan Newbold, 2020-03-17; 1 file, -1/+6)

    Some redirects are host-local. This patch crudely detects this
    (full-path redirects starting with "/" only) and appends the redirect
    location to the host of the original URL.
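    The crude detection described might look like this (illustrative
    function name):

        from urllib.parse import urlparse

        def resolve_local_redirect(original_url: str, location: str) -> str:
            # only full-path redirects starting with "/" are handled
            if location.startswith("/"):
                parsed = urlparse(original_url)
                return f"{parsed.scheme}://{parsed.netloc}{location}"
            return location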
* Merge branch 'martin-abstract-class-process' into 'master' (bnewbold, 2020-03-12; 1 file, -0/+6)

    workers: add explicit process to base class

    See merge request webgroup/sandcrawler!25

    * workers: add explicit process to base class (Martin Czygan, 2020-03-12; 1 file, -0/+6)

        As per
        https://docs.python.org/3/library/exceptions.html#NotImplementedError

        > In user defined base classes, abstract methods should raise this
        > exception when they require derived classes to override the method
        > [...].
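        A minimal sketch of the pattern (class and method names
        simplified):

            class SandcrawlerWorker:
                def process(self, task):
                    # derived worker classes must override this
                    raise NotImplementedError("implemented in derived classes")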
* pipenv: work around zipp issue (Bryan Newbold, 2020-03-10; 2 files, -4/+16)