| | Commit message | Author | Date | Files | Lines |
|---|---|---|---|---|---|
* | tweak kafka topic names and seaweedfs layout | Bryan Newbold | 2020-06-17 | 2 | -10/+12 |
* | make process_pdf() more robust to parse errors | Bryan Newbold | 2020-06-17 | 1 | -5/+29 |
* | note about text layout with pdf extraction | Bryan Newbold | 2020-06-17 | 1 | -0/+8 |
* | lint fixes | Bryan Newbold | 2020-06-17 | 4 | -7/+9 |
* | rename pdf tools to pdfextract | Bryan Newbold | 2020-06-17 | 3 | -0/+0 |
* | coverage-html makefile target | Bryan Newbold | 2020-06-17 | 1 | -0/+3 |
* | ignore PIL deprecation warnings | Bryan Newbold | 2020-06-17 | 1 | -0/+1 |
* | partial test coverage of pdf extract worker | Bryan Newbold | 2020-06-17 | 2 | -6/+70 |
* | fix coverage command | Bryan Newbold | 2020-06-17 | 2 | -2/+4 |
* | update grobid2json with type annotations | Bryan Newbold | 2020-06-17 | 1 | -94/+110 |
* | remove unused common.py | Bryan Newbold | 2020-06-17 | 2 | -139/+0 |
* | better DeprecationWarning filters | Bryan Newbold | 2020-06-17 | 1 | -3/+4 |
* | update Makefile from fatcat-scholar tweaks/commands | Bryan Newbold | 2020-06-17 | 1 | -3/+21 |
* | WIP on pdf_tool.py | Bryan Newbold | 2020-06-17 | 1 | -0/+137 |
* | add new pdf workers/persisters | Bryan Newbold | 2020-06-17 | 4 | -2/+214 |
* | pdf: mypy and typo fixes | Bryan Newbold | 2020-06-17 | 2 | -15/+22 |
* | workers: refactor to pass key to process() | Bryan Newbold | 2020-06-17 | 6 | -20/+28 |
* | pipenv: correct poppler; update lockfile | Bryan Newbold | 2020-06-16 | 2 | -76/+255 |
* | pipenv: flake8, pytype, black | Bryan Newbold | 2020-06-16 | 1 | -0/+7 |
* | pipenv: pillow and poppler (for PDF extraction) | Bryan Newbold | 2020-06-16 | 1 | -0/+2 |
* | initial work on PDF extraction worker<br>This worker fetches full PDFs, then extracts thumbnails, raw text, and PDF metadata. Similar to GROBID worker. (See the PDF-extraction sketch after the table.) | Bryan Newbold | 2020-06-16 | 2 | -1/+158 |
* | pdf_thumbnail script: demonstrate PDF thumbnail generation | Bryan Newbold | 2020-06-16 | 1 | -0/+35 |
* | refactor worker fetch code into wrapper class | Bryan Newbold | 2020-06-16 | 3 | -141/+111 |
* | rename KafkaGrobidSink -> KafkaCompressSink | Bryan Newbold | 2020-06-16 | 3 | -3/+3 |
* | remove deprecated kafka_grobid.py worker<br>All use of pykafka was refactored to use the confluent library some time ago, and all kafka workers have been using the newer sandcrawler-style worker for some time. | Bryan Newbold | 2020-05-26 | 1 | -331/+0 |
* | pipenv: remove old python3.5 cruft; add mypy | Bryan Newbold | 2020-05-26 | 2 | -185/+196 |
* | start a python Makefile | Bryan Newbold | 2020-05-19 | 1 | -0/+15 |
* | handle UnboundLocalError in HTML parsing | Bryan Newbold | 2020-05-19 | 1 | -1/+4 |
* | first iteration of oai2ingestrequest script | Bryan Newbold | 2020-05-05 | 1 | -0/+137 |
* | hotfix for html meta extract codepath<br>Didn't test last commit before pushing; bad Bryan! | Bryan Newbold | 2020-05-03 | 1 | -1/+1 |
* | ingest: handle partial citation_pdf_url tag<br>Eg: https://www.cureus.com/articles/29935-a-nomogram-for-the-rapid-prediction-of-hematocrit-following-blood-loss-and-fluid-shifts-in-neonates-infants-and-adults has: `<meta name="citation_pdf_url"/>` (See the citation_pdf_url sketch after the table.) | Bryan Newbold | 2020-05-03 | 1 | -0/+3 |
* | workers: add missing want() dataflow path | Bryan Newbold | 2020-04-30 | 1 | -0/+9 |
* | ingest: don't 'want' non-PDF ingest | Bryan Newbold | 2020-04-30 | 1 | -0/+5 |
* | timeouts: don't push through None error messages | Bryan Newbold | 2020-04-29 | 1 | -2/+2 |
* | timeout message implementation for GROBID and ingest workers | Bryan Newbold | 2020-04-27 | 2 | -0/+18 |
* | worker timeout wrapper, and use for kafka | Bryan Newbold | 2020-04-27 | 1 | -2/+40 |
* | fix KeyError in HTML PDF URL extraction | Bryan Newbold | 2020-04-17 | 1 | -1/+1 |
* | persist: only GROBID updates file_meta, not file-result<br>The hope here is to reduce deadlocks in production (on aitio). As context, we are only doing "updates" until the entire file_meta table is filled in with full metadata anyway; updates are wasteful of resources, and for most inserts we have already seen the file, so we should be doing "DO NOTHING" if the SHA1 is already in the table. (See the insert-or-ignore sketch after the table.) | Bryan Newbold | 2020-04-16 | 1 | -1/+1 |
* | batch/multiprocess for ZipfilePusher | Bryan Newbold | 2020-04-16 | 2 | -5/+26 |
* | pipenv: update to python3.7 | Bryan Newbold | 2020-04-15 | 2 | -197/+202 |
* | COVID-19 chinese paper ingest | Bryan Newbold | 2020-04-15 | 1 | -0/+83 |
* | ingest: quick hack to capture CNKI outlinks | Bryan Newbold | 2020-04-13 | 1 | -2/+9 |
* | html: attempt at CNKI href extraction | Bryan Newbold | 2020-04-13 | 1 | -0/+11 |
* | unpaywall2ingestrequest: canonicalize URL | Bryan Newbold | 2020-04-07 | 1 | -1/+9 |
* | ia: set User-Agent for replay fetch from wayback<br>Did this for all the other "client" helpers, but forgot the wayback replay path. Was starting to get "445" errors from wayback. (See the User-Agent sketch after the table.) | Bryan Newbold | 2020-03-29 | 1 | -0/+5 |
* | ingest: block another large domain (and DOI prefix) | Bryan Newbold | 2020-03-27 | 1 | -0/+2 |
* | ingest: better spn2 pending error code | Bryan Newbold | 2020-03-27 | 1 | -0/+2 |
* | ingest: eurosurveillance PDF parser | Bryan Newbold | 2020-03-25 | 1 | -0/+11 |
* | ia: more conservative use of clean_url()<br>Fixes `AttributeError: 'NoneType' object has no attribute 'strip'`, seen in production on the lookup_resource code path. | Bryan Newbold | 2020-03-24 | 1 | -3/+5 |
* | ingest: clean_url() in more places<br>Some 'cdx-error' results were due to URLs with ':' after the hostname or trailing newline ("\n") characters in the URL. This attempts to work around this category of error. (See the clean_url sketch after the table.) | Bryan Newbold | 2020-03-23 | 3 | -1/+6 |
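
PDF-extraction sketch (for "initial work on PDF extraction worker"): the worker fetches a full PDF and derives a thumbnail, raw text, and document metadata from it. This is an illustrative sketch only, using PyMuPDF as a stand-in; the repository's worker is built on the poppler and pillow packages added in the pipenv commits, and its actual field names differ.

```python
# Illustrative only: thumbnail + raw text + metadata from an in-memory PDF,
# using PyMuPDF rather than the poppler/pillow stack the repo actually uses.
import fitz  # PyMuPDF

def extract_pdf(blob: bytes) -> dict:
    doc = fitz.open(stream=blob, filetype="pdf")
    # Render a small first-page thumbnail (scaled to ~20%) as PNG bytes
    thumbnail_png = doc[0].get_pixmap(matrix=fitz.Matrix(0.2, 0.2)).tobytes("png")
    # Concatenate raw text from every page; text layout is a known caveat
    raw_text = "\n".join(page.get_text() for page in doc)
    return {
        "page_count": doc.page_count,
        "pdf_metadata": doc.metadata,   # title, author, creation date, etc.
        "thumbnail_png": thumbnail_png,
        "raw_text": raw_text,
    }
```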
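
citation_pdf_url sketch (for "ingest: handle partial citation_pdf_url tag"): the Cureus page carries the meta tag but no URL inside it, so the extractor has to treat an empty tag as absent. A minimal, hypothetical guard; the function name and parser choice are illustrative, not the repo's HTML-extraction code.

```python
# Hypothetical guard for a <meta name="citation_pdf_url"/> tag with no
# content attribute; not the repository's actual extraction code.
from typing import Optional
from bs4 import BeautifulSoup

def find_citation_pdf_url(html: str) -> Optional[str]:
    soup = BeautifulSoup(html, "html.parser")
    meta = soup.find("meta", attrs={"name": "citation_pdf_url"})
    # A "partial" tag is present but carries no URL; treat it as missing
    if not meta or not meta.get("content"):
        return None
    return meta["content"]
```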
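
Insert-or-ignore sketch (for "persist: only GROBID updates file_meta, not file-result"): the reasoning in that commit amounts to preferring `INSERT ... ON CONFLICT DO NOTHING` keyed on the SHA-1 over repeated updates. The table shape and column list below are assumptions for illustration, not the persist worker's actual SQL.

```python
# Illustrative insert-or-ignore for a file_meta-style table keyed on sha1hex;
# the real persist worker's SQL, columns, and batching differ.
FILE_META_INSERT = """
    INSERT INTO file_meta (sha1hex, sha256hex, md5hex, size_bytes, mimetype)
    VALUES (%s, %s, %s, %s, %s)
    ON CONFLICT (sha1hex) DO NOTHING
"""

def persist_file_meta(conn, rows):
    # conn: an open psycopg2 connection
    # rows: iterable of (sha1hex, sha256hex, md5hex, size_bytes, mimetype)
    with conn.cursor() as cur:
        cur.executemany(FILE_META_INSERT, rows)
    conn.commit()
```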
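
User-Agent sketch (for "ia: set User-Agent for replay fetch from wayback"): the fix is simply to send an explicit User-Agent header on wayback replay requests, as the other client helpers already do. The header value and URL here are placeholders, not the values in ia.py.

```python
# Illustrative: explicit User-Agent on a wayback replay fetch. The header
# string and URL are placeholders, not what ia.py actually sends.
import requests

resp = requests.get(
    "https://web.archive.org/web/20200101000000id_/https://example.com/paper.pdf",
    headers={"User-Agent": "sandcrawler (+mailto:webmaster@example.org)"},
    timeout=60,
)
resp.raise_for_status()
```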
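
clean_url sketch (for "ingest: clean_url() in more places"): the problem URLs have a stray ':' after the hostname or a trailing newline, both of which break CDX lookups. A rough stdlib-only sketch of that normalization; the repository's actual clean_url() helper may be implemented differently.

```python
# Rough sketch of the URL cleanup described above: strip whitespace/newlines
# and drop an empty port marker like "https://example.com:/path". The repo's
# actual clean_url() helper may differ.
def clean_url(url: str) -> str:
    url = url.strip()
    scheme, sep, rest = url.partition("://")
    if sep:
        host, slash, path = rest.partition("/")
        host = host.rstrip(":")  # "example.com:" -> "example.com"
        url = scheme + sep + host + slash + path
    return url

assert clean_url("https://example.com:/paper.pdf\n") == "https://example.com/paper.pdf"
```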