  | Commit message | Author | Age | Files | Lines |
---|---|---|---|---|---|
* | update persist worker invocation to use batches | Bryan Newbold | 2020-01-02 | 1 | -15/+55 |
* | set mimetype when PUT to minio | Bryan Newbold | 2020-01-02 | 1 | -0/+4 |
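For context, a minimal sketch of what setting an explicit mimetype on upload looks like with the minio Python client; the endpoint, credentials, and bucket/object names below are hypothetical:

```python
import io

from minio import Minio

# Endpoint, credentials, and bucket/object names are hypothetical.
client = Minio("localhost:9000", access_key="minioadmin",
               secret_key="minioadmin", secure=False)

blob = b'<TEI xmlns="http://www.tei-c.org/ns/1.0">...</TEI>'
client.put_object(
    "sandcrawler",                   # bucket (hypothetical)
    "grobid/example.tei.xml",        # object path (hypothetical)
    io.BytesIO(blob),
    length=len(blob),
    content_type="application/xml",  # default would be application/octet-stream
)
```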
* | fix DB import counting | Bryan Newbold | 2020-01-02 | 1 | -4/+5 |
* | include an example environment file | Bryan Newbold | 2020-01-02 | 1 | -0/+2 |
* | teixml2json test update for skipping null JSON keys | Bryan Newbold | 2020-01-02 | 1 | -10/+1 |
* | fix small errors found by pylint | Bryan Newbold | 2020-01-02 | 2 | -1/+2 |
* | fix sandcrawler persist workers | Bryan Newbold | 2020-01-02 | 2 | -8/+37 |
* | filter ingest results to not have key conflicts within batch | Bryan Newbold | 2020-01-02 | 1 | -1/+16 |
  This handles a corner case with ON CONFLICT ... DO UPDATE where you can't do multiple such updates in the same batch transaction.
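Postgres rejects a single `INSERT ... ON CONFLICT DO UPDATE` statement that touches the same row twice ("ON CONFLICT DO UPDATE command cannot affect row a second time"). A minimal sketch of last-write-wins deduplication before such a batch insert; the key field name is hypothetical:

```python
def dedupe_batch(batch, key="sha1hex"):
    """Keep only the last record per key, so a single INSERT ... ON CONFLICT
    DO UPDATE statement never touches the same row twice."""
    by_key = {}
    for record in batch:
        by_key[record[key]] = record  # later records win
    return list(by_key.values())
```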
* | db: fancy insert/update separation using postgres xmax | Bryan Newbold | 2020-01-02 | 2 | -24/+45 |
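The trick: Postgres's `xmax` system column is zero for a freshly inserted row and nonzero for a row rewritten by `DO UPDATE`, so putting it in `RETURNING` distinguishes the two cases. A sketch via psycopg2; the table, columns, and DSN are illustrative, not the repo's actual schema:

```python
import psycopg2

SQL = """
    INSERT INTO grobid (sha1hex, status_code, metadata)
    VALUES (%s, %s, %s)
    ON CONFLICT (sha1hex) DO UPDATE
    SET status_code = EXCLUDED.status_code,
        metadata = EXCLUDED.metadata
    RETURNING (xmax = 0) AS inserted
"""

conn = psycopg2.connect("dbname=sandcrawler")  # hypothetical DSN
with conn:
    with conn.cursor() as cur:
        cur.execute(SQL, ("deadbeef" * 5, 200, "{}"))
        (was_insert,) = cur.fetchone()
        print("inserted" if was_insert else "updated")
```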
* | add PersistGrobidDiskWorker | Bryan Newbold | 2020-01-02 | 2 | -0/+60 |
  To help with making dumps directly from Kafka (e.g., for partner delivery).
* | flush out minio helper, add to grobid persist | Bryan Newbold | 2020-01-02 | 3 | -24/+91 |
* | update minio README | Bryan Newbold | 2020-01-02 | 1 | -10/+42 |
* | implement counts properly for persist workers | Bryan Newbold | 2020-01-02 | 1 | -15/+19 |
* | improve DB helpers | Bryan Newbold | 2020-01-02 | 1 | -26/+81 |
  - return insert/update row counts
  - implement ON CONFLICT ... DO UPDATE on some tables
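Combining the `RETURNING (xmax = 0)` flag above with psycopg2's `execute_values(..., fetch=True)` (psycopg2 2.8+) yields per-batch insert/update counts. A sketch; the helper and table names are illustrative:

```python
from psycopg2.extras import execute_values

def upsert_grobid_batch(cur, batch):
    """Batch upsert returning (inserted, updated) counts; names illustrative."""
    sql = """
        INSERT INTO grobid (sha1hex, status_code, metadata)
        VALUES %s
        ON CONFLICT (sha1hex) DO UPDATE
        SET status_code = EXCLUDED.status_code,
            metadata = EXCLUDED.metadata
        RETURNING (xmax = 0) AS inserted
    """
    rows = [(d["sha1hex"], d["status_code"], d["metadata"]) for d in batch]
    # fetch=True collects the RETURNING rows
    results = execute_values(cur, sql, rows, fetch=True)
    inserted = sum(1 for (flag,) in results if flag)
    return (inserted, len(results) - inserted)
```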
* | be more parsimonious with GROBID metadata | Bryan Newbold | 2020-01-02 | 2 | -3/+20 |
  Because these are getting persisted in the database (as well as Kafka), don't write out empty keys.
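A minimal sketch of the idea, stripping empty values before a record is serialized; the helper name is hypothetical:

```python
def trim_empty_keys(metadata):
    """Drop keys whose values are None or empty, so the JSON persisted to
    the database (and published to Kafka) stays compact."""
    return {k: v for k, v in metadata.items() if v not in (None, "", [], {})}
```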
* | start work on DB connector and minio client | Bryan Newbold | 2020-01-02 | 2 | -0/+200 |
* | have JsonLinePusher continue on JSON decode errors (but count) | Bryan Newbold | 2020-01-02 | 1 | -1/+5 |
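A sketch of that behavior, assuming a pusher that reads newline-delimited JSON and tallies counters; the names are illustrative:

```python
import json
from collections import Counter

def push_json_lines(fh, worker):
    """Feed newline-delimited JSON to a worker; skip, but count, bad lines."""
    counts = Counter()
    for line in fh:
        if not line.strip():
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            counts["error-json-decode"] += 1
            continue
        worker.push_record(record)
        counts["pushed"] += 1
    return counts
```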
* | start work on persist workers and tool | Bryan Newbold | 2020-01-02 | 3 | -5/+336 |
* | yet more tweaks to ingest proposal | Bryan Newbold | 2020-01-02 | 1 | -3/+2 |
* | SQL docs update for diesel change | Bryan Newbold | 2020-01-02 | 2 | -0/+48 |
* | move SQL schema to diesel migration pattern | Bryan Newbold | 2020-01-02 | 5 | -70/+157 |
* | hadoop job log rename and update | Bryan Newbold | 2019-12-27 | 1 | -0/+25 |
* | update job log with pig runs | Bryan Newbold | 2019-12-26 | 1 | -0/+10 |
* | update TODO | Bryan Newbold | 2019-12-26 | 1 | -1/+7 |
* | kafka topics: compress api-datacite | Bryan Newbold | 2019-12-24 | 1 | -1/+1 |
* | basic arabesque2ingestrequest script | Bryan Newbold | 2019-12-24 | 1 | -0/+69 |
* | commit grobid_tool transform mode | Bryan Newbold | 2019-12-22 | 1 | -0/+27 |
  Had some stale code on aitio with this change that I forgot to commit. Oops!
* | pig: first rev of join-cdx-sha1 script | Bryan Newbold | 2019-12-22 | 3 | -0/+91 |
* | pig: move count_lines helper to pighelper.py | Bryan Newbold | 2019-12-22 | 3 | -7/+6 |
* | refactor: use print(..., file=sys.stderr) | Bryan Newbold | 2019-12-18 | 5 | -32/+34 |
  Should use logging soon, but this seems more idiomatic in the meantime.
* | refactor: sort keys in JSON output | Bryan Newbold | 2019-12-18 | 4 | -6/+7 |
  This makes output a lot more readable when debugging by tailing Kafka topics.
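The change amounts to passing `sort_keys=True` wherever results are serialized; field names and values here are illustrative:

```python
import json

record = {"status": "success", "key": "sha1:...", "grobid_version": "0.5.5"}
# Deterministic field order means `kafkacat ... | tail` output lines up
# visually from message to message.
print(json.dumps(record, sort_keys=True))
```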
* | refactor: improve argparse usage | Bryan Newbold | 2019-12-18 | 5 | -13/+27 |
  Use ArgumentDefaultsHelpFormatter and add help messages to all sub-commands.
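A sketch of the pattern; the sub-command and flag are illustrative. Note that the formatter class has to be passed to each sub-parser as well for defaults to render in its own help output:

```python
import argparse

parser = argparse.ArgumentParser(
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
subparsers = parser.add_subparsers()

# `help=` appears in the top-level -h listing of sub-commands
sub = subparsers.add_parser(
    "transform",
    help="convert GROBID TEI-XML to JSON",
    formatter_class=argparse.ArgumentDefaultsHelpFormatter)
sub.add_argument("--batch-size", type=int, default=100,
                 help="records per batch")  # default shown automatically

args = parser.parse_args()
```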
* | update ingest proposal source/link naming | Bryan Newbold | 2019-12-13 | 2 | -17/+27 |
* | sql schema change proposals | Bryan Newbold | 2019-12-11 | 1 | -0/+40 |
* | pdftotext proposal | Bryan Newbold | 2019-12-11 | 1 | -0/+123 |
* | add some GROBID metadata schema docs to SQL schema | Bryan Newbold | 2019-12-11 | 1 | -0/+11 |
* | update ingest proposal | Bryan Newbold | 2019-12-11 | 1 | -11/+145 |
* | fixes for large GROBID result skip | Bryan Newbold | 2019-12-02 | 1 | -2/+2 |
* | count empty blobs as 'failed' instead of crashing | Bryan Newbold | 2019-12-01 | 1 | -1/+2 |
  Might be better to record an artificial Kafka response instead?
* | cleanup unused import | Bryan Newbold | 2019-12-01 | 1 | -1/+0 |
* | filter out very large GROBID XML bodies | Bryan Newbold | 2019-12-01 | 1 | -0/+6 |
  This is to prevent Kafka MSG_SIZE_TOO_LARGE publish errors. We should probably bump this limit in the future. Open problems:
  - hand-coding this size number isn't good, and it needs to be updated in two places
  - we shouldn't filter for non-Kafka sinks
  - a corner case may remain where the JSON-encoded XML is larger than the raw XML character string, due to encoding (e.g., of unicode characters)
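A sketch of such a guard, reflecting the caveats above; the constant, threshold, and field names are hypothetical:

```python
# Hypothetical threshold; it must stay under the brokers' message.max.bytes.
MAX_BODY_SIZE_BYTES = 10 * 1024 * 1024

def guard_xml_size(result):
    """Blank oversized TEI-XML bodies so a Kafka publish can't fail with
    MSG_SIZE_TOO_LARGE. Per the caveat above, this checks the raw XML string,
    not the final JSON encoding, which can be somewhat larger."""
    tei_xml = result.get("tei_xml")
    if tei_xml and len(tei_xml) > MAX_BODY_SIZE_BYTES:
        result["tei_xml"] = None
        result["status"] = "error"
        result["error_msg"] = "response XML too large: {} bytes".format(len(tei_xml))
    return result
```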
* | updated re-GROBID job log entry | Bryan Newbold | 2019-11-15 | 1 | -0/+31 |
* | CI: make some jobs manual | Bryan Newbold | 2019-11-15 | 2 | -6/+12 |
  The Scalding test is broken :( But we aren't even using that code much these days.
* | handle wayback fetch redirect loop in ingest code | Bryan Newbold | 2019-11-14 | 1 | -2/+5 |
* | bump kafka max poll interval for consumers | Bryan Newbold | 2019-11-14 | 1 | -2/+2 |
  The ingest worker keeps timing out at just over 5 minutes, so bump the limit a bit.
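With confluent-kafka (librdkafka), the relevant consumer property is `max.poll.interval.ms`; the values and names below are illustrative:

```python
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",  # hypothetical
    "group.id": "ingest-file-workers",      # hypothetical
    # librdkafka default is 300000 (5 minutes); ingest can block on slow
    # fetches for longer, so give workers headroom before the broker
    # evicts them from the consumer group
    "max.poll.interval.ms": 360000,
})
```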
* | handle WaybackError during ingest | Bryan Newbold | 2019-11-14 | 1 | -0/+4 |
* | handle SPNv1 redirect loop | Bryan Newbold | 2019-11-14 | 1 | -0/+2 |
* | handle SPNv2 polling timeout | Bryan Newbold | 2019-11-14 | 1 | -6/+10 |
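A sketch of deadline-based polling against the SPNv2 status endpoint; the interval and timeout values are illustrative, and the exact status fields are assumptions:

```python
import time

import requests

POLL_INTERVAL = 3    # seconds between status checks; illustrative
POLL_TIMEOUT = 240   # overall deadline in seconds; illustrative

def wait_for_spn2(session, job_id):
    """Poll an SPNv2 capture job until success, error, or the deadline."""
    deadline = time.time() + POLL_TIMEOUT
    while time.time() < deadline:
        resp = session.get("https://web.archive.org/save/status/" + job_id)
        resp.raise_for_status()
        status = resp.json()
        if status.get("status") == "success":
            return status
        if status.get("status") == "error":
            raise Exception("SPNv2 capture failed: {}".format(status.get("exception")))
        time.sleep(POLL_INTERVAL)
    raise Exception("SPNv2 polling timed out after {} seconds".format(POLL_TIMEOUT))
```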
* | update ingest-file batch size to 1 | Bryan Newbold | 2019-11-14 | 2 | -4/+4 |
  Was defaulting to 100, which I think was causing lots of consumer group timeouts and, in turn, UNKNOWN_MEMBER_ID errors. Will probably switch back to batches of 10 or so, but with multi-processing or some other form of concurrent dispatch/processing.
* | start of hrmars.com ingest support | Bryan Newbold | 2019-11-14 | 2 | -2/+7 |