Commit log (message, author, age, files changed, lines changed)
...
* update persist worker invocation to use batches (Bryan Newbold, 2020-01-02; 1 file changed, -15/+55)
* set mimetype when PUT to minio (Bryan Newbold, 2020-01-02; 1 file changed, -0/+4)
* fix DB import counting (Bryan Newbold, 2020-01-02; 1 file changed, -4/+5)
* include an example environment file (Bryan Newbold, 2020-01-02; 1 file changed, -0/+2)
* teixml2json test update for skipping null JSON keys (Bryan Newbold, 2020-01-02; 1 file changed, -10/+1)
* fix small errors found by pylint (Bryan Newbold, 2020-01-02; 2 files changed, -1/+2)
* fix sandcrawler persist workers (Bryan Newbold, 2020-01-02; 2 files changed, -8/+37)
* filter ingest results to not have key conflicts within batch (Bryan Newbold, 2020-01-02; 1 file changed, -1/+16)
  This handles a corner case with ON CONFLICT ... DO UPDATE, where you can't do multiple such updates in the same batch transaction.
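The batch-level filtering described above can be sketched as follows. This is a minimal illustration, not the repo's actual code; `dedupe_batch` and the `sha1hex` key are assumed names. Postgres raises "ON CONFLICT DO UPDATE command cannot affect row a second time" if one statement touches the same key twice, so only the last row per key is kept:

```python
def dedupe_batch(batch, key="sha1hex"):
    """Keep only the last row for each key, so a single batched
    INSERT ... ON CONFLICT DO UPDATE never updates one row twice."""
    by_key = {}
    for row in batch:
        by_key[row[key]] = row
    return list(by_key.values())

rows = [
    {"sha1hex": "aaa", "status": "success"},
    {"sha1hex": "bbb", "status": "success"},
    {"sha1hex": "aaa", "status": "no-capture"},
]
deduped = dedupe_batch(rows)
```

Later rows win, which matches the usual "most recent result is authoritative" semantics of a persist worker.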
* db: fancy insert/update separation using postgres xmax (Bryan Newbold, 2020-01-02; 2 files changed, -24/+45)
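The xmax trick relies on the fact that Postgres leaves the hidden `xmax` system column at 0 for freshly inserted rows, while rows rewritten by `ON CONFLICT ... DO UPDATE` get a non-zero `xmax`. A hedged sketch (the table and column names are hypothetical, and the `VALUES %s` placeholder assumes psycopg2's `execute_values` style):

```python
# One statement reports both inserted and updated rows via RETURNING.
SQL = """
    INSERT INTO grobid (sha1hex, status)
    VALUES %s
    ON CONFLICT (sha1hex) DO UPDATE SET status = EXCLUDED.status
    RETURNING (xmax = 0) AS inserted;
"""

def tally(returned_rows):
    """Split the RETURNING output into (insert_count, update_count)."""
    inserts = sum(1 for (inserted,) in returned_rows if inserted)
    return inserts, len(returned_rows) - inserts

# Simulated RETURNING output: two fresh inserts, one conflict-update.
counts = tally([(True,), (False,), (True,)])
```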
* add PersistGrobidDiskWorker (Bryan Newbold, 2020-01-02; 2 files changed, -0/+60)
  To help with making dumps directly from Kafka (eg, for partner delivery).
* flesh out minio helper, add to grobid persist (Bryan Newbold, 2020-01-02; 3 files changed, -24/+91)
* update minio README (Bryan Newbold, 2020-01-02; 1 file changed, -10/+42)
* implement counts properly for persist workers (Bryan Newbold, 2020-01-02; 1 file changed, -15/+19)
* improve DB helpers (Bryan Newbold, 2020-01-02; 1 file changed, -26/+81)
  - return insert/update row counts
  - implement ON CONFLICT ... DO UPDATE on some tables
* be more parsimonious with GROBID metadata (Bryan Newbold, 2020-01-02; 2 files changed, -3/+20)
  Because these are getting persisted in the database (as well as Kafka), don't write out empty keys.
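The "don't write out empty keys" idea amounts to filtering a metadata dict before serialization. A minimal sketch, with `trim_empty` as an illustrative name rather than the repo's actual helper:

```python
def trim_empty(obj):
    """Drop keys whose value is None, "", [] or {} so empty GROBID
    metadata fields never reach the database or Kafka."""
    return {k: v for k, v in obj.items() if v not in (None, "", [], {})}

metadata = {
    "title": "Some Paper",
    "abstract": "",
    "authors": [],
    "doi": None,
    "year": 2019,
}
cleaned = trim_empty(metadata)
```

Note that legitimate falsy values like `0` survive, since the check compares against specific empty sentinels rather than truthiness.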
* start work on DB connector and minio client (Bryan Newbold, 2020-01-02; 2 files changed, -0/+200)
* have JsonLinePusher continue on JSON decode errors (but count) (Bryan Newbold, 2020-01-02; 1 file changed, -1/+5)
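The continue-but-count behavior can be sketched as below. This is a simplified stand-in for the real `JsonLinePusher`, showing only the error-handling shape: one malformed line should not kill a long-running worker, but it should show up in the counters.

```python
import json

def push_lines(lines):
    """Parse JSON lines, skipping (but counting) records that fail to
    decode, instead of crashing on the first bad line."""
    counts = {"pushed": 0, "decode-error": 0}
    records = []
    for line in lines:
        if not line.strip():
            continue
        try:
            record = json.loads(line)
        except json.JSONDecodeError:
            counts["decode-error"] += 1
            continue
        records.append(record)
        counts["pushed"] += 1
    return records, counts

records, counts = push_lines(['{"a": 1}', 'not json', '{"b": 2}'])
```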
* start work on persist workers and tool (Bryan Newbold, 2020-01-02; 3 files changed, -5/+336)
* yet more tweaks to ingest proposal (Bryan Newbold, 2020-01-02; 1 file changed, -3/+2)
* SQL docs update for diesel change (Bryan Newbold, 2020-01-02; 2 files changed, -0/+48)
* move SQL schema to diesel migration pattern (Bryan Newbold, 2020-01-02; 5 files changed, -70/+157)
* hadoop job log rename and update (Bryan Newbold, 2019-12-27; 1 file changed, -0/+25)
* update job log with pig runs (Bryan Newbold, 2019-12-26; 1 file changed, -0/+10)
* update TODO (Bryan Newbold, 2019-12-26; 1 file changed, -1/+7)
* kafka topics: compress api-datacite (Bryan Newbold, 2019-12-24; 1 file changed, -1/+1)
* basic arabesque2ingestrequest script (Bryan Newbold, 2019-12-24; 1 file changed, -0/+69)
* commit grobid_tool transform mode (Bryan Newbold, 2019-12-22; 1 file changed, -0/+27)
  Had some stale code on aitio with this change I forgot to commit. Oops!
* pig: first rev of join-cdx-sha1 script (Bryan Newbold, 2019-12-22; 3 files changed, -0/+91)
* pig: move count_lines helper to pighelper.py (Bryan Newbold, 2019-12-22; 3 files changed, -7/+6)
* refactor: use print(..., file=sys.stderr) (Bryan Newbold, 2019-12-18; 5 files changed, -32/+34)
  Should use logging soon, but this seems more idiomatic in the meanwhile.
* refactor: sort keys in JSON output (Bryan Newbold, 2019-12-18; 4 files changed, -6/+7)
  This makes debugging by tailing Kafka topics a lot more readable.
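Sorting keys is a one-argument change to `json.dumps`; the record below is made-up illustrative data. With `sort_keys=True`, every message on a topic has its fields in the same stable order, so tailing output lines up visually:

```python
import json

record = {"status": "success", "sha1hex": "abc123", "cdx": {"url": "http://example.com/paper.pdf"}}

# sort_keys=True emits fields alphabetically, regardless of dict insertion order
line = json.dumps(record, sort_keys=True)
```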
* refactor: improve argparse usage (Bryan Newbold, 2019-12-18; 5 files changed, -13/+27)
  Use ArgumentDefaultsHelpFormatter and add help messages to all sub-commands.
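The argparse pattern mentioned above looks roughly like this; the sub-command and option names here are illustrative, not the tool's real CLI. `ArgumentDefaultsHelpFormatter` makes `--help` show each option's default automatically:

```python
import argparse

def make_parser():
    parser = argparse.ArgumentParser(
        # show each option's default value in --help output
        formatter_class=argparse.ArgumentDefaultsHelpFormatter)
    sub = parser.add_subparsers(dest="command")
    extract = sub.add_parser("extract", help="run GROBID extraction over input")
    extract.add_argument("--batch-size", type=int, default=100,
                         help="records per batch")
    return parser

args = make_parser().parse_args(["extract", "--batch-size", "10"])
```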
* update ingest proposal source/link naming (Bryan Newbold, 2019-12-13; 2 files changed, -17/+27)
* sql schema change proposals (Bryan Newbold, 2019-12-11; 1 file changed, -0/+40)
* pdftotext proposal (Bryan Newbold, 2019-12-11; 1 file changed, -0/+123)
* add some GROBID metadata schema docs to SQL schema (Bryan Newbold, 2019-12-11; 1 file changed, -0/+11)
* update ingest proposal (Bryan Newbold, 2019-12-11; 1 file changed, -11/+145)
* fixes for large GROBID result skip (Bryan Newbold, 2019-12-02; 1 file changed, -2/+2)
* count empty blobs as 'failed' instead of crashing (Bryan Newbold, 2019-12-01; 1 file changed, -1/+2)
  Might be better to record an artificial kafka response instead?
* cleanup unused import (Bryan Newbold, 2019-12-01; 1 file changed, -1/+0)
* filter out very large GROBID XML bodies (Bryan Newbold, 2019-12-01; 1 file changed, -0/+6)
  This is to prevent Kafka MSG_SIZE_TOO_LARGE publish errors. We should probably bump this size limit in the future. Open problems: hand-coding the size number isn't good, and it needs to be updated in two places; we shouldn't filter for non-Kafka sinks; and there might still be a corner case where the JSON-encoded XML is larger than the XML character string, due to encoding (eg, for unicode characters).
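The filter amounts to a size check before publishing. A sketch under stated assumptions: the threshold, field names, and replacement record here are illustrative, not the repo's exact values. Measuring `len(json.dumps(body))` rather than `len(body)` addresses the corner case noted above, where JSON escaping inflates the encoded size past the raw XML length:

```python
import json

# Illustrative cap, chosen below Kafka's common ~1 MiB default message limit.
MAX_BODY_SIZE = 300_000

def filter_large_result(result):
    """Replace an oversized GROBID XML body with an error record, so the
    message can still be published without MSG_SIZE_TOO_LARGE failures."""
    body = result.get("tei_xml") or ""
    if len(json.dumps(body)) > MAX_BODY_SIZE:
        return {"status": "oversize", "error_msg": "body too large for Kafka"}
    return result

small = filter_large_result({"tei_xml": "<TEI>ok</TEI>", "status": "success"})
big = filter_large_result({"tei_xml": "x" * 400_000, "status": "success"})
```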
* updated re-GROBID job log entry (Bryan Newbold, 2019-11-15; 1 file changed, -0/+31)
* CI: make some jobs manual (Bryan Newbold, 2019-11-15; 2 files changed, -6/+12)
  The Scalding test is broken :( But we aren't even using that code much these days.
* handle wayback fetch redirect loop in ingest code (Bryan Newbold, 2019-11-14; 1 file changed, -2/+5)
* bump kafka max poll interval for consumers (Bryan Newbold, 2019-11-14; 1 file changed, -2/+2)
  The ingest worker keeps timing out at just over 5 minutes, so bump it just a bit.
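For context on the poll-interval bump: librdkafka-based consumers default `max.poll.interval.ms` to 300000 (5 minutes), and a batch that takes just over that gets the consumer kicked out of the group. A hedged config sketch, with placeholder broker address, group name, and an assumed new value of 6 minutes:

```python
# Consumer config in the librdkafka key style (as used by confluent-kafka).
consumer_conf = {
    "bootstrap.servers": "localhost:9092",  # placeholder broker address
    "group.id": "ingest-file-workers",      # illustrative group name
    "max.poll.interval.ms": 360000,         # bumped past the 300000 default
    "enable.auto.commit": False,
}
```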
* handle WaybackError during ingest (Bryan Newbold, 2019-11-14; 1 file changed, -0/+4)
* handle SPNv1 redirect loop (Bryan Newbold, 2019-11-14; 1 file changed, -0/+2)
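Both redirect-loop commits above amount to the same defensive pattern: remember URLs already visited along the chain and bail out on a repeat. A generic sketch, not the repo's actual fetch code; `get_redirect` stands in for whatever returns the next hop (or `None` at the end of the chain):

```python
def follow_redirects(start_url, get_redirect, max_hops=10):
    """Walk a redirect chain, raising if a URL repeats (a loop) or the
    chain exceeds max_hops."""
    seen = set()
    url = start_url
    while url is not None:
        if url in seen:
            raise ValueError(f"redirect loop at {url}")
        seen.add(url)
        if len(seen) > max_hops:
            raise ValueError("too many redirects")
        url = get_redirect(url)
    return len(seen)

chain = {"http://a.example/": "http://b.example/", "http://b.example/": None}
hops = follow_redirects("http://a.example/", chain.get)
```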
* handle SPNv2 polling timeout (Bryan Newbold, 2019-11-14; 1 file changed, -6/+10)
* update ingest-file batch size to 1 (Bryan Newbold, 2019-11-14; 2 files changed, -4/+4)
  Was defaulting to 100, which I think was resulting in lots of consumer group timeouts, resulting in UNKNOWN_MEMBER_ID errors. Will probably switch back to batches of 10 or so, but with multi-processing or some other concurrent dispatch/processing.
* start of hrmars.com ingest support (Bryan Newbold, 2019-11-14; 2 files changed, -2/+7)