Commit log: message (Author, Age; Files changed, Lines -removed/+added)
* ingest: don't 'want' non-PDF ingest (Bryan Newbold, 2020-04-30; 1 file, -0/+5)
* Merge branch 'bnewbold-worker-timeout' into 'master' (bnewbold, 2020-04-29; 3 files, -2/+58)

      sandcrawler worker timeouts

      See merge request webgroup/sandcrawler!27

  * timeouts: don't push through None error messages (Bryan Newbold, 2020-04-29; 1 file, -2/+2)

  * timeout message implementation for GROBID and ingest workers (Bryan Newbold, 2020-04-27; 2 files, -0/+18)

  * worker timeout wrapper, and use for kafka (Bryan Newbold, 2020-04-27; 1 file, -2/+40)
* NSQ for job task manager/scheduler (Bryan Newbold, 2020-04-28; 1 file, -0/+79)

* update MAG crawl notes (Bryan Newbold, 2020-04-28; 1 file, -0/+71)
* kafka: more rebalance notes (Bryan Newbold, 2020-04-24; 1 file, -1/+14)

* CI: add missing libsnappy-dev and libsodium-dev system packages (Bryan Newbold, 2020-04-24; 1 file, -1/+1)

      Whack-a-mole here...

* kafka: how to rebalance partitions between brokers (Bryan Newbold, 2020-04-24; 1 file, -0/+29)
* CI: add deadsnakes and python3.7 (Bryan Newbold, 2020-04-21; 1 file, -2/+3)

* fix KeyError in HTML PDF URL extraction (Bryan Newbold, 2020-04-17; 1 file, -1/+1)
* persist: only GROBID updates file_meta, not file-result (Bryan Newbold, 2020-04-16; 1 file, -1/+1)

      The hope here is to reduce deadlocks in production (on aitio). As
      context, we are only doing "updates" until the entire file_meta table
      is filled in with full metadata anyways; updates are wasteful of
      resources, and for most inserts we have seen the file before, so we
      should be doing "DO NOTHING" if the SHA1 is already in the table.
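The "DO NOTHING" behavior described above is PostgreSQL's conflict clause on insert; a hedged sketch, with table and column names that are assumptions rather than the repo's actual schema:

```sql
-- Hypothetical sketch: insert only if this SHA1 is not already present,
-- avoiding wasteful UPDATEs (and the lock contention they cause).
INSERT INTO file_meta (sha1hex, size_bytes, mimetype)
VALUES ('3f786850e387550fdab836ed7e6dc881de23001b', 12345, 'application/pdf')
ON CONFLICT (sha1hex) DO NOTHING;
```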
* batch/multiprocess for ZipfilePusher (Bryan Newbold, 2020-04-16; 2 files, -5/+26)

* update README for python3.7 (Bryan Newbold, 2020-04-15; 1 file, -1/+1)

* pipenv: update to python3.7 (Bryan Newbold, 2020-04-15; 2 files, -197/+202)

* COVID-19 chinese paper ingest (Bryan Newbold, 2020-04-15; 2 files, -0/+156)

* 2020-04 unpaywall ingest (in progress) (Bryan Newbold, 2020-04-15; 1 file, -0/+63)

* 2020-04 datacite ingest (in progress) (Bryan Newbold, 2020-04-15; 1 file, -0/+18)

* partial notes on S2 crawl ingest (Bryan Newbold, 2020-04-15; 1 file, -0/+35)

* ingest: quick hack to capture CNKI outlinks (Bryan Newbold, 2020-04-13; 1 file, -2/+9)

* html: attempt at CNKI href extraction (Bryan Newbold, 2020-04-13; 1 file, -0/+11)

* MAG import notes (Bryan Newbold, 2020-04-13; 1 file, -0/+13)

* unpaywall2ingestrequest: canonicalize URL (Bryan Newbold, 2020-04-07; 1 file, -1/+9)

* MAG 2020-03-04 ingest notes to date (Bryan Newbold, 2020-04-06; 1 file, -0/+395)

* more monitoring queries (Bryan Newbold, 2020-03-30; 1 file, -5/+29)

* unpaywall ingest notes update (Bryan Newbold, 2020-03-30; 1 file, -0/+138)

* ia: set User-Agent for replay fetch from wayback (Bryan Newbold, 2020-03-29; 1 file, -0/+5)

      Did this for all the other "client" helpers, but forgot to for wayback
      replay. Was starting to get "445" errors from wayback.

* ingest: block another large domain (and DOI prefix) (Bryan Newbold, 2020-03-27; 1 file, -0/+2)

* ingest: better spn2 pending error code (Bryan Newbold, 2020-03-27; 1 file, -0/+2)

* ingest: eurosurveillance PDF parser (Bryan Newbold, 2020-03-25; 1 file, -0/+11)
* ia: more conservative use of clean_url() (Bryan Newbold, 2020-03-24; 1 file, -3/+5)

      Fixes "AttributeError: 'NoneType' object has no attribute 'strip'",
      seen in production on the lookup_resource code path.

* ingest: clean_url() in more places (Bryan Newbold, 2020-03-23; 3 files, -1/+6)

      Some 'cdx-error' results were due to URLs with ':' after the hostname
      or trailing newline ("\n") characters in the URL. This attempts to
      work around this category of error.
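A stdlib-only stand-in for the two failure modes named above (trailing whitespace and an empty-port ':' after the hostname); this is a simplified sketch, not the repo's actual clean_url(), which relies on the urlcanon library added elsewhere in this log:

```python
from urllib.parse import urlsplit, urlunsplit

def clean_url(url):
    # Drop stray whitespace/newline characters around the URL.
    url = url.strip()
    parts = urlsplit(url)
    # "example.com:" (empty port) -> "example.com"
    netloc = parts.netloc.rstrip(":")
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```

For example, `clean_url("http://example.com:/path\n")` yields a URL that CDX lookups will accept.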
* Merge branch 'martin-pubmed-ftp-topic-docs' into 'master' (bnewbold, 2020-03-20; 1 file, -1/+9)

      topics: add pubmed ftp topic

      See merge request webgroup/sandcrawler!26

  * topics: add pubmed ftp topic (Martin Czygan, 2020-03-12; 1 file, -1/+9)

        PubmedFTPWorker replaced OAI recently. This documents the new topic.

* skip-db option also for worker (Bryan Newbold, 2020-03-19; 1 file, -0/+4)

* persist grobid: add option to skip S3 upload (Bryan Newbold, 2020-03-19; 2 files, -7/+14)

      Motivation for this is that the current S3 target (minio) is
      overloaded, with too many files on a single partition (80 million+).
      Going to look into seaweedfs and other options, but for now stopping
      minio persist. Data is all stored in kafka anyways.

* ingest: log every URL (from ia code side) (Bryan Newbold, 2020-03-18; 1 file, -0/+1)

* implement (unused) force_get flag for SPN2 (Bryan Newbold, 2020-03-18; 2 files, -4/+19)

      I hoped this feature would make it possible to crawl journals.lww.com
      PDFs, because the token URLs work with `wget`, but it still doesn't
      seem to work. Maybe because of user agent? Anyways, this feature might
      be useful for crawling efficiency, so adding to master.

* unpaywall large ingest notes (Bryan Newbold, 2020-03-17; 1 file, -0/+10)

* make monitoring commands ingest_request local, not ingest_file_result (Bryan Newbold, 2020-03-17; 1 file, -2/+2)

* work around local redirect (resource.location) (Bryan Newbold, 2020-03-17; 1 file, -1/+6)

      Some redirects are host-local. This patch crudely detects this
      (full-path redirects starting with "/" only), and appends the URL to
      the host of the original URL.
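The crude detection described above can be sketched as follows (function name hypothetical; stdlib urljoin does the host-joining):

```python
from urllib.parse import urljoin

def resolve_location(original_url, location):
    # Host-local redirects come back as a bare path like "/view/123.pdf".
    # Per the commit, only full-path redirects (starting with "/") are
    # detected; protocol-relative and relative paths pass through unchanged.
    if location.startswith("/"):
        return urljoin(original_url, location)
    return location
```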
* Merge branch 'martin-abstract-class-process' into 'master' (bnewbold, 2020-03-12; 1 file, -0/+6)

      workers: add explicit process to base class

      See merge request webgroup/sandcrawler!25

  * workers: add explicit process to base class (Martin Czygan, 2020-03-12; 1 file, -0/+6)

        As per https://docs.python.org/3/library/exceptions.html#NotImplementedError

        > In user defined base classes, abstract methods should raise this
        > exception when they require derived classes to override the
        > method [...].
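The convention quoted from the Python docs looks like this in a toy base/derived pair (class names are illustrative, not the repo's actual worker classes):

```python
class SandcrawlerWorker:
    """Base class; concrete workers must override process()."""

    def process(self, record):
        # Explicit rather than implicit: calling the base method directly
        # is a programming error, signaled per the Python docs' convention.
        raise NotImplementedError("subclasses must override process()")

class ToyWorker(SandcrawlerWorker):
    def process(self, record):
        return {"status": "success", "key": record}
```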
* pipenv: work around zipp issue (Bryan Newbold, 2020-03-10; 2 files, -4/+16)

* pipenv: add urlcanon; update Pipfile.lock (Bryan Newbold, 2020-03-10; 2 files, -209/+221)

* DOI prefix example queries (SQL) (Bryan Newbold, 2020-03-10; 1 file, -3/+17)

* use local env in python scripts (Bryan Newbold, 2020-03-10; 3 files, -3/+3)

      Without this correct/canonical shebang invocation, virtualenvs (pipenv)
      don't work.
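The canonical shebang being described is the `env` indirection, which resolves python3 from $PATH so a script run inside a pipenv/virtualenv picks up that environment's interpreter rather than the system one. A sketch of the change:

```diff
-#!/usr/bin/python3      # hardcoded system interpreter; ignores the virtualenv
+#!/usr/bin/env python3  # resolves python3 from $PATH, so pipenv works
```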
* url cleaning (canonicalization) for ingest base_url (Bryan Newbold, 2020-03-10; 4 files, -4/+21)

      As mentioned in a comment, this first version does not re-write the
      URL in the `base_url` field. If we did so, then ingest_request rows
      would not SQL JOIN to ingest_file_result rows, which we wouldn't want.

      In the future, behaviour should maybe be to refuse to process URLs
      that aren't clean (eg, if base_url != clean_url(base_url)) and return
      a 'bad-url' status or something. Then we would only accept clean URLs
      in both tables, and clear out all old/bad URLs with a cleanup script.

* ingest_file: --no-spn2 flag for single command (Bryan Newbold, 2020-03-10; 1 file, -1/+6)