path: root/python
Commit message · Author · Date · Files · Lines
* catch ApiValueError in some generic API calls · Bryan Newbold · 2020-03-25 · 2 · -2/+14
    The motivation for this change is to handle bogus revision IDs in URLs, which were
    causing 500 errors rather than 400 errors. For example:
    https://qa.fatcat.wiki/file/rev/5d5d5162-b676-4f0a-968f-e19dadeaf96e%2B2019-11-27%2B13:49:51%2B0%2B6
    I have no idea where these URLs are actually coming from, but they should be 4xx,
    not 5xx. Investigating made me realize there is a whole category of ApiValueError
    exceptions we were not catching and should have been.
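
    A minimal sketch of the pattern this commit describes: catch client-side validation
    errors and turn them into a 400 response rather than letting them surface as 500s.
    The Flask wiring and the exact import path of ApiValueError are assumptions for
    illustration, not the actual fatcat view code:

        from flask import abort
        from fatcat_openapi_client import ApiValueError
        from fatcat_openapi_client.rest import ApiException

        def lookup_file_revision(api, rev_id):
            try:
                # a malformed UUID in the URL raises ApiValueError on the client side
                return api.get_file_revision(rev_id)
            except ApiValueError:
                abort(400)
            except ApiException as ae:
                abort(ae.status)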
* cleanup unused code in fatcat_harvest.py · Bryan Newbold · 2020-03-23 · 1 · -7/+0
* jalc: avoid meaningless pages values · Bryan Newbold · 2020-03-23 · 1 · -4/+8
* datacite: add year sanity restrictions · bnewbold · 2020-03-23 · 1 · -0/+7
    Example of entities with bogus years:
    https://fatcat.wiki/release/search?q=doi_registrar%3Adatacite+year%3A%3E2100
    We can do a clean-up task later, but first we need to prevent creation of new bad
    metadata.
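
    A hedged sketch of what a year sanity restriction like this typically looks like;
    the exact bounds and helper name are illustrative, not the importer's actual code:

        import datetime

        def clean_year(year):
            """Return the year if plausible for a publication date, otherwise None."""
            if year is None:
                return None
            current_year = datetime.date.today().year
            # reject obviously bogus values such as 0, 9999, or years far in the future
            if year < 1000 or year > current_year + 5:
                return None
            return year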
* pubmed: handle multiple ReferenceList · Bryan Newbold · 2020-03-20 · 3 · -1/+222
    This resolves a situation noticed in prod where we were only importing/updating a
    single reference per article. Includes a regression test.
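
    A minimal sketch of the fix described here: iterate over every <ReferenceList>
    element rather than only the first one. The pubmed importer is BeautifulSoup-based
    (cf. the kafka-bs4-import branch below); tag names follow the PubMed XML schema, but
    the helper itself is illustrative:

        def extract_refs(article_soup):
            refs = []
            # find_all() visits every ReferenceList, not just the first match
            for ref_list in article_soup.find_all("ReferenceList"):
                for ref in ref_list.find_all("Reference"):
                    citation = ref.find("Citation")
                    refs.append(citation.get_text() if citation else None)
            return refs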
* pubmed: update many more metadata fields · Bryan Newbold · 2020-03-19 · 1 · -0/+22
    In particular, with daily updates, in most cases the DOI will be registered first,
    then the entity updated with the PMID when that is available. Often the pubmed
    metadata will be more complete, with abstracts etc., and we'll want those
    improvements.
* crossref: skip stub OUP title · Bryan Newbold · 2020-03-19 · 1 · -0/+8
    It seems like OUP pre-registers DOIs with a placeholder title, then updates the
    Crossref metadata when the paper is actually published. We should wait until the
    real title is available before creating an entity.
* ingest: always try some lancet journals · Bryan Newbold · 2020-03-19 · 1 · -0/+3
* container lookup: link to issn portal search · Martin Czygan · 2020-03-18 · 1 · -4/+3
    Example: for https://fatcat.wiki/container/lookup?issnl=2007-1248, the linked
    https://portal.issn.org/2007-1248 yields a "page not found", while the search URL
    yields results:
    https://portal.issn.org/api/search?search[]=MUST=allissnbis=2007-1248
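
    A small sketch of the link construction implied by the example above (in practice
    likely a template change); the helper name is illustrative:

        def issn_portal_search_url(issnl):
            # the search endpoint returns results even when the direct record URL 404s
            return "https://portal.issn.org/api/search?search[]=MUST=allissnbis=" + issnl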
* update front-page stats · Bryan Newbold · 2020-03-17 · 1 · -3/+3
* Merge branch 'martin-kafka-bs4-import' into 'master' · Martin Czygan · 2020-03-10 · 10 · -43/+428
    pubmed and arxiv harvest preparations
    See merge request webgroup/fatcat!28
| * common: use smaller batch size since XML parsing may be slow · Martin Czygan · 2020-03-10 · 1 · -1/+1
    Addresses the Kafka tradeoff between long and short time-outs: shorter time-outs
    would facilitate "consumer group re-balances and other consumer group state changes
    [...] in a reasonable human time-frame."
| * pubmed: log to stderr · Martin Czygan · 2020-03-10 · 1 · -1/+1
| * pubmed: move mapping generation out of fetch_date · Martin Czygan · 2020-03-10 · 2 · -7/+10
    * fetch_date will fail on missing mapping
    * adjust tests (test will require access to pubmed ftp)
| * harvest: fix imports from HarvestPubmedWorker cleanup · Martin Czygan · 2020-03-09 · 2 · -4/+4
| * pubmed: citations is a bit more precise · Martin Czygan · 2020-03-09 · 1 · -1/+1
    > Each day, NLM produces update files that include new, revised and deleted
    > citations. -- ftp://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles/README.txt
| * pubmed: we sync from FTP · Martin Czygan · 2020-03-09 · 1 · -1/+1
| * oaipmh: HarvestPubmedWorker obsoleted by PubmedFTPWorker · Martin Czygan · 2020-03-09 · 1 · -34/+0
| * fatcat_import: address potential hanging if stdin is empty · Martin Czygan · 2020-03-09 · 1 · -0/+2
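
    A hedged guess at the shape of this small fix: bail out early instead of blocking on
    a read when nothing is piped to stdin. The exact check in fatcat_import.py may
    differ; the function and variable names here are illustrative:

        import sys

        def maybe_exit_on_empty_stdin(input_file):
            # if stdin is an interactive terminal with nothing piped in, reading would
            # block ("hang") waiting for input, so exit with an error instead
            if input_file is sys.stdin and sys.stdin.isatty():
                print("expected input on stdin, exiting", file=sys.stderr)
                sys.exit(-1)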
| * more pubmed adjustments · Martin Czygan · 2020-02-22 · 6 · -71/+197
    * regenerate map in continuous mode
    * add tests
| * pubmed ftp: fix url · Martin Czygan · 2020-02-19 · 1 · -4/+6
| * pubmed ftp harvest and KafkaBs4XmlPusher · Martin Czygan · 2020-02-19 · 6 · -21/+307
    * add PubmedFTPWorker
    * utils are currently stored alongside pubmed (e.g. ftpretr, xmlstream) but may live
      elsewhere, as they are more generic
    * add KafkaBs4XmlPusher
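
    A rough, illustrative sketch of how these pieces plausibly fit together: fetch an
    update file over FTP, split it into individual XML records, and push each record to
    Kafka. The helper names ftpretr and xmlstream come from the commit message, but
    their real signatures, and all of the details below, are assumptions:

        import ftplib
        import gzip
        import tempfile

        from bs4 import BeautifulSoup

        def fetch_update_file(path, host="ftp.ncbi.nlm.nih.gov"):
            """Download one pubmed update file to a local temp file; return its path."""
            ftp = ftplib.FTP(host)
            ftp.login()
            tmp = tempfile.NamedTemporaryFile(delete=False, suffix=".xml.gz")
            ftp.retrbinary("RETR " + path, tmp.write)
            tmp.close()
            ftp.quit()
            return tmp.name

        def iterate_articles(gz_path):
            """Parse an update file and yield one serialized <PubmedArticle> at a time."""
            with gzip.open(gz_path, "rb") as f:
                soup = BeautifulSoup(f, "xml")
                for article in soup.find_all("PubmedArticle"):
                    yield str(article)

        # a pusher would then produce each record to a Kafka topic, roughly:
        # for record in iterate_articles(local_path):
        #     producer.produce(topic, record.encode("utf-8"))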
* | add --force-crawl flag to ingest tool · Bryan Newbold · 2020-03-02 · 1 · -0/+5
* | pipenv: lock authlib to less than v0.13; rebuild lock file · Bryan Newbold · 2020-02-28 · 2 · -112/+109
* | Merge branch 'bnewbold-elastic-v03b' · Bryan Newbold · 2020-02-26 · 10 · -190/+465
| * | improve is_oa flag accuracy · Bryan Newbold · 2020-02-26 · 1 · -8/+4
    Particularly, the ezb=green match seems mostly incorrect. Note that a release with a
    pmcid assigned could still be within an embargo window?
| * | fix fatcat_transform state filters · Bryan Newbold · 2020-02-26 · 1 · -4/+4
| * | bulk ES transform: skip non-active entities · Bryan Newbold · 2020-02-26 · 1 · -0/+8
| * | ES container last tweaks · Bryan Newbold · 2020-02-26 · 1 · -0/+3
| * | ES release: last minor tweaks · Bryan Newbold · 2020-02-26 · 1 · -2/+2
| * | ES updates: fix tests to accept archive.org in host/domain · Bryan Newbold · 2020-02-14 · 1 · -2/+3
| * | ES files: don't remove archive.org domains/hosts · Bryan Newbold · 2020-02-07 · 1 · -5/+0
| * | ES releases: host/domain fixes · Bryan Newbold · 2020-01-31 · 2 · -2/+5
| * | pipenv: lock zipp version to work around python3.6 requirement · Bryan Newbold · 2020-01-30 · 2 · -7/+20
| * | fix release es transform missing 'issue' · Bryan Newbold · 2020-01-30 · 1 · -0/+1
| * | add upper-case work-around from kibana map join · Bryan Newbold · 2020-01-30 · 1 · -0/+1
| * | tweak file ES archive.org domain tracking · Bryan Newbold · 2020-01-30 · 1 · -0/+6
| * | implement host+domain parsing for file ES transform · Bryan Newbold · 2020-01-30 · 2 · -13/+8
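
    A hedged sketch of host/domain parsing with tldextract (added to the Pipfile in the
    commit just below); the output names are illustrative, not the exact file ES schema:

        import tldextract

        def url_host_domain(url):
            """Split a URL into its full hostname and its registered domain."""
            parts = tldextract.extract(url)
            host = ".".join(p for p in (parts.subdomain, parts.domain, parts.suffix) if p)
            domain = ".".join(p for p in (parts.domain, parts.suffix) if p)
            return host, domain

        # url_host_domain("https://web.archive.org/web/2020/example")
        #   -> ("web.archive.org", "archive.org")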
| * | pipenv: add tldextract (url parser) and update deps · Bryan Newbold · 2020-01-30 · 2 · -136/+159
| * | fix ES file schema plural field names · Bryan Newbold · 2020-01-29 · 2 · -5/+4
| * | new biblio-only general search · Bryan Newbold · 2020-01-29 · 1 · -2/+2
    The other fields are now copied (via "copy_to") into the merged biblio field.
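
    An illustrative Elasticsearch mapping fragment showing the "copy_to" pattern this
    commit relies on; the specific field names are placeholders, not the actual v03b
    release schema:

        release_mapping_fragment = {
            "properties": {
                "biblio": {"type": "text"},
                # individual fields remain queryable, but are also merged into "biblio"
                "title": {"type": "text", "copy_to": "biblio"},
                "contrib_names": {"type": "text", "copy_to": "biblio"},
                "container_name": {"type": "text", "copy_to": "biblio"},
            }
        }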
| * | elastic schema fixes · Bryan Newbold · 2020-01-29 · 1 · -0/+5
| * | add country to v03b release schema · Bryan Newbold · 2020-01-29 · 1 · -0/+2
| * | actually implement changelog transform · Bryan Newbold · 2020-01-29 · 2 · -18/+68
| * | fix some transform bugs, add some tests · Bryan Newbold · 2020-01-29 · 6 · -13/+48
| * | ES release schema updates · Bryan Newbold · 2020-01-29 · 1 · -5/+76
| * | container ES schema changes · Bryan Newbold · 2020-01-29 · 1 · -16/+18
| * | first implementation of ES file schema · Bryan Newbold · 2020-01-29 · 3 · -3/+69
    Includes a trivial test and transform, but not any workers or doc updates.
* | | Merge branch 'bnewbold-more-ingest' into 'master' · bnewbold · 2020-02-25 · 1 · -1/+37
    entity worker: ingest more Datacite releases; filter some out
    See merge request webgroup/fatcat!29
| * | | entity worker: ingest more releases · Bryan Newbold · 2020-02-22 · 1 · -1/+37
    If the release is a dataset or image, don't make a PDF ingest request. If the
    release has a Datacite DOI and its release_type is a "document" type, crawl
    regardless of is_oa detection. This is mostly to crawl repositories (institutional
    or subject).
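
    A hedged sketch of the filtering logic described above; the helper name, the
    release_type groupings, and the registrar check are assumptions for illustration,
    not the worker's actual code:

        DOCUMENT_TYPES = {"article-journal", "paper-conference", "report", "thesis"}

        def want_pdf_ingest(release):
            # datasets and images are not documents; a PDF ingest request is pointless
            if release.release_type in ("dataset", "graphic", "figure"):
                return False
            is_datacite = (release.extra or {}).get("datacite") is not None
            if release.ext_ids.doi and is_datacite and release.release_type in DOCUMENT_TYPES:
                # Datacite "document" DOIs: crawl even when is_oa heuristics say no,
                # mostly to cover institutional and subject repositories
                return True
            return bool(release.is_oa)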