catch ApiValueError in some generic API calls
See merge request webgroup/fatcat!35
The motivation for this change is to handle bogus revision IDs in URLs,
which were causing 500 errors rather than 400 errors. Eg:
https://qa.fatcat.wiki/file/rev/5d5d5162-b676-4f0a-968f-e19dadeaf96e%2B2019-11-27%2B13:49:51%2B0%2B6
I have no idea where these URLs are actually coming from, but they
should be 4xx not 5xx.
Investigating made me realize there is a whole category of ApiValueError
exceptions we were not catching and should have been.
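A minimal sketch of the pattern, assuming a Flask view and the generated
fatcat_openapi_client; the route, client setup, and method name below are
illustrative, not the actual fatcat_web code:

    from flask import Flask, abort
    from fatcat_openapi_client import ApiValueError, DefaultApi

    app = Flask(__name__)
    api = DefaultApi()  # assumed setup; fatcat_web configures its client centrally

    @app.route("/file/rev/<revision_id>")
    def file_revision_view(revision_id):
        try:
            rev = api.get_file_revision(revision_id)  # assumed client method
        except ApiValueError:
            # a malformed revision ID in the URL is a client error: 400, not 500
            abort(400)
        return str(rev)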
improve citeproc/CSL web interface
See merge request webgroup/fatcat!36
This tries to show the citeproc (bibtex, MLA, CSL-JSON) options for
more releases, and not show the links when they would break.
The primary motivation here is to work around two exceptions being
thrown in prod every day (according to sentry):
KeyError: 'role'
ValueError: CLS requries some surname (family name)
I'm guessing these are mostly coming from crawlers following the
citeproc links on release landing pages.
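A rough sketch of the kind of guard this implies (a hypothetical helper
operating on a plain dict, not the actual fatcat_web code): only offer the
citeproc/CSL links when at least one author contributor has a surname, so the
error paths above are never reached.

    def can_show_citeproc_links(release: dict) -> bool:
        # require at least one author with a family name before linking to
        # citeproc/CSL output
        for contrib in release.get("contribs") or []:
            if contrib.get("role") == "author" and contrib.get("surname"):
                return True
        return False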
Works around a bug in production:
AttributeError: 'NoneType' object has no attribute 'replace'
(datacite.py:724)
NOTE: there are no tests for this code path
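A sketch of the guard, assuming the failing call is a string-cleanup step
applied to a DataCite field that can be None (the function name and
normalization below are illustrative):

    def clean_str(value):
        # tolerate missing values instead of calling .replace() on None
        if value is None:
            return None
        return value.replace("\u00a0", " ").strip()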
notes: pubmed backfill (03/2020)
See merge request webgroup/fatcat!34
datacite: add year sanity restrictions
See merge request webgroup/fatcat!33
Example of entities with bogus years:
https://fatcat.wiki/release/search?q=doi_registrar%3Adatacite+year%3A%3E2100
We can do a clean-up task, but first need to prevent creation of new bad
metadata.
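A minimal sketch of such a check; the bounds below are illustrative stand-ins,
not the importer's actual values:

    import datetime

    def sane_release_year(year):
        # drop years that are implausibly old or in the far future
        if year is None:
            return None
        this_year = datetime.date.today().year
        if year < 1000 or year > this_year + 5:
            return None
        return year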
This resolves a situation noticed in prod where we were only
importing/updating a single reference per article.
Includes a regression test.
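The rough shape of such a regression test (fixture and method names here are
hypothetical): parse a record known to carry many references and assert that
more than one of them ends up on the release.

    def test_refs_not_truncated(importer, record_with_many_refs):
        release = importer.parse_record(record_with_many_refs)  # hypothetical API
        assert len(release.refs) > 1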
In particular, with daily updates, in most cases the DOI will be
registered first and the entity then updated with a PMID when that
becomes available. Often the pubmed metadata will be more complete,
with abstracts etc., and we'll want those improvements.
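A sketch of that update policy as a hypothetical helper over entity-like
objects: prefer updating the existing, DOI-registered release when the PubMed
record adds a PMID or an abstract it lacks.

    def should_update(existing, incoming) -> bool:
        # assumed attribute names; the real importer logic is more involved
        if incoming.ext_ids.pmid and not existing.ext_ids.pmid:
            return True
        if incoming.abstracts and not existing.abstracts:
            return True
        return False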
It seems like OUP pre-registers DOIs with this place-holder title, then
updates the Crossref metadata when the paper is actually published. We
should wait until the real title is available before creating an entity.
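A sketch of a title filter; the actual place-holder string is not reproduced
here, and the blocklist below is purely illustrative:

    PLACEHOLDER_TITLES = {
        "",               # empty titles
        "[placeholder]",  # illustrative entry only
    }

    def usable_title(title) -> bool:
        # skip records whose title is missing or a known registrar place-holder
        if not title:
            return False
        return title.strip().lower() not in PLACEHOLDER_TITLES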
container lookup: link to issn portal search
See merge request webgroup/fatcat!32
Example:
https://fatcat.wiki/container/lookup?issnl=2007-1248 - the linked
https://portal.issn.org/2007-1248 yields a "page not found", while
search yields results:
https://portal.issn.org/api/search?search[]=MUST=allissnbis=2007-1248
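A sketch of building that search link from the ISSN-L string available at
lookup time:

    def issn_portal_search_url(issnl: str) -> str:
        # mirrors the working search URL format shown above
        return "https://portal.issn.org/api/search?search[]=MUST=allissnbis=" + issnl

    # issn_portal_search_url("2007-1248") reproduces the search URL above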
update front-page stats
See merge request webgroup/fatcat!31
pubmed and arxiv harvest preparations
See merge request webgroup/fatcat!28
Address the Kafka tradeoff between long and short time-outs. Shorter
time-outs would facilitate
> consumer group re-balances and other consumer group state changes
[...] in a reasonable human time-frame.
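Illustrative consumer settings for that tradeoff, using confluent-kafka's
dict-style configuration; the values and names are examples, not the ones
chosen in this change:

    consumer_conf = {
        "bootstrap.servers": "localhost:9092",  # assumed broker address
        "group.id": "fatcat-harvest",           # hypothetical consumer group
        "session.timeout.ms": 60000,            # shorter => faster group re-balances
        "max.poll.interval.ms": 3600000,        # longer => tolerate slow batches
    }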
* fetch_date will fail on a missing mapping
* adjust tests (tests will require access to the PubMed FTP server)
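A sketch of the stricter fetch_date behaviour; the signature is an assumption,
not the worker's real one:

    def fetch_date(date, date_file_map):
        # fail loudly when a date has no update-file mapping instead of
        # silently skipping it
        key = date.strftime("%Y-%m-%d")
        if key not in date_file_map:
            raise KeyError(f"no PubMed update file mapped for {key}")
        return date_file_map[key]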
> Each day, NLM produces update files that include new, revised and
deleted citations. -- ftp://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles/README.txt
* regenerate map in continuous mode
* add tests
* add PubmedFTPWorker
* utils are currently stored alongside pubmed (e.g. ftpretr, xmlstream)
but may live elsewhere, as they are more generic
* add KafkaBs4XmlPusher
In particular, the ezb=green match seems mostly incorrect.
Note that a release with a pmcid assigned could still be within an embargo window?
- smaller batch sizes to prevent esbulk errors
- file transform/index
Because DOIs are pseudo-structured (prefix, and often structure within
the publisher-controlled area), I suspect we will in fact want to do
analytics over these strings.
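An illustrative mapping fragment: keeping the DOI (and optionally its prefix)
as keyword fields makes exact-string analytics possible; the field names are
assumptions, not the schema's actual names.

    doi_mapping_fragment = {
        "doi": {"type": "keyword"},
        "doi_prefix": {"type": "keyword"},
    }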
Eg, for fast "unique count"
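An illustrative Elasticsearch query enabled by keyword-typed identifiers: a
cardinality ("unique count") aggregation over an assumed "doi" field.

    unique_doi_count = {
        "size": 0,
        "aggs": {
            "unique_dois": {"cardinality": {"field": "doi"}},
        },
    }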