One of these (in the ingest importer pipeline) is an actual bug; the others just change the syntax to be more explicit/conservative. The ingest importer bug seems to have resulted in some bad file match imports; the scale of impact is unknown.

Until reviewing I didn't realize we were even doing this. Hopefully it has not impacted too many imports: almost all ingests use an external identifier, so only those with identifiers not in fatcat (for whatever reason) are affected.

Up to now, we expected the description to be a string or a list. Add handling for int as well. First appeared: Apr 22 19:58:39.

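The handling described above can be sketched as a small normalizer; the function name and the exact join behavior for lists are assumptions for illustration, not the actual fatcat code:

```python
def clean_description(value):
    """Normalize a DataCite description field, which may arrive as a
    string, a list of strings, or (per the bug above) a bare int.
    Hypothetical helper, not the real fatcat function."""
    if value is None:
        return None
    if isinstance(value, list):
        # Join list entries, skipping empty ones.
        return " ".join(str(v) for v in value if v) or None
    # Covers both str and int (and anything else str()-able).
    return str(value)
```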
… into 'master'
datacite: fix a raw name constraint violation
See merge request webgroup/fatcat!47

It was possible for contribs to be added with no raw name; one example would be a name consisting of whitespace only. This fix adds a final check for that case.

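The final check described above amounts to rejecting missing or whitespace-only names; a minimal sketch (function name is hypothetical):

```python
def has_valid_raw_name(raw_name):
    """Return True only when a contributor's raw name is present and
    contains at least one non-whitespace character (mirrors the fix
    described above; not the actual fatcat code)."""
    return bool(raw_name and raw_name.strip())
```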
The API fetch update may be needed for old changelog entries in the Kafka feed.

py37 cleanups
See merge request webgroup/fatcat!44

derive changelog worker from release worker
See merge request webgroup/fatcat!43

Early versions of changelog entries may not have all the fields required for the current transform.

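The defensive pattern this implies is to read optional fields with defaults rather than direct key access; a sketch with illustrative field names (not necessarily the real changelog schema):

```python
def transform_changelog_entry(entry):
    """Tolerate old changelog entries that lack newer fields by using
    dict.get() with defaults. Field names here are illustrative."""
    return {
        "index": entry.get("index"),
        "editgroup_id": entry.get("editgroup_id"),
        "timestamp": entry.get("timestamp"),
    }
```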
No partial docs (e.g. abstracts), no overly generic components or entries, and no HTML blogs.

According to release_rev.release_type, we have 29 values:

    fatcat_prod=# select release_type, count(release_type) from release_rev group by release_type;
       release_type    |   count
    -------------------+-----------
     abstract          |      2264
     article           |   6371076
     article-journal   | 101083841
     article-newspaper |     17062
     book              |   1676941
     chapter           |  13914854
     component         |     58990
     dataset           |   6860325
     editorial         |    133573
     entry             |   1628487
     graphic           |   1809471
     interview         |     19898
     legal_case        |      3581
     legislation       |      1626
     letter            |    275119
     paper-conference  |   6074669
     peer_review       |     30581
     post              |    245807
     post-weblog       |       135
     report            |   1010699
     retraction        |      1292
     review-book       |     96219
     software          |       316
     song              |     24027
     speech            |      4263
     standard          |    312364
     stub              |   1036813
     thesis            |    414397
                       |         0
    (29 rows)

beautifulsoup XML parsing: .string vs. .get_text()
See merge request webgroup/fatcat!40

The primary motivation for this change is that fatcat *requires* a non-empty title for each release entity. Pubmed/Medline occasionally indexes just a VernacularTitle with no ArticleTitle for foreign publications, and currently those records don't end up in fatcat at all.

See previous pubmed commit for details.

Yikes! Apparently when a tag has child tags, .string returns None instead of all the strings, while .get_text() returns everything:

  https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text
  https://www.crummy.com/software/BeautifulSoup/bs4/doc/#string

I've left things like identifiers as .string, since there we expect only a single string inside.

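The difference can be demonstrated directly; the tag names below are illustrative, not taken from the actual Pubmed DTD handling in this commit:

```python
from bs4 import BeautifulSoup

# A tag with a child tag: .string returns None, .get_text() concatenates
# all the contained strings.
mixed = BeautifulSoup(
    "<abstract>Foo <i>bar</i> baz</abstract>", "html.parser"
).find("abstract")
print(mixed.string)      # None
print(mixed.get_text())  # Foo bar baz

# A tag containing a single string (e.g. an identifier): .string is safe.
pmid = BeautifulSoup("<pmid>12345</pmid>", "html.parser").find("pmid")
print(pmid.string)       # 12345
```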
This goes against what the API docs recommend, but we are currently far behind on updates and need to catch up. Other than what the docs say, this seems consistent with the behavior we want.

These are journal/publisher patterns which we suspect to actually be OA, based on the large quantity of papers that crawl successfully. The better long-term solution will be to flag containers in some way as OA (or "should crawl"), but this is a good short-term solution.

Correct spelling mistakes

improve citeproc/CSL web interface
See merge request webgroup/fatcat!36

This tries to show the citeproc (BibTeX, MLA, CSL-JSON) options for more releases, and to not show the links when they would break. The primary motivation here is to work around two exceptions being thrown in prod every day (according to Sentry):

  KeyError: 'role'
  ValueError: CLS requries some surname (family name)

I'm guessing these are mostly coming from crawlers following the citeproc links on release landing pages.

Works around a bug in production:

  AttributeError: 'NoneType' object has no attribute 'replace'
  (datacite.py:724)

NOTE: there are no tests for this code path.

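The workaround boils down to guarding the field before calling string methods on it; a minimal sketch (this log doesn't show the actual field at datacite.py:724, so the helper and its cleaning steps are hypothetical):

```python
def clean_string_field(value):
    """Return a cleaned string, or None when the field is absent,
    instead of letting value.replace() raise AttributeError on None."""
    if value is None:
        return None
    return value.replace("\n", " ").strip() or None
```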
Example of entities with bogus years: https://fatcat.wiki/release/search?q=doi_registrar%3Adatacite+year%3A%3E2100. We can run a clean-up task later, but first we need to prevent creation of new bad metadata.

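A sanity filter along these lines would reject implausible years at import time; the exact bounds here (a few years into the future, nothing before year 1000) are assumptions for illustration, not the thresholds the commit chose:

```python
import datetime

def sane_release_year(year):
    """Hypothetical sanity filter for release years: drop values far in
    the future (like the DataCite records with year > 2100) or
    implausibly old, rather than storing them."""
    if year is None:
        return None
    current = datetime.date.today().year
    if year > current + 5 or year < 1000:
        return None
    return year
```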
This resolves a situation noticed in prod where we were only importing/updating a single reference per article. Includes a regression test.

In particular, with daily updates, in most cases the DOI will be registered first and the entity updated with a PMID when that becomes available. Often the pubmed metadata will be more complete, with abstracts etc., and we'll want those improvements.

It seems like OUP pre-registers DOIs with this placeholder title, then updates the Crossref metadata when the paper is actually published. We should wait until the real title is available before creating an entity.

pubmed and arxiv harvest preparations
See merge request webgroup/fatcat!28

Address the Kafka tradeoff between long and short time-outs. Shorter time-outs would facilitate

> consumer group re-balances and other consumer group state changes [...] in a reasonable human time-frame.

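For context, this tradeoff lives in the consumer configuration; a sketch using librdkafka/confluent-kafka setting names, with illustrative values rather than the ones this commit chose:

```python
# Illustrative consumer settings for the timeout tradeoff described above.
# Setting names follow librdkafka; the values are examples only.
consumer_conf = {
    "bootstrap.servers": "localhost:9092",
    "group.id": "fatcat-workers",
    # A shorter session timeout means dead consumers are detected sooner,
    # so group re-balances happen "in a reasonable human time-frame" --
    # at the cost of spurious re-balances when a worker is merely slow.
    "session.timeout.ms": 30000,
    # Upper bound on time between poll() calls before the consumer is
    # considered failed and its partitions are re-assigned.
    "max.poll.interval.ms": 300000,
}
```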
* fetch_date will fail on a missing mapping
* adjust tests (tests will require access to the PubMed FTP server)

> Each day, NLM produces update files that include new, revised and deleted citations. -- ftp://ftp.ncbi.nlm.nih.gov/pubmed/updatefiles/README.txt

* regenerate map in continuous mode
* add tests

* add PubmedFTPWorker
* utils are currently stored alongside pubmed (e.g. ftpretr, xmlstream) but may live elsewhere, as they are more generic
* add KafkaBs4XmlPusher

In particular, the ezb=green match seems mostly incorrect. Note that a release with a pmcid assigned could still be within an embargo window?
