path: root/python/fatcat_tools
Commit message | Author | Age | Files | Lines
...
* datacite: mitigate sentry #44035 (Martin Czygan, 2020-07-10; 1 file, -0/+4)

    According to sentry, running `c.get('nameIdentifiers', []) or []` on a `c` with value:

    ```
    {'affiliation': [],
     'familyName': 'Guidon',
     'givenName': 'Manuel',
     'nameIdentifiers': {'nameIdentifier': 'https://orcid.org/0000-0003-3543-6683',
                         'nameIdentifierScheme': 'ORCID',
                         'schemeUri': 'https://orcid.org'},
     'nameType': 'Personal'}
    ```

    results in a string, which I cannot reproduce. The document in question at
    https://api.datacite.org/dois/10.26275/kuw1-fdls seems fine, too.
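A mitigation for metadata shaped like the record above can be sketched as a small normalizer that tolerates either a single dict or a list of dicts (the helper name `clean_name_identifiers` is hypothetical, not fatcat's actual code):

```python
def clean_name_identifiers(creator):
    """Return nameIdentifiers as a list of dicts, tolerating the
    single-dict shape seen in some DataCite records."""
    nids = creator.get('nameIdentifiers', []) or []
    if isinstance(nids, dict):
        nids = [nids]
    # drop anything that is not a dict (e.g. bare strings)
    return [n for n in nids if isinstance(n, dict)]
```

Iterating over a dict yields its keys (strings), which would explain the string values reported by sentry; wrapping the dict in a list avoids that.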
* Merge branch 'martin-arxiv-fix-http-503' into 'master' (bnewbold, 2020-07-10; 1 file, -1/+1)

    arxiv: address 503, "Retry after specified interval" error

    See merge request webgroup/fatcat!64
* arxiv: retry up to five times on HTTP 503 (Martin Czygan, 2020-07-10; 1 file, -1/+1)
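The retry behavior can be sketched generically (a sketch only; `Http503` and `retry_on_503` are illustrative names, not the harvester's actual code):

```python
import time

class Http503(Exception):
    """Stand-in for an HTTP 503 response carrying a Retry-After value."""
    def __init__(self, retry_after=30):
        super().__init__("503 Service Unavailable")
        self.retry_after = retry_after

def retry_on_503(request_fn, max_tries=5, sleep_fn=time.sleep):
    """Call request_fn(); on HTTP 503, sleep for the advertised
    Retry-After interval and try again, up to max_tries attempts."""
    for attempt in range(max_tries):
        try:
            return request_fn()
        except Http503 as err:
            if attempt == max_tries - 1:
                raise
            sleep_fn(err.retry_after)
```

Honoring the server's Retry-After header (rather than a fixed backoff) is what the "Retry after specified interval" message from arxiv asks for.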
* datacite: fix attribute error (Martin Czygan, 2020-07-07; 1 file, -1/+1)

    refs: #44035
* lint (flake8) tool python files (Bryan Newbold, 2020-07-01; 33 files, -130/+46)
* reviewer: fix bugs in common code found by mypy (Bryan Newbold, 2020-07-01; 1 file, -2/+3)
* add new license mappings (Bryan Newbold, 2020-06-30; 2 files, -0/+27)
* datacite: improve license mapping (Martin Czygan, 2020-06-30; 1 file, -9/+15)

    via "missed potential license", refs #58
* datacite: hard cast possible date value to string (Martin Czygan, 2020-06-29; 1 file, -1/+1)
* disallow a specific unicode character from DOIs (Bryan Newbold, 2020-06-26; 1 file, -0/+6)
* ES schema: add best_url to file schema (Bryan Newbold, 2020-06-04; 1 file, -0/+12)

    This will increase index size (URLs are often long in our corpus, and we
    have many file entities), but seems worth it. Initially added `ia_url` as
    a second field, guaranteed to always be an *.archive.org URL, but
    `best_url` defaults to that anyways, so it didn't seem worthwhile.
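A plausible sketch of how a `best_url` value could be chosen (the exact preference order is an assumption; the commit only states that `best_url` defaults to an *.archive.org URL when one exists):

```python
def pick_best_url(urls):
    """Pick a single display URL for a file entity, preferring
    web.archive.org captures, then any *.archive.org URL, then
    whatever else is available."""
    for pattern in ('://web.archive.org/', '.archive.org/'):
        for url in urls:
            if pattern in url:
                return url
    return urls[0] if urls else None
```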
* harvest: fail on HTTP 400 (Martin Czygan, 2020-05-29; 1 file, -4/+0)

    In the past, harvest of datacite resulted in occasional HTTP 400.
    Meanwhile, various API bugs have been fixed (most recently:
    https://github.com/datacite/lupo/pull/537,
    https://github.com/datacite/datacite/issues/1038). The downside of
    ignoring this error was that state lives in kafka, which has limited
    support for deletion of arbitrary messages from a topic.
* Merge branch 'bnewbold-ingest-stage' into 'master' (Martin Czygan, 2020-05-28; 1 file, -0/+5)

    verify release_stage in ingest importer

    See merge request webgroup/fatcat!52
* ingest importer: check that stage is consistent with release (Bryan Newbold, 2020-05-26; 1 file, -0/+5)
* rename HarvestState.next() to HarvestState.next_span() (Bryan Newbold, 2020-05-26; 4 files, -5/+5)

    "span" is short for "timespan" to harvest; there may be a better name to
    use. Motivation for this is to work around a pylint error that .next()
    was not callable. This might be a bug with pylint, but .next() is also a
    very generic name.
* HACK: skip pylint errors on lines that seem to be fine (Bryan Newbold, 2020-05-22; 3 files, -3/+3)

    It seems to be an inadvertently upgraded version of pylint saying that
    these lines are not-callable.
* Merge remote-tracking branch 'github/master' (Bryan Newbold, 2020-05-22; 1 file, -2/+2)
* Identity is not the same thing as equality in Python (Christian Clauss, 2020-05-14; 1 file, -2/+2)
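The distinction the commit title refers to: `==` compares values, while `is` compares object identity, so `is` can be False for equal values that are distinct objects. A minimal illustration:

```python
x = [1, 2]
y = [1, 2]

assert x == y        # equal values
assert x is not y    # but two distinct objects
assert x is x        # identity only holds for the same object

# This is why comparisons like `value is "literal"` or
# `count is 1000` are bugs waiting to happen: use == instead.
```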
* importers: clarify handling of ApiException (Bryan Newbold, 2020-05-22; 3 files, -4/+10)

    One of these (in the ingest importer pipeline) is an actual bug; the
    others are just changing the syntax to be more explicit/conservative.
    The ingest importer bug seems to have resulted in some bad file match
    imports; the scale of impact is unknown.
* ingest importer: don't use glutton matches (Bryan Newbold, 2020-05-22; 1 file, -3/+3)

    Until reviewing I didn't realize we were even doing this currently.
    Hopefully it has not impacted too many imports: almost all ingests use an
    external identifier, so only those with identifiers not in fatcat for
    whatever reason would have been affected.
* datacite: fix type error (Martin Czygan, 2020-04-22; 1 file, -1/+3)

    Up to now, we expected the description to be a string or list. Add
    handling for int as well. First appeared: Apr 22 19:58:39.
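Normalizing such a field can be sketched like this (the helper `clean_description` is hypothetical; the str/list/int shapes come from the commit message):

```python
def clean_description(value):
    """Coerce a DataCite description field (str, list, int, or None)
    into a single string, or None if nothing usable is present."""
    if value is None:
        return None
    if isinstance(value, list):
        parts = [str(v) for v in value if v]
        return " ".join(parts) or None
    return str(value)
```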
* Merge branch 'martin-datacite-fix-release-contrib-raw-name-check-violation' into 'master' (bnewbold, 2020-04-20; 1 file, -0/+8)

    datacite: fix a raw name constraint violation

    See merge request webgroup/fatcat!47
* datacite: fix a raw name constraint violation (Martin Czygan, 2020-04-20; 1 file, -0/+8)

    It was possible that contribs got added which had no raw name. One
    example would be a name consisting of whitespace only. This fix adds a
    final check for this case.
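The final check described above can be sketched as a whitespace-aware guard (the function name and contrib shape are illustrative, not fatcat's actual code):

```python
def has_valid_raw_name(contrib):
    """Reject contribs whose raw_name is missing, empty, or
    whitespace-only, which would violate the database constraint."""
    raw_name = contrib.get('raw_name')
    return bool(raw_name and raw_name.strip())
```

The key detail is `.strip()`: a truthiness check alone would let a name like `"   "` through.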
* more changelog ES fixes (Bryan Newbold, 2020-04-17; 1 file, -4/+6)
* ES changelog worker: fixes for ident; fetch update from API if needed (Bryan Newbold, 2020-04-17; 1 file, -2/+9)

    The API fetch update may be needed for old changelog entries in the
    kafka feed.
* Merge branch 'bnewbold-py37-cleanups' into 'master' (bnewbold, 2020-04-17; 2 files, -6/+6)

    py37 cleanups

    See merge request webgroup/fatcat!44
* consistently use raw string prefix for regex (Bryan Newbold, 2020-04-17; 2 files, -6/+6)
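Why the `r` prefix matters: in a plain string, backslash sequences like `\t` are interpreted as escapes (and unrecognized ones raise a DeprecationWarning on modern Python), while a raw string passes the backslashes through to the regex engine unchanged. A small illustration (the DOI pattern is a rough illustrative shape, not fatcat's actual regex):

```python
import re

# "\t" is a single tab character; r"\t" is backslash + t.
assert len("\t") == 1
assert len(r"\t") == 2

# Raw strings keep regex escapes intact and warning-free:
pattern = re.compile(r"10\.\d{4,9}/\S+")  # rough DOI shape
assert pattern.match("10.1234/abc-def")
assert not pattern.match("not-a-doi")
```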
* Merge branch 'martin-changelog-to-es' into 'master' (bnewbold, 2020-04-17; 2 files, -2/+23)

    derive changelog worker from release worker

    See merge request webgroup/fatcat!43
* derive changelog worker from release worker (Martin Czygan, 2020-04-17; 2 files, -2/+23)

    Early versions of changelog entries may not have all the fields required
    for the current transform.
* changelog: limit types (Martin Czygan, 2020-04-16; 1 file, -5/+1)

    No partial docs (e.g. abstracts), no overly generic components and
    entries, no HTML blogs.
* changelog: extend release_types considered documents (Martin Czygan, 2020-04-16; 1 file, -10/+19)

    According to release_rev.release_type, we have 29 values:

        fatcat_prod=# select release_type, count(release_type)
                      from release_rev group by release_type;

           release_type    |   count
        -------------------+-----------
         abstract          |      2264
         article           |   6371076
         article-journal   | 101083841
         article-newspaper |     17062
         book              |   1676941
         chapter           |  13914854
         component         |     58990
         dataset           |   6860325
         editorial         |    133573
         entry             |   1628487
         graphic           |   1809471
         interview         |     19898
         legal_case        |      3581
         legislation       |      1626
         letter            |    275119
         paper-conference  |   6074669
         peer_review       |     30581
         post              |    245807
         post-weblog       |       135
         report            |   1010699
         retraction        |      1292
         review-book       |     96219
         software          |       316
         song              |     24027
         speech            |      4263
         standard          |    312364
         stub              |   1036813
         thesis            |    414397
                           |         0
        (29 rows)
* Merge branch 'bnewbold-pubmed-get_text' into 'master' (bnewbold, 2020-04-01; 4 files, -39/+47)

    beautifulsoup XML parsing: .string vs. .get_text()

    See merge request webgroup/fatcat!40
* pubmed: use untranslated title if translated not available (Bryan Newbold, 2020-04-01; 1 file, -0/+6)

    The primary motivation for this change is that fatcat *requires* a
    non-empty title for each release entity. Pubmed/Medline occasionally
    indexes just a VernacularTitle with no ArticleTitle for foreign
    publications, and currently those records don't end up in fatcat at all.
* importers: replace newlines in get_text() strings (Bryan Newbold, 2020-04-01; 4 files, -23/+25)
* importers: more string/get_text swaps (Bryan Newbold, 2020-03-28; 3 files, -27/+27)

    See previous pubmed commit for details.
* pubmed: bunch of .get_text() instead of .string (Bryan Newbold, 2020-03-28; 1 file, -12/+12)

    Yikes! Apparently when a tag has child tags, .string will return None
    instead of all the strings. .get_text() returns all of it:

    https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text
    https://www.crummy.com/software/BeautifulSoup/bs4/doc/#string

    I've kept things like identifiers as .string, where we expect only a
    single string inside.
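The behavior is easy to reproduce. A sketch using the stdlib `html.parser` backend (which lowercases tag names; Medline XML would normally be parsed with an XML-capable parser, and the markup here is invented):

```python
from bs4 import BeautifulSoup

markup = "<ArticleTitle>Study of <i>E. coli</i> growth</ArticleTitle>"
soup = BeautifulSoup(markup, "html.parser")
tag = soup.find("articletitle")  # html.parser lowercases tag names

# .string is None because the tag has a child tag (<i>):
assert tag.string is None

# .get_text() concatenates all descendant strings:
assert tag.get_text() == "Study of E. coli growth"
```

So any field that may contain inline markup (titles, abstracts) needs `.get_text()`, while `.string` is only safe for leaf tags like identifiers.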
* crossref: switch from index-date to update-date (Bryan Newbold, 2020-03-30; 1 file, -1/+1)

    This goes against what the API docs recommend, but we are currently far
    behind on updates and need to catch up. Other than what the docs say,
    this seems to be consistent with the behavior we want.
* crossref: longer comment about crossref API date fields (Bryan Newbold, 2020-03-30; 1 file, -2/+22)
* ingest: more DOI patterns to treat as OA (Bryan Newbold, 2020-03-28; 1 file, -0/+26)

    These are journal/publisher patterns which we suspect to actually be OA,
    based on the large quantity of papers that crawl successfully. The
    better long-term solution will be to flag containers in some way as OA
    (or "should crawl"), but this is a good short-term solution.
* Merge pull request #53 from EdwardBetts/spelling (bnewbold, 2020-03-27; 4 files, -9/+9)

    Correct spelling mistakes
* Correct spelling mistakes (Edward Betts, 2020-03-27; 4 files, -9/+9)
* Merge branch 'bnewbold-citeproc-fixes' into 'master' (bnewbold, 2020-03-26; 1 file, -6/+12)

    improve citeproc/CSL web interface

    See merge request webgroup/fatcat!36
* improve citeproc/CSL web interface (Bryan Newbold, 2020-03-25; 1 file, -6/+12)

    This tries to show the citeproc (bibtex, MLA, CSL-JSON) options for more
    releases, and not show the links when they would break. The primary
    motivation here is to work around two exceptions being thrown in prod
    every day (according to sentry):

        KeyError: 'role'
        ValueError: CLS requries some surname (family name)

    I'm guessing these are mostly coming from crawlers following the
    citeproc links on release landing pages.
* datacite: nameIdentifier corner case (Bryan Newbold, 2020-03-26; 1 file, -1/+2)

    Works around a bug in production:

        AttributeError: 'NoneType' object has no attribute 'replace'
        (datacite.py:724)

    NOTE: there are no tests for this code path
* jalc: avoid meaningless pages values (Bryan Newbold, 2020-03-23; 1 file, -4/+8)
* datacite: add year sanity restrictions (bnewbold, 2020-03-23; 1 file, -0/+7)

    Example of entities with bogus years:
    https://fatcat.wiki/release/search?q=doi_registrar%3Adatacite+year%3A%3E2100

    We can do a clean-up task, but first need to prevent creation of new bad
    metadata.
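A sanity filter of this kind is typically a simple range check. A sketch (the exact bounds are assumptions; the commit only shows that years over 2100 are bogus):

```python
import datetime

def plausible_release_year(year):
    """Return the year if it looks plausible for a publication,
    else None. Rejects e.g. DataCite records claiming years far
    in the future."""
    if year is None:
        return None
    current = datetime.date.today().year
    if 1000 <= year <= current + 5:
        return year
    return None
```

Allowing a few years past the present accommodates legitimately pre-registered publication dates.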
* pubmed: handle multiple ReferenceList (Bryan Newbold, 2020-03-20; 1 file, -1/+4)

    This resolves a situation noticed in prod where we were only
    importing/updating a single reference per article. Includes a
    regression test.
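The likely shape of the fix is iterating over every `ReferenceList` element rather than taking only the first. A sketch using BeautifulSoup's stdlib `html.parser` backend (tag names lowercased; the `ReferenceList`/`Citation` element names are real Medline elements, but the helper itself is hypothetical):

```python
from bs4 import BeautifulSoup

def extract_all_citations(xml):
    """Collect Citation text from every ReferenceList, not just
    the first one returned by find()."""
    soup = BeautifulSoup(xml, "html.parser")
    refs = []
    for ref_list in soup.find_all("referencelist"):
        for citation in ref_list.find_all("citation"):
            refs.append(citation.get_text())
    return refs
```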
* pubmed: update many more metadata fields (Bryan Newbold, 2020-03-19; 1 file, -0/+22)

    In particular, with daily updates, in most cases the DOI will be
    registered first, then the entity updated with the PMID when that is
    available. Often the pubmed metadata will be more complete, with
    abstracts etc., and we'll want those improvements.
* crossref: skip stub OUP title (Bryan Newbold, 2020-03-19; 1 file, -0/+8)

    It seems like OUP pre-registers DOIs with this place-holder title, then
    updates the Crossref metadata when the paper is actually published. We
    should wait until the real title is available before creating an entity.
* ingest: always try some lancet journals (Bryan Newbold, 2020-03-19; 1 file, -0/+3)