| Commit message | Author | Age | Files | Lines |
|
We had some pre-3.6 workarounds. Also seems like a reasonable time to
update all dependencies to their most recent versions.
|
Used to create bnewbold/fatcat-test-base image
|
Goal is to speed up CI runs.
|
Not sure why things build without this.
|
Required updating to the newer 'buster' Debian distro, and a newer Rust
release, to work around a Docker/OCI containerization issue with older
Docker images.
|
Also updates dependencies.
|
- don't do expanded and regular release dumps
- default to sqldump_public for the item name (as that is the common case)
|
beautifulsoup XML parsing: .string vs. .get_text()
See merge request webgroup/fatcat!40
|
The primary motivation for this change is that fatcat *requires* a
non-empty title for each release entity. Pubmed/Medline occasionally
indexes just a VernacularTitle with no ArticleTitle for foreign
publications, and currently those records don't end up in fatcat at all.
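A minimal sketch of the fallback described above, assuming an ElementTree-style
parse rather than fatcat's actual importer code; the helper name is
hypothetical, the element names are from the Medline schema:

    import xml.etree.ElementTree as ET

    def pick_title(citation: ET.Element) -> str:
        # Prefer ArticleTitle, but fall back to VernacularTitle so that
        # foreign-language records without an ArticleTitle still get a
        # non-empty release title.
        for tag in ("ArticleTitle", "VernacularTitle"):
            elem = citation.find(".//" + tag)
            if elem is not None and (elem.text or "").strip():
                return elem.text.strip()
        return ""

    record = ET.fromstring(
        "<MedlineCitation><Article>"
        "<VernacularTitle>Ein Titel ohne ArticleTitle</VernacularTitle>"
        "</Article></MedlineCitation>"
    )
    print(pick_title(record))  # Ein Titel ohne ArticleTitle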
|
See previous pubmed commit for details.
|
Yikes! Apparently when a tag has child tags, .string will return None
instead of all the strings. .get_text() returns all of it:
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#get-text
https://www.crummy.com/software/BeautifulSoup/bs4/doc/#string
I've kept things like identifiers as .string, where we expect only a single
string inside.
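A small self-contained illustration of the difference (this uses the stdlib
html.parser to avoid an lxml dependency; the importer itself may use an XML
parser):

    from bs4 import BeautifulSoup

    snippet = "<articletitle>Sense and <i>non</i>-sense</articletitle>"
    tag = BeautifulSoup(snippet, "html.parser").find("articletitle")

    print(tag.string)      # None -- the tag has a child tag (<i>)
    print(tag.get_text())  # Sense and non-sense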
|
proposal: fuzzy matching
See merge request webgroup/fatcat!39
|
change crossref harvest date field
See merge request webgroup/fatcat!41
|
This goes against what the API docs recommend, but we are currently far
behind on updates and need to catch up. Docs aside, this seems consistent
with the behavior we want.
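For context, a hedged sketch of harvesting one day of records from the
Crossref works API; which date filter this commit actually switches to isn't
shown in this log, so the 'from-index-date'/'until-index-date' pair below is
an assumption (the API also offers 'from-update-date'/'until-update-date'):

    import requests

    # Hypothetical one-day harvest window; the real harvester's state and
    # paging logic is not shown here.
    day = "2020-03-01"
    params = {
        "filter": "from-index-date:%s,until-index-date:%s" % (day, day),
        "rows": 100,
        "cursor": "*",
    }
    resp = requests.get("https://api.crossref.org/works", params=params, timeout=30)
    resp.raise_for_status()
    print(len(resp.json()["message"]["items"]))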
|
Thanks to Martin for the suggestion.
|
The release view will display subtitles, but the subtitle needs to be in
the correct "location".
|
These are journal/publisher patterns which we suspect to actually be OA
based on the large quantity of papers that crawl successfully. The
better long-term solution will be to flag containers in some way as OA
(or "should crawl"), but this is a good short-term solution.
|
So far only updating "what was contributed" for past work, not recent or
(potentially) ongoing contributions.
Thank you everybody!
|
Correct spelling mistakes
|
catch ApiValueError in some generic API calls
See merge request webgroup/fatcat!35
|
The motivation for this change is to handle bogus revision IDs in URLs,
which were causing 500 errors rather than 400 errors. E.g.:
https://qa.fatcat.wiki/file/rev/5d5d5162-b676-4f0a-968f-e19dadeaf96e%2B2019-11-27%2B13:49:51%2B0%2B6
I have no idea where these URLs are actually coming from, but they
should be 4xx not 5xx.
Investigating made me realize there is a whole category of ApiValueError
exceptions we were not catching and should have been.
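A minimal sketch of the pattern, assuming a Flask view; the lookup helper
below stands in for the real fatcat API client call (which raises
ApiValueError on malformed identifiers), so plain ValueError is caught
instead:

    import uuid
    from flask import Flask, abort

    app = Flask(__name__)

    def lookup_file_rev(rev_id):
        # Stand-in for the API client; parsing the UUID fails here the same
        # way a bogus revision ID in a URL would.
        return {"rev_id": str(uuid.UUID(rev_id))}

    @app.route("/file/rev/<rev_id>")
    def file_rev_view(rev_id):
        try:
            entity = lookup_file_rev(rev_id)
        except ValueError:
            # Malformed IDs should surface as a client error, not a 500.
            abort(400)
        return entity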
|
improve citeproc/CSL web interface
See merge request webgroup/fatcat!36
|
This tries to show the citeproc (BibTeX, MLA, CSL-JSON) options for
more releases, and not show the links when they would break.
The primary motivation here is to work around two exceptions being
thrown in prod every day (according to sentry):
KeyError: 'role'
ValueError: CSL requires some surname (family name)
I'm guessing these are mostly coming from crawlers following the
citeproc links on release landing pages.
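A hedged sketch of the kind of guard meant here, not fatcat's actual helper;
the contrib field names ('role', 'surname', 'raw_name') are assumptions based
on the errors quoted above:

    def can_render_csl(release):
        # Only offer citeproc links when a CSL conversion would plausibly
        # succeed: at least one contrib with an 'author' role and some kind
        # of name, avoiding the KeyError/ValueError cases seen in prod.
        contribs = release.get("contribs") or []
        authors = [c for c in contribs if c.get("role") == "author"]
        return any(c.get("surname") or c.get("raw_name") for c in authors)

    print(can_render_csl({"contribs": [{"role": "author", "raw_name": "J. Doe"}]}))  # True
    print(can_render_csl({"contribs": [{}]}))  # False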
|
Works around a bug in production:
AttributeError: 'NoneType' object has no attribute 'replace'
(datacite.py:724)
NOTE: there are no tests for this code path
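The guard pattern, roughly; an illustration rather than the actual
datacite.py code, with a placeholder field:

    def clean_str(value):
        # Return None for missing values instead of calling .replace() on
        # None and raising AttributeError.
        if value is None:
            return None
        return value.replace("\n", " ").strip()

    print(clean_str(None))          # None, no AttributeError
    print(clean_str("a\ntitle  "))  # a title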
|
notes: pubmed backfill (03/2020)
See merge request webgroup/fatcat!34
|
datacite: add year sanity restrictions
See merge request webgroup/fatcat!33
|
Example of entities with bogus years:
https://fatcat.wiki/release/search?q=doi_registrar%3Adatacite+year%3A%3E2100
We can do a clean-up task, but first need to prevent creation of new bad
metadata.
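A minimal sketch of the kind of restriction meant; the exact bounds used by
the datacite importer aren't given here, so the cutoffs below are
placeholders:

    import datetime

    def plausible_release_year(year):
        # Reject obviously bogus years (like the >2100 examples in the
        # linked search) while allowing old works and near-future dates.
        if year is None:
            return None
        this_year = datetime.date.today().year
        if year < 1000 or year > this_year + 5:
            return None
        return year

    print(plausible_release_year(2835))  # None (dropped as bogus)
    print(plausible_release_year(2019))  # 2019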
|
This resolves a situation noticed in prod where we were only
importing/updating a single reference per article.
Includes a regression test.
|