Commit message | Author | Age | Files | Lines
seemingly from zenodo:
* https://fatcat.wiki/release/rzcpjwukobd4pj36ipla22cnoi
* https://doi.org/10.5281/zenodo.4041777
About 3400 records with "FULL MOVIE" in title, currently.
Includes a tiny tweak to the datacite import sample file to test this
code path.
This is a small bugfix for a production issue.
ingest behavior changes; some datacite metadata tweaks
See merge request webgroup/fatcat!78
 | 
| | | 
| | 
| | 
| | 
| | 
| | 
| | 
| | 
| |  | 
In addition to changing the OA default, this was the main intended
behavior change in this group of commits: we want to attempt fewer
ingests that we *expect* to fail, but default to an ingest/crawl attempt
when we are uncertain. This is because there is a long tail of journals
that register DOIs and are de facto OA (fulltext is available), but we
don't have metadata indicating them as such.
Also tweak title/publisher detection to use DOI prefixes
As a first step: log response body for debugging.
* pass through publisher_type from container extra metadata (ES field
  already existed; this comes from newer chocula metadata)
* count arxiv and PMCID papers which haven't been crawled (by IA) as
  "dark", not "bright"
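The "dark, not bright" rule above can be sketched as a small classifier. This is a hypothetical helper with assumed field names (`in_ia`, `arxiv_id`, `pmcid`, `in_kbart`), not the actual ES transform code:

```python
def preservation_color(release):
    """Classify a release's preservation status (illustrative sketch).

    "bright": fulltext preserved and publicly crawled/accessible.
    "dark": preserved somewhere, but not crawled by IA.
    Papers with an arxiv id or PMCID exist in those archives, but if
    they haven't been crawled by IA, count them as "dark", not "bright".
    """
    if release.get('in_ia'):
        return 'bright'
    if release.get('arxiv_id') or release.get('pmcid'):
        return 'dark'
    if release.get('in_kbart'):
        return 'dark'
    return 'none'
```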
chocula 'export-fatcat' uses 'ident', not 'fatcat_ident'
more lint fixes
See merge request webgroup/fatcat!69
Oh no!
This bug may actually have had a significant negative impact on metadata
in fatcat, in terms of missing container_id associations with pubmed
entities. There are about 500k release entities with a PMID but no
container_id. Of those, 89k have at least a container_name. It is
unclear how many would have matched to an ISSN-L and thus to a
container.
We are on python3.7 now, so this isn't needed.
These should not result in any behavior changes, though a number of
exception catches are now more general, and there may be long-tail
exceptions getting caught by these statements.
Thanks @martin
Frequently when looking at preservation coverage of journals, the
current year shows as "un-preserved" when in fact there is robust KBART
(keepers, eg CLOCKSS/Portico) coverage. This is partially because we
don't update containers with KBART year spans very frequently (which is
on us), and partially because KBART reports are often a bit out of date
(eg, they don't show coverage for the current year; for that matter,
they probably take a few months to update the previous year as well,
but that is a larger time span to fudge over).
This patch means that Portico/LOCKSS/etc coverage for "last year" will
count as coverage of publications dated "this year". Note that for this
to be effective/correct, it is assumed that we will update containers
with coverage year spans at least once a year, and that we will
re-index all releases at least once a year.
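The year-fudge described above might look roughly like the following helper. Names and the span representation are assumptions for illustration, not the actual fatcat indexing code:

```python
import datetime

def kbart_covers(release_year, kbart_year_spans):
    """Return True if release_year falls in any KBART (start, end) span.

    As a special case, treat coverage of "last year" as implying
    coverage of "this year", since KBART reports lag behind.
    """
    current_year = datetime.date.today().year
    for (start, end) in kbart_year_spans:
        if start <= release_year <= end:
            return True
        # fudge: coverage through last year counts for the current year
        if release_year == current_year and end >= current_year - 1:
            return True
    return False
```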
datacite: address duplicated contributor issue
See merge request webgroup/fatcat!65
Use string comparison.
* https://fatcat.wiki/release/spjysmrnsrgyzgq6ise5o44rlu/contribs
* https://api.datacite.org/dois/10.25940/roper-31098406
datacite: mitigate sentry #44035
See merge request webgroup/fatcat!66
According to sentry, running `c.get('nameIdentifiers', []) or []` on a `c` with value:
```
{'affiliation': [],
 'familyName': 'Guidon',
 'givenName': 'Manuel',
 'nameIdentifiers': {'nameIdentifier': 'https://orcid.org/0000-0003-3543-6683',
                     'nameIdentifierScheme': 'ORCID',
                     'schemeUri': 'https://orcid.org'},
 'nameType': 'Personal'}
```
results in a string, which I cannot reproduce. The document in question at:
https://api.datacite.org/dois/10.26275/kuw1-fdls seems fine, too.
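One defensive way to mitigate this class of problem (a sketch with a hypothetical helper name, not necessarily the fix applied in the importer) is to normalize `nameIdentifiers` to a list of dicts before iterating, discarding any shape we can't handle:

```python
def name_identifiers(c):
    """Normalize the datacite 'nameIdentifiers' field to a list of dicts.

    The field has been observed as a list, a single dict, or absent,
    and (per sentry) possibly a bare string; keep only dict entries.
    """
    value = c.get('nameIdentifiers') or []
    if isinstance(value, dict):
        value = [value]
    if not isinstance(value, list):
        return []
    return [v for v in value if isinstance(v, dict)]
```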
arxiv: address 503, "Retry after specified interval" error
See merge request webgroup/fatcat!64
refs: #44035
via "missed potential license", refs #58
This will increase index size (URLs are often long in our corpus, and we
have many file entities), but seems worth it.
Initially added `ia_url` as a second field, guaranteed to always be an
*.archive.org URL, but `best_url` defaults to that anyways, so it didn't
seem worthwhile.
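The preference implied above (default `best_url` to an *.archive.org URL when one exists) could be sketched like this. The function name and the substring checks are illustrative assumptions, not the actual transform code:

```python
def best_url(urls):
    """Pick a preferred URL for a file entity: prefer web.archive.org
    wayback URLs, then any *.archive.org URL, then the first URL given.
    """
    if not urls:
        return None
    for u in urls:
        if '//web.archive.org/' in u:
            return u
    for u in urls:
        if '.archive.org/' in u:
            return u
    return urls[0]
```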
In the past, harvesting datacite resulted in occasional HTTP 400
responses. Meanwhile, various API bugs have been fixed (most recently:
https://github.com/datacite/lupo/pull/537,
https://github.com/datacite/datacite/issues/1038). The downside of
ignoring this error was that state lives in kafka, which has limited
support for deleting arbitrary messages from a topic.