path: root/python/fatcat_tools
Commit log (commit message, author, date, files changed, -/+ lines):
* pubmed: reconnect on error (Martin Czygan, 2021-07-16; 1 file, -4/+30)
  FTP retrieval would run but fail with EOFError on /pubmed/updatefiles/pubmed21n1328_stats.html. Not able to find the root cause; using a fresh client, the exact same file would work just fine. So when we retry, we reconnect on failure. Refs: sentry #91102.
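  A rough sketch of the retry-with-reconnect pattern described above; the function name, host handling, and retry count are illustrative assumptions, not the actual fatcat_tools code:

      import time
      from ftplib import FTP

      def fetch_with_reconnect(host, path, retries=3):
          # open a *fresh* FTP client on every attempt, since retrying on the
          # same connection kept failing with EOFError (sentry #91102)
          last_exc = None
          for attempt in range(retries):
              ftp = FTP(host)
              ftp.login()
              chunks = []
              try:
                  ftp.retrbinary("RETR " + path, chunks.append)
                  return b"".join(chunks)
              except EOFError as exc:
                  last_exc = exc
                  time.sleep(2 ** attempt)
              finally:
                  try:
                      ftp.close()
                  except Exception:
                      pass
          raise last_exc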
* more consistent and defensive lower-casing of DOIs (Bryan Newbold, 2021-06-23; 3 files, -3/+8)
  After noticing more upper/lower ambiguity in production. In particular, we have some old ingest requests in the sandcrawler DB, which get re-submitted/re-tried, and which have capitalized DOIs in the link source id field.
* datacite: more careful title string access; fixes sentry #88350 (Martin Czygan, 2021-06-11; 1 file, -1/+1)
  Caused by a partial "title entry without title" coming *first* in the list, e.g. an entry holding just a language, like {'lang': 'da'}.
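  A minimal sketch of the more defensive title access; the helper name is an assumption, and DataCite "titles" entries are dicts that may lack a "title" key:

      def first_valid_title(titles):
          # skip partial entries like {'lang': 'da'} that carry no actual title
          for entry in titles or []:
              title = (entry.get("title") or "").strip()
              if title:
                  return title
          return None

      # first_valid_title([{'lang': 'da'}, {'title': 'Some Title'}]) == 'Some Title'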
* clean_doi() should lower-case returned DOI (Bryan Newbold, 2021-06-07; 1 file, -1/+4)
  Code in a number of places (including the Pubmed importer) assumed that this was already lower-casing DOIs, resulting in some broken metadata getting created. See also: https://github.com/internetarchive/fatcat/issues/83. This is just the first step of mitigation.
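  A hedged sketch of the behavior being fixed, not the actual clean_doi() implementation: validate the DOI string and always return it lower-cased so callers (like the Pubmed importer) can rely on it:

      import re

      DOI_PATTERN = re.compile(r"^10\.\d{3,6}/\S+$")

      def clean_doi(raw):
          if not raw:
              return None
          raw = raw.strip()
          if raw.startswith("https://doi.org/"):
              raw = raw[len("https://doi.org/"):]
          if not DOI_PATTERN.match(raw):
              return None
          # the fix: always lower-case before returning
          return raw.lower()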
* ingest: swap ingest and file checks, to result in clearer stats/counts of skipping (Bryan Newbold, 2021-06-03; 1 file, -2/+2)
* ingest: don't accept mag and s2 URLs (Bryan Newbold, 2021-06-03; 1 file, -4/+4)
* changelog worker: fix file/fileset typo, caught by lint (Bryan Newbold, 2021-05-25; 1 file, -1/+1)
  This would have been resulting in some releases not getting re-indexed into search.
* small python lint fixes (no behavior change) (Bryan Newbold, 2021-05-25; 3 files, -4/+2)
* ingest: add per-container ingest type overrides (Bryan Newbold, 2021-05-21; 1 file, -1/+17)
* arabesque importer: ensure full 14-digit timestamps (Bryan Newbold, 2021-05-21; 1 file, -1/+3)
* transforms: fix 'display_ame' typo (Bryan Newbold, 2021-04-19; 1 file, -2/+2)
* prefer contrib.creator.display_name over contrib.raw_name (Bryan Newbold, 2021-04-12; 2 files, -4/+7)
  These will be getting updates from ORCID and are usually more complete and more correct for display, attribution, and search purposes. Might need to tweak fuzzycat code to handle multiple names at the verification stage.
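  The preference order described above, roughly; attribute names assume fatcat-style contrib objects with an optional linked creator, as an illustration rather than the exact transform code:

      def contrib_name(contrib):
          # prefer the linked creator's display_name (kept up to date from
          # ORCID), falling back to the raw name captured at import time
          if contrib.creator and contrib.creator.display_name:
              return contrib.creator.display_name
          return contrib.raw_name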
* es worker: ensure kafka messages get cleared (Bryan Newbold, 2021-04-12; 1 file, -0/+2)
* es indexing: more 'wip' fixes (Bryan Newbold, 2021-04-12; 1 file, -1/+5)
* ES indexing: skip 'wip' entities with a warning (Bryan Newbold, 2021-04-12; 1 file, -11/+16)
* container ES index worker: support for querying status (Bryan Newbold, 2021-04-06; 1 file, -5/+32)
* ES schema updates: doc_index_ts as a str, not datetime (Bryan Newbold, 2021-04-06; 1 file, -4/+4)
  The schema field is a timestamp, but python needs to serialize the document as JSON, and JSON serialization doesn't handle datetime objects automatically.
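  For illustration of why the field is a string: json.dumps() has no default encoder for datetime objects, so the transform serializes the timestamp itself:

      import json
      import datetime

      doc = {"doc_index_ts": datetime.datetime.utcnow().isoformat() + "Z"}
      json.dumps(doc)  # fine; a raw datetime value here would raise TypeError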
* container search schema: preservation stats, new fields (Bryan Newbold, 2021-04-06; 1 file, -2/+18)
  Includes transform code updates and partial test coverage.
* release ES: add discipline field (Bryan Newbold, 2021-04-06; 1 file, -0/+2)
* ES schemas: add doc_index_ts to all mappings (Bryan Newbold, 2021-04-06; 1 file, -0/+4)
* indexing: don't use document names (Bryan Newbold, 2021-04-06; 1 file, -14/+4)
* datacite: a missing surname should be None, not the empty string (Martin Czygan, 2021-04-02; 1 file, -2/+1)
  Refs sentry #77700.
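  A tiny illustration of the fix; the dict mirrors a DataCite creator record, and the key name is an assumption for illustration:

      creator = {"givenName": "Ada"}                 # no familyName in the source record
      surname = creator.get("familyName") or None    # None, not ""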
* elasticsearch: simple new dblp and doaj fields (Bryan Newbold, 2021-01-20; 1 file, -0/+4)
* web ingest: terminal URL mismatch as skip, not assert (Bryan Newbold, 2020-12-30; 1 file, -1/+3)
* dblp release import: skip arxiv_id releases (Bryan Newbold, 2020-12-24; 1 file, -0/+9)
* normalizer: test for un-versioned arxiv_id (Bryan Newbold, 2020-12-24; 1 file, -0/+4)
* dblp import: fix arxiv_id typo (Bryan Newbold, 2020-12-23; 1 file, -1/+1)
  Would have been caught by mypy!
* ingest: allow dblp imports (Bryan Newbold, 2020-12-23; 1 file, -1/+1)
* fuzzy: set 120 second timeout on ES lookups (Bryan Newbold, 2020-12-23; 1 file, -1/+1)
* dblp: polish HTML scrape/extract pipeline (Bryan Newbold, 2020-12-17; 1 file, -0/+14)
* dblp: flesh out update code path (especially to add container_id linkage) (Bryan Newbold, 2020-12-17; 1 file, -2/+6)
* dblp: run fuzzy matching at try_update time (same as DOAJ) (Bryan Newbold, 2020-12-17; 1 file, -1/+8)
* improve dblp release import (Bryan Newbold, 2020-12-17; 1 file, -1/+2)
* very simple dblp container importer (Bryan Newbold, 2020-12-17; 2 files, -0/+145)
* dblp release importer: container_id lookup TSV, and dump JSON mode (Bryan Newbold, 2020-12-17; 1 file, -10/+66)
* wikidata QID normalize helper (Bryan Newbold, 2020-12-17; 1 file, -2/+24)
* initial implementation of dblp release importer (in progress) (Bryan Newbold, 2020-12-17; 2 files, -0/+445)
* add 'lxml' mode for large XML file import, and multi-tags (Bryan Newbold, 2020-12-17; 1 file, -15/+28)
* add dblp as an ingest source and identifier (Bryan Newbold, 2020-12-17; 1 file, -1/+2)
* ingest: allow doaj ingest responses (Bryan Newbold, 2020-12-17; 1 file, -1/+2)
* bug fix: is_preserved should always be bool (Bryan Newbold, 2020-12-17; 1 file, -2/+2)
* Merge branch 'bnewbold-doaj-fuzzy' into 'master' (bnewbold, 2020-12-18; 2 files, -4/+71)
  DOAJ import fuzzy match filter. See merge request webgroup/fatcat!92
  * update fuzzy helper to pass 'reason' through to import code (Bryan Newbold, 2020-12-17; 1 file, -3/+3)
    The motivation for this change is to enable passing the 'reason' through to edit extra metadata, in cases where we merge or cluster releases.
  * add fuzzy match filtering to DOAJ importer (Bryan Newbold, 2020-12-16; 1 file, -2/+9)
    In this default configuration, any entities with a fuzzy match (even "ambiguous") will be skipped at import time, to prevent creating duplicates. This is conservative towards not creating new/duplicate entities. In the future, as we get more confidence in fuzzy match/verification, we can start to ignore AMBIGUOUS, handle EXACT as the same release, and merge STRONG (and WEAK?) matches under the same work entity.
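    A sketch of that conservative filter; the match-status names come from the commit message, but the method, helper, and counter names below are assumptions rather than the actual DOAJ importer code:

        # method-style sketch on an importer class
        def want(self, release):
            # any fuzzy match at all, even AMBIGUOUS, means skip for now to
            # avoid creating duplicate releases
            matches = self.match_existing_release_fuzzy(release)  # hypothetical helper
            if matches:
                self.counts["skip-fuzzy-match"] += 1
                return False
            return True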
  * add fuzzy matching helper to importer base class (Bryan Newbold, 2020-12-16; 1 file, -2/+62)
    Using fuzzycat. Adds basic test coverage.
* entity update worker: treat fileset and webcapture updates like file updates (Bryan Newbold, 2020-12-16; 1 file, -3/+25)
  When webcapture or fileset entities are updated, the release entities associated with them also need to be updated (and work entities, recursively). A TODO is to handle the case where a release_id is *removed* as well as *added*, and reprocess those releases in that case as well.
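  The propagation described above, in rough sketch form; the variable and attribute names are assumptions, not the actual entity update worker code:

      release_ids = set()
      for fileset in updated_filesets:
          release_ids.update(fileset.release_ids or [])
      for webcapture in updated_webcaptures:
          release_ids.update(webcapture.release_ids or [])
      # each affected release (and, recursively, its work) then gets queued
      # for re-indexing, exactly as with plain file entity updates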
* fix indentation (Bryan Newbold, 2020-12-16; 1 file, -2/+2)
* have release elasticsearch transform count webcaptures and filesets towards preservation (Bryan Newbold, 2020-12-16; 1 file, -26/+57)
  These are simple/partial changes to have webcaptures and filesets show up in the 'preservation', 'in_ia', and 'in_web' ES schema flags. A longer-term TODO is to update the ES schema to have more granular analytic flags. Also includes a small generalization refactor for URL object parsing into preservation status, shared across file+fileset+webcapture entity types (all have similar URL objects with url+rel fields).
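  A simplified sketch of the shared URL-object parsing: file, fileset, and webcapture entities all carry url+rel pairs, but the helper and the exact rules here are assumptions, not the real transform:

      def preservation_flags(urls):
          flags = {"in_ia": False, "in_web": False}
          for u in urls or []:
              url = u.url or ""
              if "//archive.org/" in url or "//web.archive.org/" in url:
                  flags["in_ia"] = True
              elif url.startswith(("http://", "https://", "ftp://")):
                  flags["in_web"] = True
          return flags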
* small release_to_elasticsearch refactors (Bryan Newbold, 2020-12-16; 1 file, -7/+12)
  These should have almost no change in behavior, but improve code quality. The one behavior change is counting ftp URLs as "in_web".
* refactor release_to_elasticsearch transform (Bryan Newbold, 2020-12-16; 1 file, -131/+148)
  This method was huge and monolithic. This commit splits the content-specific and container-specific sections out into helper functions to make it more manageable. This involved refactoring so that many flags ("is_*" and "in_*") become part of the output dict through the entire code path, allowing simple update() calls on the dict. Noting that in the future we should refactor to use a type-annotated class for the elasticsearch output object, perhaps something auto-generated from the ES schema itself (JSON files).
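  The resulting structure looks roughly like this; the helper and flag names are illustrative of the pattern, not the exact functions in the transform:

      def _content_flags(release):
          # compute content-specific "is_*"/"in_*" flags (details elided)
          return {"is_preserved": False, "in_web": False}

      def _container_flags(container):
          # compute container-specific flags (details elided)
          return {"in_doaj": False, "in_kbart": False}

      def release_to_elasticsearch(release):
          doc = {"ident": release.ident, "title": release.title}
          doc.update(_content_flags(release))
          doc.update(_container_flags(release.container))
          return doc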