Commit log, most recent first. Format: commit message (author, date; files changed, lines removed/added)
* rust: bump crate version and lockfile (Bryan Newbold, 2021-11-17; 2 files, -3/+3)
* rust: implement content_scope (Bryan Newbold, 2021-11-17; 5 files, -0/+22)
* SQL implementation of content_scope (Bryan Newbold, 2021-11-17; 2 files, -0/+36)
* codegen rust code for content_scope (Bryan Newbold, 2021-11-17; 3 files, -4/+19)
* schema: add content_scope fields, and bump to 0.4.1 (Bryan Newbold, 2021-11-17; 1 file, -1/+10)
* proposal: content_scope field (Bryan Newbold, 2021-11-17; 1 file, -0/+84)
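
    The commits above implement this schema addition. A sketch of how the
    new field might look on a file entity in JSON follows; the value shown
    is an assumed example, and the proposal document added by this commit
    defines the actual vocabulary:

        # hypothetical file entity JSON with the new field; "landing-page"
        # is an assumed example value, not necessarily from the proposal
        file_entity = {
            "sha1": "3f242a192acc258bdfc734b3ab3af90b708dc7c3",
            "urls": [
                {"url": "https://web.archive.org/web/2021/https://example.com/paper.pdf",
                 "rel": "webarchive"},
            ],
            # flags captures that are not simply the complete work itself
            "content_scope": "landing-page",
        }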
* updated notes on possible cleanups (Bryan Newbold, 2021-11-17; 1 file, -4/+27)
* ISSN-L dupes check: output all matches (Bryan Newbold, 2021-11-17; 1 file, -1/+1)
* document cleanups run this week (Bryan Newbold, 2021-11-12; 5 files, -0/+244)
* web: handle ES non-int error codes better (Bryan Newbold, 2021-11-12; 1 file, -9/+12)
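
    A minimal sketch of the kind of defensive handling this fix implies:
    elasticsearch error responses can carry a non-integer 'status' value,
    so coerce it before treating it as an HTTP code (details here are
    assumed, not taken from the actual patch):

        def es_status_code(resp: dict, default: int = 500) -> int:
            # ES error bodies usually include an integer 'status', but not always
            try:
                return int(resp.get("status", default))
            except (ValueError, TypeError):
                return default

        print(es_status_code({"status": 404}))    # 404
        print(es_status_code({"status": "N/A"}))  # 500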
* Merge branch 'bnewbold-import-refactors' into 'master' (bnewbold, 2021-11-11; 27 files, -1599/+874)

    import refactors and deprecations

    Some of these are from old stale branches (the datacite subject
    metadata patch), but most are from yesterday and today. Sort of a
    hodge-podge, but the general theme is getting around to deferred
    cleanups and refactors specific to importer code before making some
    behavioral changes. The Datacite-specific stuff could use review here.

    Remove unused/deprecated/dead code:

    - cdl_dash_dat and wayback_static importers, which were for specific
      early example entities and have been superseded by other importers
    - "extid map" sqlite3 feature from several importers, was only used for
      initial bulk imports (and maybe should not have been used)

    Refactors:

    - moved a number of large datastructures out of importer code and into
      a dedicated static file (`biblio_lookup_tables.py`). Didn't move all,
      just the ones that were either generic or very large (making it hard
      to read code)
    - shuffled around relative imports and some function names ("clean_str"
      vs. "clean")

    Some actual behavioral changes:

    - remove some Datacite-specific license slugs
    - stop trying to fix double-slashes in DOIs, that was causing more harm
      than help (some DOIs do actually have double-slashes!)
    - remove some excess metadata from datacite 'extra' fields
| * update datacite tests for license slug changes (Bryan Newbold, 2021-11-10; 2 files, -8/+7)

    Use datacite-specific wrapper function, and remove a couple of
    non-OA/TDM-limited licenses.
| * improve lookup_license_slug helper and lookup table (Bryan Newbold, 2021-11-10; 2 files, -56/+62)
| * refactor importer metadata tables into separate file; move some helpers around (Bryan Newbold, 2021-11-10; 10 files, -702/+682)

    - MAX_ABSTRACT_LENGTH set in a single place (importer common)
    - merge datacite license slug table into common table, removing some
      TDM-specific licenses (which do not apply in the context of
      preserving the full work)
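
    A minimal sketch of the shape of this refactor, combined with the
    lookup_license_slug improvement above; the table contents, constant
    value, and normalization details are assumptions, only the names come
    from the commit messages:

        # shared static table (in the new biblio_lookup_tables module)
        LICENSE_SLUG_MAP = {
            "creativecommons.org/licenses/by/4.0": "CC-BY",
            "creativecommons.org/publicdomain/zero/1.0": "CC0-1.0",
        }
        MAX_ABSTRACT_LENGTH = 2048  # assumed value; defined once, imported by importers

        def lookup_license_slug(raw_url: str):
            # normalize the URL, then check the shared table
            key = raw_url.lower().split("://")[-1].rstrip("/")
            return LICENSE_SLUG_MAP.get(key)

        print(lookup_license_slug("https://creativecommons.org/licenses/by/4.0/"))  # CC-BY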
| * importers: refactor imports of clean() and other normalization helpers (Bryan Newbold, 2021-11-10; 12 files, -95/+104)
| * remove cdl_dash_dat and wayback_static importers (Bryan Newbold, 2021-11-10; 4 files, -596/+0)

    Cleaning out dead code. These importers were used to create
    demonstration fileset and webcapture entities early in development.
    They have been replaced by the fileset and webcapture ingest importers.
| * datacite import: store less subject metadata (Bryan Newbold, 2021-11-10; 1 file, -1/+7)

    Many of these 'subject' objects have the equivalent of several lines of
    text, with complex URLs that don't compress well. I think it is fine
    that we have included these thus far instead of parsing more deeply,
    but going forward I don't think this nested 'extra' metadata is worth
    the database space.
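
    A minimal sketch of the kind of trimming described above (not the
    actual importer code): keep the short human-readable term, drop the
    bulky scheme and value URLs before storing in 'extra':

        def trim_subjects(subjects):
            # keep only the 'subject' term from each DataCite subject object
            trimmed = []
            for s in subjects or []:
                if s.get("subject"):
                    trimmed.append({"subject": s["subject"]})
            return trimmed

        raw = [{
            "subject": "Oceanography",
            "subjectScheme": "Fields of Science and Technology (FOS)",
            "schemeURI": "http://www.oecd.org/science/inno/38235147.pdf",
        }]
        print(trim_subjects(raw))  # [{'subject': 'Oceanography'}]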
| * add notes about 'double slash in DOI' issue (Bryan Newbold, 2021-11-09; 1 file, -0/+46)
| * importers: use clean_doi() in many more (all?) importers (Bryan Newbold, 2021-11-09; 6 files, -12/+29)
| * clean_doi: stop mutating double-slash DOIs, except for 10.1037 prefix (Bryan Newbold, 2021-11-09; 1 file, -1/+2)
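
    A minimal sketch of the rule this commit describes (the real
    clean_doi() in the importer code does more normalization than shown):

        def clean_doi(raw: str):
            doi = raw.strip().lower()
            if not doi.startswith("10."):
                return None
            # some DOIs legitimately contain "//"; only the 10.1037 (APA)
            # prefix is still treated as having spurious double-slashes
            if doi.startswith("10.1037") and "//" in doi:
                doi = doi.replace("//", "/")
            return doi

        print(clean_doi("10.1037//0002-9432.72.1.50"))  # slash collapsed
        print(clean_doi("10.1234//example-doi"))        # double-slash preserved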
| * remove deprecated extid sqlite3 lookup table feature from importers (Bryan Newbold, 2021-11-09; 10 files, -203/+10)

    This was used during initial bulk imports, but is no longer used and
    could create serious metadata problems if used accidentally. In
    retrospect, it also made metadata provenance less transparent, and may
    have done more harm than good overall.
* | Merge branch 'bnewbold-cleanups-nov2021' into 'master' (bnewbold, 2021-11-11; 9 files, -1/+1504)

    Fatcat metadata cleanups/fixups, November 2021

    Three cleanups implemented in this branch:

    - update non-lowercase DOIs on releases (a couple hundred thousand
      entities)
    - fix incorrectly imported file/release pairs, on the file entity side
      (~250k entities)
    - expand truncated wayback URL timestamps in file entities (up to 10
      million entities)

    Instead of proposals, there are documents for each cleanup in
    `notes/cleanups/`. Have done spot testing of tens of thousands of
    entities each in QA, and am confident about running in production.

    Plan is to run updates in the order above. DOI and bugfix updates will
    go fairly fast; the wayback timestamp updates will go slower, and
    result in large re-indexing load in both fatcat and scholar, because
    both release and work entities will get triggered for update when file
    entities are updated.
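
    For the third cleanup, a minimal sketch of detecting truncated wayback
    timestamps (the real worker, per the commits below, also resolves the
    full timestamp via CDX records):

        import re

        WAYBACK_PAT = re.compile(r"^https?://web\.archive\.org/web/(\d+)/(.+)$")

        def has_short_wayback_ts(url: str) -> bool:
            m = WAYBACK_PAT.match(url)
            # complete wayback timestamps are 14 digits: YYYYMMDDhhmmss
            return bool(m) and len(m.group(1)) < 14

        print(has_short_wayback_ts("https://web.archive.org/web/2017/https://example.com/a.pdf"))            # True
        print(has_short_wayback_ts("https://web.archive.org/web/20170811115414/https://example.com/a.pdf"))  # False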
| * | wayback ts cleanup: one more filter tweak (Bryan Newbold, 2021-11-09; 1 file, -1/+2)
| * | update cleanups notes (Bryan Newbold, 2021-11-09; 2 files, -0/+72)
| * | file/release bugfix: handle files with multiple edits (Bryan Newbold, 2021-11-09; 1 file, -6/+6)
| * | cleanups: add more state=active checks (Bryan Newbold, 2021-11-09; 2 files, -0/+8)
| * | update link source filters in file/release bugfix (Bryan Newbold, 2021-11-09; 1 file, -2/+8)
| * | initial file/release bugfix cleanup worker and notes (Bryan Newbold, 2021-11-09; 2 files, -0/+375)
| * | updates to lowercase DOI cleanup (Bryan Newbold, 2021-11-09; 2 files, -7/+86)
| * | lowercase DOI lint and check entity status (Bryan Newbold, 2021-11-09; 1 file, -4/+5)
| * | more iteration on short wayback timestamp cleanup (Bryan Newbold, 2021-11-09; 3 files, -4/+129)
| * | lint: minor import tweak (Bryan Newbold, 2021-11-09; 1 file, -1/+1)
| * | cleanups: tweaks to wayback CDX cleanup scripts (Bryan Newbold, 2021-11-09; 2 files, -6/+21)
| * | cleanups: initial lowercase DOI cleanup script (Bryan Newbold, 2021-11-09; 1 file, -0/+145)
| * | wayback short ts: another regression test, and some small fmt/tweaks (Bryan Newbold, 2021-11-09; 1 file, -3/+38)
| * | wayback cleanup: actually update entity (Bryan Newbold, 2021-11-09; 1 file, -2/+4)
| * | imports: generic file cleanup removes exact duplicate URLs (Bryan Newbold, 2021-11-09; 1 file, -0/+9)
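
    A minimal sketch of what "removes exact duplicate URLs" could look like
    in the generic file cleanup (helper name and matching key are
    assumptions):

        def dedupe_urls(urls):
            # drop exact (url, rel) duplicates, preserving original order
            seen = set()
            out = []
            for u in urls:
                key = (u["url"], u.get("rel"))
                if key not in seen:
                    seen.add(key)
                    out.append(u)
            return out

        urls = [
            {"url": "https://example.com/a.pdf", "rel": "web"},
            {"url": "https://example.com/a.pdf", "rel": "web"},
        ]
        print(dedupe_urls(urls))  # only one entry remains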
| * | wayback short ts: add regression test for dupe URLs (Bryan Newbold, 2021-11-09; 1 file, -0/+44)
| * | short wayback ts: initial cleanup script implementation (Bryan Newbold, 2021-11-09; 1 file, -0/+251)
| * | wayback timestamps: updates to handle 4-digit case (Bryan Newbold, 2021-11-09; 2 files, -11/+108)
| * | start work on wayback short-timestamp cleanup (Bryan Newbold, 2021-11-09; 2 files, -0/+238)
* | update crawlability docs (Bryan Newbold, 2021-11-10; 1 file, -1/+9)
* | sitemap generation improvements (Bryan Newbold, 2021-11-10; 2 files, -1/+2)
* | start notes/proposal about 'crawlability' improvements (Bryan Newbold, 2021-11-10; 1 file, -0/+68)
* | pubmed: allow updates if PMCID does not exist yet (Bryan Newbold, 2021-11-10; 1 file, -1/+6)

    The intent of this change is to start updating Pubmed metadata records
    when a PMCID has been assigned, but that ext_id hasn't been recorded in
    fatcat yet.

    It is likely that this change will result in some additional duplicate
    PMCIDs in the catalog. But the principle is that the PMID is the
    primary pubmed identifier, and all records with a PMID should have the
    PMCID that pubmed indicates, even if there exists another incorrect
    record.
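
    A minimal sketch of the gating rule described above (entities are
    simplified to dicts; the real importer logic handles more cases):

        def should_update(existing: dict, incoming: dict) -> bool:
            # allow an update when pubmed now reports a PMCID that the
            # existing fatcat record lacks
            return bool(incoming.get("pmcid")) and not existing.get("pmcid")

        existing = {"pmid": "12345678", "pmcid": None}
        incoming = {"pmid": "12345678", "pmcid": "PMC7654321"}
        print(should_update(existing, incoming))  # True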
* update CHANGELOG for recent development (Bryan Newbold, 2021-11-05; 1 file, -0/+26)
* python tests: verify array sort order (Bryan Newbold, 2021-11-05; 4 files, -20/+18)

    In a couple of cases (eg, filesets), tests had been made agnostic to
    sort order, because the sort order was not stable. In other cases,
    simply small cleanups and comment improvements.
* api: add SQL 'ORDER BY' to many reads to stabilize API array ordering (Bryan Newbold, 2021-11-05; 1 file, -3/+14)

    The hope is to make things like file entity URLs, fileset manifests,
    and other arrays in the JSON API "stable", meaning that if you create
    an entity with a list in a given order, a read back (in any
    environment, including prod/QA, bulk dumps, etc) will return the array
    with the same sort order. This was informally happening most of the
    time, but occasionally not (!)

    The assumption is that these sorts will have little or no performance
    impact, as the common case is less than a dozen elements, the hard
    cases are a few thousand at most, and there is already a sorted index.
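
    The fatcat API server is Rust over PostgreSQL; the sqlite3 sketch below
    just demonstrates the principle, with illustrative table and column
    names: without an explicit ORDER BY, read-back order is unspecified, so
    reads sort on an already-indexed column:

        import sqlite3

        db = sqlite3.connect(":memory:")
        db.execute("CREATE TABLE file_rev_url (file_rev TEXT, idx INTEGER, url TEXT)")
        db.executemany(
            "INSERT INTO file_rev_url VALUES (?, ?, ?)",
            [
                ("rev1", 2, "https://example.com/mirror.pdf"),
                ("rev1", 1, "https://web.archive.org/web/2021/https://example.com/a.pdf"),
            ],
        )
        # deterministic read-back, regardless of insert or storage order
        rows = db.execute(
            "SELECT url FROM file_rev_url WHERE file_rev = ? ORDER BY idx", ("rev1",)
        ).fetchall()
        print([r[0] for r in rows])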
* enable type annotation checking with flake8 by default ('make lint') (Bryan Newbold, 2021-11-03; 1 file, -4/+2)
* cleanups: create a separate JsonLinePusher for cleanup workers (distinct base class) (Bryan Newbold, 2021-11-03; 3 files, -4/+20)
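
    A minimal sketch of the pusher pattern (the real JsonLinePusher in
    fatcat_tools does more, eg counting and batching; details here are
    assumed):

        import io
        import json

        class JsonLinePusher:
            def __init__(self, worker, json_file):
                self.worker = worker
                self.json_file = json_file

            def run(self):
                # feed one parsed JSON record per line to the worker
                for line in self.json_file:
                    if not line.strip():
                        continue
                    self.worker.push_record(json.loads(line))
                self.worker.finish()

        class PrintWorker:
            def push_record(self, record):
                print("record:", record)

            def finish(self):
                print("done")

        JsonLinePusher(PrintWorker(), io.StringIO('{"a": 1}\n{"b": 2}\n')).run()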