Commit message  (Author, Age; Files changed, Lines removed/added)
* vastly improve entity_to_dict() speed  (Bryan Newbold, 2019-01-28; 1 file changed, -1/+9)
|
* add filesets and webcaptures to dumps  (Bryan Newbold, 2019-01-28; 5 files changed, -2/+35)
|
* fatcat -> fatcat_release ES index  (Bryan Newbold, 2019-01-28; 3 files changed, -20/+21)
|
* transform and import fixes/tweaks  (Bryan Newbold, 2019-01-25; 8 files changed, -30/+214)
|
* improved journal metadata munger  (Bryan Newbold, 2019-01-25; 2 files changed, -100/+325)
|
* tweak elastic schemas (again)  (Bryan Newbold, 2019-01-25; 2 files changed, -6/+4)
|
* update journal meta import/transform  (Bryan Newbold, 2019-01-25; 6 files changed, -154/+226)
|
* grobid import extra metadata tweaks  (Bryan Newbold, 2019-01-24; 1 file changed, -6/+7)
|
* refactor _get_editgroup => get_editgroup_id  (Bryan Newbold, 2019-01-24; 2 files changed, -5/+6)
|
* refactor make_rel_url  (Bryan Newbold, 2019-01-24; 3 files changed, -29/+66)
|
* examples of works with many authors (eg, used in tests)  (Bryan Newbold, 2019-01-24; 1 file changed, -0/+6)
|
* tweak crossref import, and update tests  (Bryan Newbold, 2019-01-24; 5 files changed, -32/+89)
|
* empty fields test  (Bryan Newbold, 2019-01-24; 1 file changed, -0/+13)
|
* allow importing contrib/refs lists  (Bryan Newbold, 2019-01-24; 4 files changed, -13/+50)
|     The motivation here isn't really to support these gigantic lists on
|     principle, but to be able to ingest large corpuses without having to
|     decide whether to filter out or crop such lists.
* codegen schema tweaks  (Bryan Newbold, 2019-01-24; 5 files changed, -20/+61)
|
* Merge branch 'schema-tweaks'  (Bryan Newbold, 2019-01-24; 1 file changed, -16/+8)
|\
| * more IDENT types in API schema  (Bryan Newbold, 2019-01-14; 1 file changed, -16/+8)
| |
* | more 2019-01-16 import timing  (Bryan Newbold, 2019-01-24; 1 file changed, -0/+70)
| |
* | notes on refactoring container 'extra'  (Bryan Newbold, 2019-01-24; 1 file changed, -0/+79)
| |
* | first-pass journal metadata munger  (Bryan Newbold, 2019-01-24; 5 files changed, -0/+512)
| |
* | importer bugfixes  (Bryan Newbold, 2019-01-23; 3 files changed, -8/+14)
| |
* | more import script fixes  (Bryan Newbold, 2019-01-23; 1 file changed, -1/+4)
| |
* | initial changelog and container ES schemas  (Bryan Newbold, 2019-01-23; 2 files changed, -0/+113)
| |
* | start changes to release ES schema  (Bryan Newbold, 2019-01-23; 5 files changed, -141/+234)
| |
* | bunch of crossref import tweaks (need tests)  (Bryan Newbold, 2019-01-23; 1 file changed, -50/+43)
| |
* | try to fix any_abstract  (Bryan Newbold, 2019-01-23; 1 file changed, -1/+1)
| |
* | clean() checks if it returns null-length string  (Bryan Newbold, 2019-01-23; 1 file changed, -1/+5)
| |
* | ensure no zero-length strings in SQL schema  (Bryan Newbold, 2019-01-23; 1 file changed, -43/+43)
| |
* | update importer script  (Bryan Newbold, 2019-01-23; 1 file changed, -33/+24)
| |
* | matched importer: bezerk mode to skip file updates  (Bryan Newbold, 2019-01-23; 1 file changed, -11/+5)
| |
* | ensure crossref importer doesn't create empty editgroups  (Bryan Newbold, 2019-01-23; 1 file changed, -0/+2)
| |
* | ftfy all over (needs Pipfile.lock)  (Bryan Newbold, 2019-01-23; 8 files changed, -39/+75)
| |
* | add missing date  (Bryan Newbold, 2019-01-23; 1 file changed, -1/+1)
| |
* | more tests; fix some importer behavior  (Bryan Newbold, 2019-01-23; 7 files changed, -50/+111)
| |
* | specific test for desc/extra in editgroups  (Bryan Newbold, 2019-01-23; 1 file changed, -2/+26)
| |
* | improve changelog tests  (Bryan Newbold, 2019-01-23; 6 files changed, -12/+15)
| |
* | refactor remaining importers  (Bryan Newbold, 2019-01-22; 13 files changed, -356/+324)
| |
* | allow passing description+extra to batch endpoints  (Bryan Newbold, 2019-01-22; 14 files changed, -143/+638)
| |     Pretty messy, but I needed some way to do this. In particular,
| |     requires json.dumps() in python code, for now. Blech.
* | refactored crossref importer to new style  (Bryan Newbold, 2019-01-22; 5 files changed, -118/+198)
| |
* | new importer API interfaces  (Bryan Newbold, 2019-01-22; 2 files changed, -0/+181)
| |
* | crossref importer updates  (Bryan Newbold, 2019-01-22; 4 files changed, -22/+82)
| |
* | add helper/hack script to generate bots  (Bryan Newbold, 2019-01-22; 1 file changed, -0/+25)
| |
* | pubmed+datacite tokens; no journal,grobid,matched tokens  (Bryan Newbold, 2019-01-22; 2 files changed, -5/+4)
| |
* | fix issn -> journal-metadata rename  (Bryan Newbold, 2019-01-22; 1 file changed, -1/+1)
| |
* | MAG schema notes  (Bryan Newbold, 2019-01-22; 1 file changed, -0/+65)
| |
* | 2019-01-16 QA import timing notes  (Bryan Newbold, 2019-01-22; 1 file changed, -0/+422)
| |
* | more per-entity tests  (Bryan Newbold, 2019-01-22; 7 files changed, -58/+312)
| |
* | add missing arxiv+jstor id indices  (Bryan Newbold, 2019-01-22; 1 file changed, -0/+2)
| |
* | allow arxiv and jstor lookups  (Bryan Newbold, 2019-01-21; 12 files changed, -13/+106)
| |
* | remove coden and abbrev from python tools  (Bryan Newbold, 2019-01-21; 2 files changed, -8/+0)
| |