* regenerate map in continuous mode
* add tests
|
Records from https://www.micropublication.org/ did not have a date in
FC, although the raw data contained date strings: instead of the
finer-grained "attributes.date", they used "attributes.published" and/or
"attributes.publicationYear". Support for those fields has been added,
including a test case.
During this test (#30), a processing gap for names became apparent (an
author may have "given_name" and "surname", but no "name"). That bug has
been fixed, too.
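A minimal sketch of the kind of name fallback this implies, assuming creator entries are plain dicts with the keys mentioned above (the helper name is hypothetical):
```
def creator_display_name(creator):
    # Hypothetical helper: prefer an explicit "name", otherwise
    # assemble one from "given_name" and "surname".
    if creator.get('name'):
        return creator['name']
    parts = [p for p in (creator.get('given_name'), creator.get('surname')) if p]
    return ' '.join(parts) or None
```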
|
Technically, [...] DOI names may incorporate any printable characters
from the Universal Character Set (UCS-2) of ISO/IEC 10646, which is the
character set defined by Unicode (https://www.doi.org/doi_handbook/2_Numbering.html#2.5.1).
Mostly for QA reasons, we currently treat a DOI containing an "en dash"
as invalid.
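A minimal sketch of such a QA check (the function name is hypothetical; the en dash is U+2013):
```
def doi_passes_qa(doi):
    # Hypothetical check: the DOI handbook permits any printable UCS
    # character, but an en dash in a DOI is almost always a mangled
    # hyphen, so reject it here.
    return '\u2013' not in doi

assert doi_passes_qa('10.1000/xyz-123')
assert not doi_passes_qa('10.1000/xyz\u2013123')
```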
|
Use values from:
* attributes.creators[]
* attributes.contributors[]
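A sketch of combining both arrays, assuming a parsed `attributes` dict from the DataCite API (the helper name is hypothetical):
```
def collect_contrib_names(attributes):
    # Hypothetical: gather names from both DataCite arrays, guarding
    # against missing or null fields.
    names = []
    for field in ('creators', 'contributors'):
        for entry in attributes.get(field, []) or []:
            name = entry.get('name')
            if name:
                names.append(name)
    return names
```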
|
* attributes.metadataVersion
* attributes.schemaVersion
* attributes.version (source-dependent values; follows the suggestions in
  https://schema.datacite.org/meta/kernel-4.3/doc/DataCite-MetadataKernel_v4.3.pdf#page=26,
  but values vary)
Furthermore:
* attributes.types.resourceTypeGeneral
* attributes.types.resourceType
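A sketch of carrying these attributes into free-form extra metadata; the output key names and the `extra` layout are assumptions:
```
def version_and_type_extra(attributes):
    # Hypothetical: copy version- and type-related fields into an
    # "extra" dict, skipping anything that is missing.
    extra = {}
    for key in ('metadataVersion', 'schemaVersion', 'version'):
        if attributes.get(key) is not None:
            extra[key] = attributes[key]
    types = attributes.get('types') or {}
    for key in ('resourceTypeGeneral', 'resourceType'):
        if types.get(key):
            extra[key] = types[key]
    return extra
```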
|
> include release_month as a top-level extra field [...] to
auto-populate the schema field from that
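A minimal sketch of the idea, assuming dates arrive as "YYYY", "YYYY-MM" or "YYYY-MM-DD" strings (the function name is hypothetical):
```
def set_release_month(extra, date_str):
    # Hypothetical: when at least year and month are known, keep the
    # month as a top-level extra field so a schema field can be
    # auto-populated from it later.
    parts = (date_str or '').split('-')
    if len(parts) >= 2 and parts[1].isdigit():
        month = int(parts[1])
        if 1 <= month <= 12:
            extra['release_month'] = month
    return extra
```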
|
Datacite defines placeholders for unknown values:
* https://support.datacite.org/docs/schema-values-unknown-information-v43
Clean abstracts.
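A sketch of stripping such placeholders; the set below is a subset of the codes documented at the link above, and the helper name is hypothetical:
```
# A few of the placeholder codes documented by Datacite for unknown
# or unassigned information.
UNKNOWN_MARKERS = {'(:unav)', '(:unkn)', '(:none)', '(:null)', '(:tba)', '(:unas)', '(:unap)'}

def clean_value(value):
    # Hypothetical helper: treat documented placeholders (and empty
    # strings) as missing values.
    if value is None:
        return None
    value = value.strip()
    if not value or value in UNKNOWN_MARKERS:
        return None
    return value
```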
|
> always include extra values for the respective DOI registrars
(datacite, crossref, jalc), even if they are empty ({}), to be used as a
flag so we know which DOI registrar supplied the metadata.
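A minimal sketch of that flag on a release's extra metadata (the layout shown is an assumption):
```
def registrar_extra(registrar, fields=None):
    # Hypothetical: always emit a (possibly empty) dict keyed by the
    # registrar name ("datacite", "crossref", "jalc"), so the source
    # of the metadata can be identified later.
    return {registrar: dict(fields or {})}

assert registrar_extra('datacite') == {'datacite': {}}
```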
|
Estimated time for a single call is on the order of 50ms.
|
> The convention for display_name and raw_name is to be how the name
would normally be printed, not in index form (surname comma given_name).
So we might need to un-encode names like "Tricart, Pierre".
Use an additional `index_form_to_display_name` function to convert index
form to display form, heuristically.
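A minimal sketch of that heuristic, only untangling the simple "Surname, Given" case (the real function may handle more, e.g. organization names):
```
def index_form_to_display_name(s):
    # Heuristic sketch: turn "Tricart, Pierre" into "Pierre Tricart",
    # but leave strings with zero or several commas untouched, since
    # those are probably not simple index forms.
    if s.count(',') != 1:
        return s
    surname, given = [p.strip() for p in s.split(',')]
    if not given or not surname:
        return s
    return '{} {}'.format(given, surname)

assert index_form_to_display_name('Tricart, Pierre') == 'Pierre Tricart'
```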
|
The `test_datacite_conversions` function will compare an input
(datacite) document to an expected output (release entity as JSON). This
way, it should not be too hard to add more cases: add an input and an
output file, and increase the counter in the range loop within the test.
To view input and result side by side with vim, change into the test
directory and run:
tests/files/datacite $ ./caseview.sh 18
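A sketch of the described test pattern; the file naming scheme and the `convert` parameter (standing in for the importer's actual conversion function) are assumptions:
```
import json

def check_datacite_conversions(convert, num_cases):
    # Sketch: compare each datacite input document against the
    # expected release entity JSON; bump num_cases when adding a
    # new input/output pair.
    for i in range(num_cases):
        with open('tests/files/datacite/datacite_doc_{:02d}.json'.format(i)) as f:
            doc = json.load(f)
        with open('tests/files/datacite/datacite_result_{:02d}.json'.format(i)) as f:
            expected = json.load(f)
        assert convert(doc) == expected
```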
|
The current version succeeded in importing a random sample of 100000
records (0.5%) from datacite.
The --debug (write JSON to stdout) and --insert-log-file (log batch
before committing to db) flags are temporarily added to help with
debugging.
Add a few unit tests.
Some edge cases:
a) Existing keys without a value require a slightly awkward:
```
titles = attributes.get('titles', []) or []
```
b) There can be 0, 1, or more titles (the first one wins).
c) Date handling is probably not ideal. Datacite has a potentially
fine-grained list of dates.
The test case (tests/files/datacite_sample.jsonl) refers to
https://ssl.fao.org/glis/doi/10.18730/8DYM9, which has date (main
descriptor) 1986. The datacite record contains: 2017 (publicationYear,
probably the year of record creation in the reference system), 1978-06-03
(collected, e.g. an experimental sample), 1986 ("Accepted"). The online
version of the resource lists one more date (2019-06-05 10:14:43, a
WIEWS update).
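A sketch of one way to pick a date from such a record; the preference order shown is an assumption, not necessarily what the importer does:
```
DATE_TYPE_PREFERENCE = ('Issued', 'Accepted', 'Available', 'Collected')

def pick_release_date(attributes):
    # Hypothetical: walk attributes.dates in a fixed dateType order,
    # falling back to publicationYear when nothing matches.
    dates = {d.get('dateType'): d.get('date')
             for d in (attributes.get('dates') or [])}
    for date_type in DATE_TYPE_PREFERENCE:
        if dates.get(date_type):
            return dates[date_type]
    return attributes.get('publicationYear')
```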
|
- refs: article-title not title; save unstructured; authors not author
- save 'language' field (already an ISO code)
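A minimal sketch of a reference entry built along those lines, assuming Crossref-style reference objects (the output layout is illustrative):
```
def convert_ref(ref):
    # Hypothetical: take the reference title from "article-title"
    # (not "title") and keep the raw "unstructured" citation string.
    return {
        'title': ref.get('article-title'),
        'extra': {'unstructured': ref.get('unstructured')},
    }
```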
|
|
|
| |
This was resulting in a collision with default/example database objects.
|
The motivation here isn't really to support these gigantic lists on
principle, but to be able to ingest large corpuses without having to
decide whether to filter out or crop such lists.