## Arxiv

Used the metha-sync tool to update the harvest. Then went into the raw
storage directory (as opposed to using `metha-cat`) and plucked out the
weekly files updated since the last import. Created a tarball and uploaded
it to:

    https://archive.org/download/arxiv_raw_oai_snapshot_2019-05-22/arxiv_20190522_20191220.tar.gz
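
For the record, the harvest and tarball steps were along these lines (the
OAI endpoint, the `arXivRaw` format, and the use of `metha-sync -dir` to
locate the storage directory are assumptions; adjust to the actual setup):

    # incremental OAI-PMH harvest; metha stores gzipped chunks in its base dir
    metha-sync -format arXivRaw http://export.arxiv.org/oai2

    # pluck out chunk files updated since the last import and bundle them up
    cd "$(metha-sync -dir -format arXivRaw http://export.arxiv.org/oai2)"
    mkdir -p /tmp/arxiv_20190522_20191220
    find . -name '*.xml.gz' -newermt '2019-05-22' \
        -exec cp -t /tmp/arxiv_20190522_20191220 {} +
    tar czf arxiv_20190522_20191220.tar.gz -C /tmp arxiv_20190522_20191220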

Downloaded, extracted, then unzipped:

    gunzip *.gz
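
Spelled out, with the datasets path matching the import commands below (the
exact working directory is an assumption):

    cd /srv/fatcat/datasets
    wget https://archive.org/download/arxiv_raw_oai_snapshot_2019-05-22/arxiv_20190522_20191220.tar.gz
    tar xzf arxiv_20190522_20191220.tar.gz
    cd arxiv_20190522_20191220
    gunzip *.gz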

Ran the importer:

    export FATCAT_AUTH_WORKER_ARXIV=...

    ./fatcat_import.py --batch-size 100 arxiv /srv/fatcat/datasets/arxiv_20190522_20191220/2019-05-31-00000000.xml
    # Counter({'exists': 1785, 'total': 1001, 'insert': 549, 'skip': 1, 'update': 0})

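    # then import all extracted files in parallel, 15 jobs at a time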
    fd .xml /srv/fatcat/datasets/arxiv_20190522_20191220/ | parallel -j15 ./fatcat_import.py --batch-size 100 arxiv {}

Things seem to run smoothly in QA. New releases get grouped with old works
correctly, and there is no obvious duplication.

In prod, loaded just the first file as a start, waiting to see if auto-ingest
happens. Looks like yes! Great that everything is so smooth. All seem to be new
captures.

In production elasticsearch, there were 2,377,645 arxiv releases before this
update import, 741,033 of them with files attached. Guessing about 150k new
releases, but will check.

After the import, up to 2,531,542 arxiv releases, so only about 154k new
releases created. 781,122 now have fulltext.
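
For reference, the before/after counts above can be pulled with queries
along these lines (the index name, host, and field names are assumptions
about the release search schema):

    # count releases that have an arxiv_id
    curl -s 'http://localhost:9200/fatcat_release/_count' \
        -H 'Content-Type: application/json' \
        -d '{"query": {"exists": {"field": "arxiv_id"}}}'

    # same, restricted to releases with at least one file attached
    curl -s 'http://localhost:9200/fatcat_release/_count' \
        -H 'Content-Type: application/json' \
        -d '{"query": {"bool": {"must": [
            {"exists": {"field": "arxiv_id"}},
            {"range": {"file_count": {"gte": 1}}}
        ]}}}'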