# cgraph

----

Code related to the scholarly citation graph, maintained by
[martin@archive.org](mailto:martin@archive.org). This repository contains
multiple subprojects to keep all relevant code close together.

* python: mostly [luigi](https://github.com/spotify/luigi) tasks (using
  [shiv](https://github.com/linkedin/shiv) for single-file deployments); a
  minimal task sketch follows after this list
* skate: various Go command line tools (packaged as deb)
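
A rough, hypothetical sketch of what a task in the python subproject looks
like (task name, parameter and output path are made up for illustration; the
real derivation tasks live under `python/`):

```
# Hypothetical luigi task sketch; the name, parameter and output path are
# illustrative only -- the actual derivation tasks live under python/.
import luigi


class BiblioRefSample(luigi.Task):
    """Derive a small sample file of citation edges (illustration only)."""

    sample_size = luigi.IntParameter(default=1000)

    def output(self):
        return luigi.LocalTarget("output/biblioref-sample.json")

    def run(self):
        with self.output().open("w") as handle:
            # A real task would read metadata dumps and emit up to
            # self.sample_size edges here; we just write a placeholder.
            handle.write('{"placeholder": true}\n')


if __name__ == "__main__":
    luigi.run()
```

With `luigi.run()` as the entry point, a file like this can be run as
`python <file>.py BiblioRefSample --local-scheduler`, and shiv can bundle the
whole project into a single `refcat.pyz`-style executable.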

Context: [fatcat](https://fatcat.wiki), "Mellon Grant" (20/21).

We use an informal, internal versioning scheme; the current version is v2, the next will be v3.

# Grant related tasks

Three of the four grant phases contain citation graph related tasks.

* [x] Link PID or DOI to archived versions

> As of v2, we have linkage between fatcat release entities by doi, pmid, pmcid, arxiv.

* [ ] URLs in corpus linked to best possible timestamp (GWB)

> The CDX API is probably good for sampling (a sketch follows below); for the full corpus we'll need to tap into `/user/wmdata2/cdx-all-index/` (note: try pyspark)
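
A sketch of how a sampling lookup against the public CDX API could look (the
example URL is arbitrary; endpoint and parameters are the publicly documented
ones):

```
# Sketch: fetch candidate GWB captures for a reference URL from the public
# CDX API; a "best possible timestamp" heuristic would then pick among rows.
import json
from urllib.parse import urlencode
from urllib.request import urlopen


def cdx_captures(url, limit=10):
    """Return capture rows (as dicts) for a URL from the CDX API."""
    params = urlencode({"url": url, "output": "json", "limit": limit})
    with urlopen("https://web.archive.org/cdx/search/cdx?" + params) as resp:
        rows = json.load(resp)
    if not rows:
        return []
    header, data = rows[0], rows[1:]
    return [dict(zip(header, row)) for row in data]


for capture in cdx_captures("http://example.com/"):
    print(capture["timestamp"], capture["original"], capture["statuscode"])
```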

* [ ] Harvest all URLs in citation corpus (maybe do a sample first)

> A seed list (derived from references, not from the full text) is done; we still need to prepare a crawl and lookups in GWB.

* [ ] Links between records w/o DOI (fuzzy matching)

> As of v2, we have a fuzzy matching procedure (yielding about 5-10% of the total results); a minimal illustration of the general idea follows below.
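
Not the actual procedure, just a minimal illustration of the general idea
behind fuzzy title matching: aggressively normalize titles, then score the
normalized forms with a string similarity measure.

```
# Illustration only -- not the fuzzy matching procedure used in this project.
# Idea: normalize titles, then use a similarity ratio as a match score.
import re
from difflib import SequenceMatcher


def normalize_title(title):
    """Lowercase, drop punctuation, collapse whitespace."""
    title = re.sub(r"[^\w\s]", " ", title.lower())
    return " ".join(title.split())


def title_similarity(a, b):
    """Similarity score in [0, 1] between two normalized titles."""
    return SequenceMatcher(None, normalize_title(a), normalize_title(b)).ratio()


print(title_similarity("The Theory of Everything.", "THE THEORY OF EVERYTHING"))
```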

* [ ] Publication of augmented citation graph, explore data mining, etc.
* [ ] Interlinkage with other sources, monographs, commercial publications, etc.

> As of v3, we have a minimal linkage with Wikipedia.

* [ ] Link Wikipedia (en) references to metadata or archived records

> This is ongoing and should be part of v3.

* [ ] Metadata records for often-cited non-scholarly web publications
* [ ] Collaborations: I4OC, wikicite

We attended an online workshop in 09/2020, organized in part by OCI members;
recording: [fatcat five minute
intro](https://archive.org/details/fatcat_workshop_open_citations_open_scholarly_metadata_2020)

# TODO

* [ ] create a first index, ES7 [schema PR](https://git.archive.org/webgroup/fatcat/-/merge_requests/99)
* [ ] build API, [spec notes](https://git.archive.org/webgroup/fatcat/-/blob/10eb30251f89806cb7a0f147f427c5ea7e5f9941/proposals/2021-01-29_citation_api.md)

# IA Use Cases

* [ ] discovery tool, e.g. "cited by ..." link
* [ ] things citing this page/book/...
* [ ] metadata discovery; e.g. most cited w/o entry in catalog
* [ ] Turn All References Blue (TARB)

# Additional notes

* [https://docs.google.com/document/d/1vg_q0lxp6CrGGFS4rR06_TbiROh9nj7UV5NFvueLRn0/edit](https://docs.google.com/document/d/1vg_q0lxp6CrGGFS4rR06_TbiROh9nj7UV5NFvueLRn0/edit)

# Current status

```
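# runs the v2 derivation task via the shiv-packaged python project (refcat.pyz)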
$ refcat.pyz BiblioRefV2
```

* schema: [https://git.archive.org/webgroup/fatcat/-/blob/10eb30251f89806cb7a0f147f427c5ea7e5f9941/proposals/2021-01-29_citation_api.md#schemas](https://git.archive.org/webgroup/fatcat/-/blob/10eb30251f89806cb7a0f147f427c5ea7e5f9941/proposals/2021-01-29_citation_api.md#schemas)
* matches via: doi, arxiv, pmid, pmcid, fuzzy title matches
* 785,569,011 edges (~103% of 12/2020 OCI/crossref release), ~39G compressed, ~288G uncompressed

# Rough Notes

* [python/notes/version_0.md](python/notes/version_0.md)
* [python/notes/version_1.md](python/notes/version_1.md)
* [python/notes/version_2.md](python/notes/version_2.md)
* [python/notes/version_3.md](python/notes/version_3.md)