author     Martin Czygan <martin.czygan@gmail.com>  2021-07-09 13:26:35 +0200
committer  Martin Czygan <martin.czygan@gmail.com>  2021-07-09 13:26:35 +0200
commit     002764b5b1f8f27bd8ae42d33b2a6f42a2a4b7a1 (patch)
tree       38b7aa860202812f951ee5dc86da3a79200258ff
parent     f9ef1c989b4f85c81ac5f24b08f0d636636e7a4b (diff)
parent     e05f4c4973fc3573d3707d4d90779fad094ced6f (diff)
Merge branch 'master' of git.archive.org:webgroup/fuzzycat
* 'master' of git.archive.org:webgroup/fuzzycat:
  simplify README for general audience; move some content to notes
  sandcrawler slugify: lower-case greek ambiguity (OCR)
  DOI clean/normalize helper; and use in verification etc
  verify: page count parsing and comparison improvements
-rw-r--r--  README.md                        278
-rw-r--r--  fuzzycat/cluster.py               15
-rw-r--r--  fuzzycat/grobid_unstructured.py    3
-rw-r--r--  fuzzycat/simple.py                 3
-rw-r--r--  fuzzycat/utils.py                 32
-rw-r--r--  fuzzycat/verify.py                10
-rw-r--r--  notes/old_pipeline.md            177
-rw-r--r--  tests/test_utils.py               24
8 files changed, 318 insertions, 224 deletions
diff --git a/README.md b/README.md
index c70b8f3..b01776a 100644
--- a/README.md
+++ b/README.md
@@ -1,240 +1,98 @@
-# fuzzycat (wip)
-Fuzzy matching utilities for [fatcat](https://fatcat.wiki).
+<div align="center">
+<!-- Photo is CC BY 2.0 by Chika Watanabe from flickr -->
+<a href="https://www.flickr.com/photos/chikawatanabe/192112067">
+<img src="static/192112067_046be9fd21_b.jpg">
+</a>
+</div>
-![https://pypi.org/project/fuzzycat/](https://img.shields.io/pypi/v/fuzzycat?style=flat-square)
-
-To install with [pip](https://pypi.org/project/pip/), run:
-
-```
-$ pip install fuzzycat
-```
-
-![](static/192112067_046be9fd21_b.jpg)
-
-Photo by [Chika Watanabe](https://www.flickr.com/photos/chikawatanabe/192112067) (CC BY 2.0).
-
-## Overview
-
-The fuzzycat library currently works on [fatcat database release
-dumps](https://archive.org/details/fatcat_snapshots_and_exports?&sort=-publicdate)
-and can cluster similar release items, that is it can find clusters and can
-verify match candidates.
-
-For example we can identify:
-
-* versions of various items (arxiv, figshare, datacite, ...)
-* preprint and published pairs
-* similar items from different sources
-
-## TODO
-
-* [ ] take a list of title strings and return match candidates (faster than
- elasticsearch); e.g. derive a key and find similar keys some cached clusters
-* [ ] take a list of title, author documents and return match candidates; e.g.
- key may depend on title only, but verification can be more precise
-* [ ] take a more complete, yet partial document and return match candidates
+`fuzzycat`: bibliographic fuzzy matching for fatcat.wiki
+========================================================
-For this to work, we will need to have cluster from fatcat precomputed and
-cache. We also might want to have it sorted by key (which is a side effect of
-clustering) so we can binary search into the cluster file for the above todo
-items.
-
-## Dataset
-
-For development, we worked on a `release_export_expanded.json` dump (113G/700G
-zstd/plain, 154,203,375 lines) and with the [fatcat
-API](https://api.fatcat.wiki/).
-
-The development workflow looked something like the following.
-
-![](notes/steps.png)
-
-## Clustering
-
-Clustering derives sets of similar documents from a [fatcat database release
-dump](https://archive.org/details/fatcat_snapshots_and_exports?&sort=-publicdate).
-
-Following algorithms are implemented (or planned):
-
-* [x] exact title matches (title)
-* [x] normalized title matches (tnorm)
-* [x] NYSIIS encoded title matches (tnysi)
-* [x] extended title normalization (tsandcrawler)
-
-Example running clustering:
-
-```
-$ python -m fuzzycat cluster -t tsandcrawler < data/re.json | zstd -c -T0 > cluster.json.zst
-```
-
-Clustering works in a three step process:
-
-1. key extraction for each document (choose algorithm)
-2. sorting by keys (via [GNU sort](https://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html))
-3. group by key and write out ([itertools.groupby](https://docs.python.org/3/library/itertools.html#itertools.groupby))
-
-Note: For long running processes, this all-or-nothing approach is impractical;
-e.g. running clustering on the joint references and fatcat dataset (2B records)
-takes 24h+.
-
-Ideas:
-
-* [ ] make (sorted) key extraction a fast standalone thing
-
-> `cat data.jsonl | fuzzycat-key --algo X > data.key.tsv`
-
-Where `data.key` group (id, key, blob) or the like. Make this line speed (maybe
-w/ rust). Need to carry the blob, as we do not want to restrict options.
-
-
-## Verification
-
-Run verification (pairwise *double-check* of match candidates in a cluster).
-
-```
-$ time zstdcat -T0 sample_cluster.json.zst | python -m fuzzycat verify > sample_verify.txt
+![https://pypi.org/project/fuzzycat/](https://img.shields.io/pypi/v/fuzzycat?style=flat-square)
-real 7m56.713s
-user 8m50.703s
-sys 0m29.262s
-```
+This Python library contains routines for finding near-duplicate bibliographic
+entities (primarily research papers), and estimating whether two metadata
+records describe the same work (or variations of the same work). Some routines
+are designed to work "offline" with batches of billions of sorted metadata
+records, and others are designed to work "online" making queries against hosted
+web services and catalogs.
-This is a one-pass operation. For processing 150M docs, we very much depend on
-the documents being on disk in a file (we keep the complete document in the
-clustering result).
+`fuzzycat` was originally developed by Martin Czygan at the Internet Archive,
+and is used in the construction of a citation graph and to identify duplicate
+records in the [fatcat.wiki](https://fatcat.wiki) catalog and
+[scholar.archive.org](https://scholar.archive.org) search index.
-Example results:
+**DISCLAIMER:** this tool is still under development, as indicated by the "0"
+major version. The interface, semantics, and behavior are likely to be tweaked.
-```
-3450874 Status.EXACT Reason.TITLE_AUTHOR_MATCH
-2619990 Status.STRONG Reason.SLUG_TITLE_AUTHOR_MATCH
-2487633 Status.DIFFERENT Reason.YEAR
-2434532 Status.EXACT Reason.WORK_ID
-2085006 Status.DIFFERENT Reason.CONTRIB_INTERSECTION_EMPTY
-1397420 Status.DIFFERENT Reason.SHARED_DOI_PREFIX
-1355852 Status.DIFFERENT Reason.RELEASE_TYPE
-1290162 Status.AMBIGUOUS Reason.DUMMY
-1145511 Status.DIFFERENT Reason.BOOK_CHAPTER
-1009657 Status.DIFFERENT Reason.DATASET_DOI
- 996503 Status.STRONG Reason.PMID_DOI_PAIR
- 868951 Status.EXACT Reason.DATACITE_VERSION
- 796216 Status.STRONG Reason.DATACITE_RELATED_ID
- 704154 Status.STRONG Reason.FIGSHARE_VERSION
- 534963 Status.STRONG Reason.VERSIONED_DOI
- 343310 Status.STRONG Reason.TOKENIZED_AUTHORS
- 334974 Status.STRONG Reason.JACCARD_AUTHORS
- 293835 Status.STRONG Reason.PREPRINT_PUBLISHED
- 269366 Status.DIFFERENT Reason.COMPONENT
- 263626 Status.DIFFERENT Reason.SUBTITLE
- 224021 Status.AMBIGUOUS Reason.SHORT_TITLE
- 152990 Status.DIFFERENT Reason.PAGE_COUNT
- 133811 Status.AMBIGUOUS Reason.CUSTOM_PREFIX_10_5860_CHOICE_REVIEW
- 122600 Status.AMBIGUOUS Reason.CUSTOM_PREFIX_10_7916
- 79664 Status.STRONG Reason.CUSTOM_IEEE_ARXIV
- 46649 Status.DIFFERENT Reason.CUSTOM_PREFIX_10_14288
- 39797 Status.DIFFERENT Reason.JSTOR_ID
- 38598 Status.STRONG Reason.CUSTOM_BSI_UNDATED
- 18907 Status.STRONG Reason.CUSTOM_BSI_SUBDOC
- 15465 Status.EXACT Reason.DOI
- 13393 Status.DIFFERENT Reason.CUSTOM_IOP_MA_PATTERN
- 10378 Status.DIFFERENT Reason.CONTAINER
- 3081 Status.AMBIGUOUS Reason.BLACKLISTED
- 2504 Status.AMBIGUOUS Reason.BLACKLISTED_FRAGMENT
- 1273 Status.AMBIGUOUS Reason.APPENDIX
- 1063 Status.DIFFERENT Reason.TITLE_FILENAME
- 104 Status.DIFFERENT Reason.NUM_DIFF
- 4 Status.STRONG Reason.ARXIV_VERSION
-```
-## A full run
+## Quickstart
-Single threaded, 42h.
+Inside a `virtualenv` (or similar), install with [pip](https://pypi.org/project/pip/):
```
-$ time zstdcat -T0 release_export_expanded.json.zst | \
- TMPDIR=/bigger/tmp python -m fuzzycat cluster --tmpdir /bigger/tmp -t tsandcrawler | \
- zstd -c9 > cluster_tsandcrawler.json.zst
-{
- "key_fail": 0,
- "key_ok": 154202433,
- "key_empty": 942,
- "key_denylist": 0,
- "num_clusters": 124321361
-}
-
-real 2559m7.880s
-user 2605m41.347s
-sys 118m38.141s
-```
-
-So, 29881072 (about 20%) docs in the potentially duplicated set. Verification (about 15h w/o parallel):
-
+pip install fuzzycat
```
-$ time zstdcat -T0 cluster_tsandcrawler.json.zst | python -m fuzzycat verify | \
- zstd -c9 > cluster_tsandcrawler_verified_3c7378.tsv.zst
-...
+The `fuzzycat.simple` module contains high-level helpers which query Internet
+Archive hosted services:
-real 927m28.631s
-user 939m32.761s
-sys 36m47.602s
-```
+ import elasticsearch
+ from fuzzycat.simple import *
-----
+ es_client = elasticsearch.Elasticsearch("https://search.fatcat.wiki:443")
-# Misc
+ # parses reference using GROBID (at https://grobid.qa.fatcat.wiki),
+ # then queries Elasticsearch (at https://search.fatcat.wiki),
+ # then scores candidates against latest catalog record fetched from
+ # https://api.fatcat.wiki
+ best_match = closest_fuzzy_unstructured_match(
+ """Cunningham HB, Weis JJ, Taveras LR, Huerta S. Mesh migration following abdominal hernia repair: a comprehensive review. Hernia. 2019 Apr;23(2):235-243. doi: 10.1007/s10029-019-01898-9. Epub 2019 Jan 30. PMID: 30701369.""",
+ es_client=es_client)
-## Use cases
+ print(best_match)
+ # FuzzyReleaseMatchResult(status=<Status.EXACT: 'exact'>, reason=<Reason.DOI: 'doi'>, release={...})
-* [ ] take a release entity database dump as JSON lines and cluster releases
- (according to various algorithms)
-* [ ] take cluster information and run a verification step (misc algorithms)
-* [ ] create a dataset that contains grouping of releases under works
-* [ ] command line tools to generate cache keys, e.g. to match reference
- strings to release titles (this needs some transparent setup, e.g. filling of
-a cache before ops)
+ # same as above, but without the GROBID parsing, and returns multiple results
+ matches = close_fuzzy_biblio_matches(
+ dict(
+ title="Mesh migration following abdominal hernia repair: a comprehensive review",
+ first_author="Cunningham",
+ year=2019,
+ journal="Hernia",
+ ),
+ es_client=es_client,
+ )
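The `matches` list returned by `close_fuzzy_biblio_matches` can be inspected the same way; a minimal sketch, assuming each entry mirrors the `FuzzyReleaseMatchResult` shape shown above:

```python
# Sketch only: walk the candidate matches returned above (assumed shape:
# FuzzyReleaseMatchResult(status, reason, release), with release as a dict).
for m in matches:
    print(m.status, m.reason, m.release.get("title"))
```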
-## Usage
+A CLI tool is included for processing records in UNIX stdin/stdout pipelines:
-Release clusters start with release entities json lines.
+ # print usage
+ python -m fuzzycat
-```shell
-$ cat data/sample.json | python -m fuzzycat cluster -t title > out.json
-```
-Clustering 1M records (single core) takes about 64s (15K docs/s).
+## Features and Use-Cases
-```shell
-$ head -1 out.json
-{
- "k": "裏表紙",
- "v": [
- ...
- ]
-}
-```
+The **`cgraph`** system builds on this library to construct a citation graph
+by processing billions of structured and unstructured reference records
+extracted from scholarly papers.
-Using GNU parallel to make it faster.
+Automated imports of metadata records into the fatcat catalog use fuzzycat to
+filter out new records which look like duplicates of existing records from
+other sources.
-```
-$ cat data/sample.json | parallel -j 8 --pipe --roundrobin python -m fuzzycat.main cluster -t title
-```
+In conjunction with standard command-line tools (like `sort`), fatcat bulk
+metadata snapshots can be clustered and reduced into groups to flag duplicate
+records for merging.
-Interestingly, the parallel variants detects fewer clusters (because data is
-split and clusters are searched within each batch). TODO(miku): sort out sharding bug.
+Extracted reference strings from any source (webpages, books, papers, wikis,
+databases, etc) can be resolved against the fatcat catalog of scholarly papers.
-# Notes on Refs
-* technique from fuzzycat ported in parts to
- [skate](https://github.com/miku/skate) - to go from refs and release dataset
-to a number of clusters, relating references to releases
-* need to verify, but not the references against each other, only refs againt the release
+## Support and Acknowledgements
-# Notes on Performance
+Work on this software received support from the Andrew W. Mellon Foundation
+through multiple phases of the ["Ensuring the Persistent Access of Open Access
+Journal Literature"](https://mellon.org/grants/grants-database/advanced-search/?amount-low=&amount-high=&year-start=&year-end=&city=&state=&country=&q=%22Ensuring+the+Persistent+Access%22&per_page=25) project (see [original announcement](http://blog.archive.org/2018/03/05/andrew-w-mellon-foundation-awards-grant-to-the-internet-archive-for-long-tail-journal-preservation/)).
-While running bulk (1B+) clustering and verification, even with parallel,
-fuzzycat got slow. The citation graph project therefore contains a
-reimplementation of `fuzzycat.verify` and related functions in Go, which in
-this case is an order of magnitude faster. See:
-[skate](https://git.archive.org/martin/cgraph/-/tree/master/skate).
+Additional acknowledgements [at fatcat.wiki](https://fatcat.wiki/about).
diff --git a/fuzzycat/cluster.py b/fuzzycat/cluster.py
index 4e70bdd..c8384c1 100644
--- a/fuzzycat/cluster.py
+++ b/fuzzycat/cluster.py
@@ -151,10 +151,20 @@ SANDCRAWLER_CHAR_MAP = {
'\N{Latin capital letter T with stroke}': 'T',
'\N{Latin small letter t with stroke}': 't',
- # bnewbold additions
+ # bnewbold additions; mostly Latin-ish OCR ambiguous
'\N{MICRO SIGN}': 'u',
'\N{LATIN SMALL LETTER C}': 'c',
'\N{LATIN SMALL LETTER F WITH HOOK}': 'f',
+ '\N{Greek Small Letter Alpha}': 'a',
+ '\N{Greek Small Letter Beta}': 'b',
+ '\N{Greek Small Letter Iota}': 'i',
+ '\N{Greek Small Letter Kappa}': 'k',
+ '\N{Greek Small Letter Chi}': 'x',
+ '\N{Greek Small Letter Upsilon}': 'u',
+ '\N{Greek Small Letter Nu}': 'v',
+ '\N{Greek Small Letter Gamma}': 'y',
+ '\N{Greek Small Letter Tau}': 't',
+ '\N{Greek Small Letter Omicron}': 'o',
# bnewbold map-to-null (for non-printing stuff not in the regex)
'\N{PARTIAL DIFFERENTIAL}': '',
'\N{LATIN LETTER INVERTED GLOTTAL STOP}': '',
@@ -193,7 +203,7 @@ def sandcrawler_slugify(raw: str) -> str:
slug = slug.replace("&apos;", "'")
# iterate over all chars and replace from map, if in map; then lower-case again
- slug = ''.join([SANDCRAWLER_CHAR_MAP.get(c, c) for c in slug])
+ slug = ''.join([SANDCRAWLER_CHAR_MAP.get(c, c) for c in slug]).lower()
# early bailout before executing regex
if not slug:
@@ -217,6 +227,7 @@ def test_sandcrawler_slugify() -> None:
("علمية", "علمية"),
("期刊的数字", "期刊的数字"),
("les pré-impressions explorées à partir", "lespreimpressionsexploreesapartir"),
+ ("γ-Globulin", "yglobulin"),
# "MICRO SIGN"
("\xb5meter", "umeter"),
diff --git a/fuzzycat/grobid_unstructured.py b/fuzzycat/grobid_unstructured.py
index 79c39d3..5462ae1 100644
--- a/fuzzycat/grobid_unstructured.py
+++ b/fuzzycat/grobid_unstructured.py
@@ -18,6 +18,7 @@ from fatcat_openapi_client import ReleaseContrib, ReleaseEntity, ReleaseExtIds
from fuzzycat.config import settings
from fuzzycat.grobid2json import biblio_info
+from fuzzycat.utils import clean_doi
GROBID_API_BASE = settings.get("GROBID_API_BASE", "https://grobid.qa.fatcat.wiki")
@@ -89,7 +90,7 @@ def grobid_ref_to_release(ref: dict) -> ReleaseEntity:
issue=ref.get("issue"),
pages=ref.get("pages"),
ext_ids=ReleaseExtIds(
- doi=ref.get("doi"),
+ doi=clean_doi(ref.get("doi")),
pmid=ref.get("pmid"),
pmcid=ref.get("pmcid"),
arxiv=ref.get("arxiv_id"),
diff --git a/fuzzycat/simple.py b/fuzzycat/simple.py
index c78ac28..8b206b1 100644
--- a/fuzzycat/simple.py
+++ b/fuzzycat/simple.py
@@ -26,6 +26,7 @@ from fuzzycat.entities import entity_to_dict
from fuzzycat.grobid_unstructured import grobid_parse_unstructured
from fuzzycat.matching import match_release_fuzzy
from fuzzycat.verify import verify
+from fuzzycat.utils import clean_doi
@dataclass
@@ -184,7 +185,7 @@ def biblio_to_release(biblio: dict) -> ReleaseEntity:
release = ReleaseEntity(
title=biblio.get("title"),
ext_ids=ReleaseExtIds(
- doi=biblio.get("doi"),
+ doi=clean_doi(biblio.get("doi")),
pmid=biblio.get("pmid"),
pmcid=biblio.get("pmcid"),
arxiv=biblio.get("arxiv_id"),
diff --git a/fuzzycat/utils.py b/fuzzycat/utils.py
index d37ee32..a1c5124 100644
--- a/fuzzycat/utils.py
+++ b/fuzzycat/utils.py
@@ -6,6 +6,7 @@ import re
import string
import subprocess
import tempfile
+from typing import Optional
import requests
from glom import PathAccessError, glom
@@ -35,20 +36,32 @@ def es_compat_hits_total(resp):
def parse_page_string(s):
"""
- Parse typical page strings, e.g. 150-180.
+ Parse typical page strings, e.g. 150-180 or p123.
+
+ If only a single page number is found, returns that first page and None for
+ end page and count. If two are found, and they are consistent as a range,
+ returns the start, end, and count.
+
+ Does not handle lists of page numbers, roman numerals, and several other
+ patterns.
"""
if not s:
raise ValueError('page parsing: empty string')
+ if s[0].lower() in ('p', 'e'):
+ s = s[1:]
if s.isnumeric():
- return ParsedPages(start=int(s), end=int(s), count=1)
+ return ParsedPages(start=int(s), end=None, count=None)
page_pattern = re.compile("([0-9]{1,})-([0-9]{1,})")
match = page_pattern.match(s)
if not match:
raise ValueError('cannot parse page pattern from {}'.format(s))
start, end = match.groups()
if len(end) == 1 and start and start[-1] < end:
- # 261-5, odd, but happens
+ # '261-5', odd, but happens
end = start[:-1] + end
+ elif len(end) == 2 and start and start[-2:] < end:
+ # '577-89', also happens
+ end = start[:-2] + end
a, b = int(start), int(end)
if a > b:
raise ValueError('invalid page range: {}'.format(s))
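A short sketch of the updated `parse_page_string` behaviour described in the docstring above; the cases mirror the updated tests at the bottom of this diff:

```python
# Sketch only: single pages yield (start, None, None); ranges yield
# (start, end, count); a leading 'p' or 'e' is stripped first.
from fuzzycat.utils import parse_page_string

assert parse_page_string("123") == (123, None, None)
assert parse_page_string("577-89") == (577, 589, 13)
assert parse_page_string("p55-65") == (55, 65, 11)
```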
@@ -68,6 +81,19 @@ def dict_key_exists(doc, path):
else:
return True
+def clean_doi(raw: Optional[str]) -> Optional[str]:
+ if not raw:
+ return None
+ raw = raw.strip().lower()
+ if raw.startswith("doi:"):
+ raw = raw[4:]
+ if not "10." in raw:
+ return None
+ if not raw.startswith("10."):
+ raw = raw[raw.find("10."):]
+ if raw[7:9] == "//":
+ raw = raw[:8] + raw[9:]
+ return raw
def doi_prefix(v):
"""
diff --git a/fuzzycat/verify.py b/fuzzycat/verify.py
index 45a809e..1eeea40 100644
--- a/fuzzycat/verify.py
+++ b/fuzzycat/verify.py
@@ -92,7 +92,7 @@ from fuzzycat.data import (CONTAINER_NAME_BLACKLIST, PUBLISHER_BLACKLIST, TITLE_
from fuzzycat.entities import entity_to_dict
from fuzzycat.utils import (author_similarity_score, contains_chemical_formula, dict_key_exists,
doi_prefix, has_doi_prefix, jaccard, num_project, parse_page_string,
- slugify_string)
+ slugify_string, clean_doi)
Verify = collections.namedtuple("Verify", "status reason")
@@ -167,8 +167,8 @@ def verify(a: Dict, b: Dict, min_title_length=5) -> Tuple[str, str]:
# A few items have the same DOI.
try:
- a_doi = glom(a, "ext_ids.doi")
- b_doi = glom(b, "ext_ids.doi")
+ a_doi = clean_doi(glom(a, "ext_ids.doi"))
+ b_doi = clean_doi(glom(b, "ext_ids.doi"))
if a_doi is not None and a_doi == b_doi:
return Verify(Status.EXACT, Reason.DOI)
except PathAccessError:
@@ -597,7 +597,9 @@ def verify(a: Dict, b: Dict, min_title_length=5) -> Tuple[str, str]:
try:
a_parsed_pages = parse_page_string(glom(a, "pages"))
b_parsed_pages = parse_page_string(glom(b, "pages"))
- if abs(a_parsed_pages.count - b_parsed_pages.count) > 5:
+ if (a_parsed_pages.count != None
+ and b_parsed_pages.count != None
+ and abs(a_parsed_pages.count - b_parsed_pages.count) > 5):
return Verify(Status.DIFFERENT, Reason.PAGE_COUNT)
except (ValueError, PathAccessError):
pass
diff --git a/notes/old_pipeline.md b/notes/old_pipeline.md
new file mode 100644
index 0000000..2f84d66
--- /dev/null
+++ b/notes/old_pipeline.md
@@ -0,0 +1,177 @@
+
+## Performance
+
+For development, we worked on a `release_export_expanded.json` dump (113G/700G zstd/plain, 154,203,375 lines) and with the [fatcat API](https://api.fatcat.wiki/).
+
+
+### Clustering
+
+Clustering derives sets of similar documents from a [fatcat database release
+dump](https://archive.org/details/fatcat_snapshots_and_exports?&sort=-publicdate).
+
+
+Example running clustering:
+
+```
+$ python -m fuzzycat cluster -t tsandcrawler < data/re.json | zstd -c -T0 > cluster.json.zst
+```
+
+Clustering works in a three step process:
+
+1. key extraction for each document (choose algorithm)
+2. sorting by keys (via [GNU sort](https://www.gnu.org/software/coreutils/manual/html_node/sort-invocation.html))
+3. group by key and write out ([itertools.groupby](https://docs.python.org/3/library/itertools.html#itertools.groupby))
+
+Note: For long running processes, this all-or-nothing approach is impractical;
+e.g. running clustering on the joint references and fatcat dataset (2B records)
+takes 24h+.
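As a rough sketch (not the exact `fuzzycat.cluster` implementation), the sort-then-group step could look like the following, assuming a key-sorted TSV stream of `key<TAB>json-doc` lines and the `{"k": ..., "v": [...]}` output shape used elsewhere in these notes:

```python
# Sketch only: group a key-sorted stream of "key<TAB>doc" lines into clusters.
import itertools
import json
import sys

rows = (line.rstrip("\n").split("\t", 1) for line in sys.stdin)
for key, group in itertools.groupby(rows, key=lambda row: row[0]):
    docs = [json.loads(blob) for _, blob in group]
    print(json.dumps({"k": key, "v": docs}))
```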
+
+Ideas:
+
+* [ ] make (sorted) key extraction a fast standalone thing
+
+> `cat data.jsonl | fuzzycat-key --algo X > data.key.tsv`
+
+Where `data.key.tsv` contains (id, key, blob) groups or the like. Make this run
+at line speed (maybe w/ Rust). We need to carry the blob, as we do not want to
+restrict options.
+
+
+## Verification
+
+Run verification (pairwise *double-check* of match candidates in a cluster).
+
+```
+$ time zstdcat -T0 sample_cluster.json.zst | python -m fuzzycat verify > sample_verify.txt
+
+real 7m56.713s
+user 8m50.703s
+sys 0m29.262s
+```
+
+This is a one-pass operation. For processing 150M docs, we very much depend on
+the documents being on disk in a file (we keep the complete document in the
+clustering result).
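A minimal sketch of the pairwise step per cluster, assuming cluster documents shaped like `{"k": ..., "v": [...]}` and release dicts carrying an `ident` field:

```python
# Sketch only: pairwise verification of match candidates within one cluster.
import itertools
import json
import sys

from fuzzycat.verify import verify

for line in sys.stdin:
    cluster = json.loads(line)
    for a, b in itertools.combinations(cluster["v"], 2):
        result = verify(a, b)  # Verify(status, reason) namedtuple
        print(a.get("ident"), b.get("ident"), result.status, result.reason, sep="\t")
```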
+
+Example results:
+
+```
+3450874 Status.EXACT Reason.TITLE_AUTHOR_MATCH
+2619990 Status.STRONG Reason.SLUG_TITLE_AUTHOR_MATCH
+2487633 Status.DIFFERENT Reason.YEAR
+2434532 Status.EXACT Reason.WORK_ID
+2085006 Status.DIFFERENT Reason.CONTRIB_INTERSECTION_EMPTY
+1397420 Status.DIFFERENT Reason.SHARED_DOI_PREFIX
+1355852 Status.DIFFERENT Reason.RELEASE_TYPE
+1290162 Status.AMBIGUOUS Reason.DUMMY
+1145511 Status.DIFFERENT Reason.BOOK_CHAPTER
+1009657 Status.DIFFERENT Reason.DATASET_DOI
+ 996503 Status.STRONG Reason.PMID_DOI_PAIR
+ 868951 Status.EXACT Reason.DATACITE_VERSION
+ 796216 Status.STRONG Reason.DATACITE_RELATED_ID
+ 704154 Status.STRONG Reason.FIGSHARE_VERSION
+ 534963 Status.STRONG Reason.VERSIONED_DOI
+ 343310 Status.STRONG Reason.TOKENIZED_AUTHORS
+ 334974 Status.STRONG Reason.JACCARD_AUTHORS
+ 293835 Status.STRONG Reason.PREPRINT_PUBLISHED
+ 269366 Status.DIFFERENT Reason.COMPONENT
+ 263626 Status.DIFFERENT Reason.SUBTITLE
+ 224021 Status.AMBIGUOUS Reason.SHORT_TITLE
+ 152990 Status.DIFFERENT Reason.PAGE_COUNT
+ 133811 Status.AMBIGUOUS Reason.CUSTOM_PREFIX_10_5860_CHOICE_REVIEW
+ 122600 Status.AMBIGUOUS Reason.CUSTOM_PREFIX_10_7916
+ 79664 Status.STRONG Reason.CUSTOM_IEEE_ARXIV
+ 46649 Status.DIFFERENT Reason.CUSTOM_PREFIX_10_14288
+ 39797 Status.DIFFERENT Reason.JSTOR_ID
+ 38598 Status.STRONG Reason.CUSTOM_BSI_UNDATED
+ 18907 Status.STRONG Reason.CUSTOM_BSI_SUBDOC
+ 15465 Status.EXACT Reason.DOI
+ 13393 Status.DIFFERENT Reason.CUSTOM_IOP_MA_PATTERN
+ 10378 Status.DIFFERENT Reason.CONTAINER
+ 3081 Status.AMBIGUOUS Reason.BLACKLISTED
+ 2504 Status.AMBIGUOUS Reason.BLACKLISTED_FRAGMENT
+ 1273 Status.AMBIGUOUS Reason.APPENDIX
+ 1063 Status.DIFFERENT Reason.TITLE_FILENAME
+ 104 Status.DIFFERENT Reason.NUM_DIFF
+ 4 Status.STRONG Reason.ARXIV_VERSION
+```
+
+## A full run
+
+Single threaded, 42h.
+
+```
+$ time zstdcat -T0 release_export_expanded.json.zst | \
+ TMPDIR=/bigger/tmp python -m fuzzycat cluster --tmpdir /bigger/tmp -t tsandcrawler | \
+ zstd -c9 > cluster_tsandcrawler.json.zst
+{
+ "key_fail": 0,
+ "key_ok": 154202433,
+ "key_empty": 942,
+ "key_denylist": 0,
+ "num_clusters": 124321361
+}
+
+real 2559m7.880s
+user 2605m41.347s
+sys 118m38.141s
+```
+
+So, 29,881,072 docs (about 20%) are in the potentially duplicated set. Verification (about 15h w/o parallel):
+
+```
+$ time zstdcat -T0 cluster_tsandcrawler.json.zst | python -m fuzzycat verify | \
+ zstd -c9 > cluster_tsandcrawler_verified_3c7378.tsv.zst
+
+...
+
+real 927m28.631s
+user 939m32.761s
+sys 36m47.602s
+```
+
+----
+
+# Misc
+
+
+## Usage
+
+Release clusters start with release entities json lines.
+
+```shell
+$ cat data/sample.json | python -m fuzzycat cluster -t title > out.json
+```
+
+Clustering 1M records (single core) takes about 64s (15K docs/s).
+
+```shell
+$ head -1 out.json
+{
+ "k": "裏表紙",
+ "v": [
+ ...
+ ]
+}
+```
+
+Using GNU parallel to make it faster.
+
+```
+$ cat data/sample.json | parallel -j 8 --pipe --roundrobin python -m fuzzycat.main cluster -t title
+```
+
+Interestingly, the parallel variant detects fewer clusters (because the data is
+split and clusters are only searched within each batch). TODO(miku): sort out sharding bug.
+
+# Notes on Refs
+
+* technique from fuzzycat ported in parts to [skate](https://github.com/miku/skate) - to go from refs and release dataset to a number of clusters, relating references to releases
+* need to verify, but not the references against each other, only refs against the release
+
+# Notes on Performance
+
+While running bulk (1B+) clustering and verification, even with parallel,
+fuzzycat got slow. The citation graph project therefore contains a
+reimplementation of `fuzzycat.verify` and related functions in Go, which in
+this case is an order of magnitude faster. See:
+[skate](https://git.archive.org/martin/cgraph/-/tree/master/skate).
diff --git a/tests/test_utils.py b/tests/test_utils.py
index 381c44e..21b85a4 100644
--- a/tests/test_utils.py
+++ b/tests/test_utils.py
@@ -3,7 +3,7 @@ import os
from fuzzycat.utils import (author_similarity_score, cut, jaccard, nwise, slugify_string,
token_n_grams, tokenize_string, parse_page_string, dict_key_exists,
- zstdlines, es_compat_hits_total)
+ zstdlines, es_compat_hits_total, clean_doi)
def test_slugify_string():
@@ -77,15 +77,20 @@ def test_dict_key_exists():
def test_page_page_string():
- reject = ("", "123-2", "123-120", "123a-124", "-2-1")
+ reject = ("", "123-2", "123-120", "123a-124", "-2-1", "I-II", "xv-xvi", "p")
for s in reject:
with pytest.raises(ValueError):
assert parse_page_string(s)
- assert parse_page_string("123") == (123, 123, 1)
+ assert parse_page_string("123") == (123, None, None)
+ assert parse_page_string("90-90") == (90, 90, 1)
assert parse_page_string("123-5") == (123, 125, 3)
assert parse_page_string("123-125") == (123, 125, 3)
assert parse_page_string("123-124a") == (123, 124, 2)
assert parse_page_string("1-1000") == (1, 1000, 1000)
+ assert parse_page_string("p55") == (55, None, None)
+ assert parse_page_string("p55-65") == (55, 65, 11)
+ assert parse_page_string("e1234") == (1234, None, None)
+ assert parse_page_string("577-89") == (577, 589, 13)
def test_zstdlines():
@@ -118,3 +123,16 @@ def test_es_compat_hits_total():
)
for r, expected in cases:
assert es_compat_hits_total(r) == expected
+
+def test_clean_doi():
+ assert clean_doi(None) == None
+ assert clean_doi("blah") == None
+ assert clean_doi("10.1234/asdf ") == "10.1234/asdf"
+ assert clean_doi("10.1037//0002-9432.72.1.50") == "10.1037/0002-9432.72.1.50"
+ assert clean_doi("10.1037/0002-9432.72.1.50") == "10.1037/0002-9432.72.1.50"
+ assert clean_doi("http://doi.org/10.1234/asdf ") == "10.1234/asdf"
+ # GROBID mangled DOI
+ assert clean_doi("21924DOI10.1234/asdf ") == "10.1234/asdf"
+ assert clean_doi("https://dx.doi.org/10.1234/asdf ") == "10.1234/asdf"
+ assert clean_doi("doi:10.1234/asdf ") == "10.1234/asdf"
+ assert clean_doi("10.7326/M20-6817") == "10.7326/m20-6817"