path: root/python/fatcat_tools/workers
Commit message | Author | Age | Files | Lines
* typing: add assertions to fatcat_tool code to make type assumptions explicit (Bryan Newbold, 2021-11-03; 1 file, -0/+1)
* typing: add annotations to remaining fatcat_tools code (Bryan Newbold, 2021-11-03; 3 files, -51/+70)
    Again, these are just annotations, no changes made to get type checks to pass
* re-fix some lint issues after big 'fmt' (Bryan Newbold, 2021-11-02; 1 file, -2/+3)
* fmt (black): fatcat_tools/ (Bryan Newbold, 2021-11-02; 3 files, -196/+263)
* python: isort everything (Bryan Newbold, 2021-11-02; 1 file, -1/+2)
* hacks to work around new pylint false positives (Bryan Newbold, 2021-11-02; 1 file, -2/+3)
* cleanup imports after fatcat_tools.transforms change (Bryan Newbold, 2021-11-02; 1 file, -5/+8)
* re-fmt all the fatcat_tools __init__ files for readability (Bryan Newbold, 2021-11-02; 1 file, -3/+6)
* changelog worker: fix file/fileset typo, caught by lint (Bryan Newbold, 2021-05-25; 1 file, -1/+1)
    This would have been resulting in some releases not getting re-indexed into search.
* es worker: ensure kafka messages get cleared (Bryan Newbold, 2021-04-12; 1 file, -0/+2)
* es indexing: more 'wip' fixes (Bryan Newbold, 2021-04-12; 1 file, -1/+5)
* ES indexing: skip 'wip' entities with a warning (Bryan Newbold, 2021-04-12; 1 file, -11/+16)
* container ES index worker: support for querying status (Bryan Newbold, 2021-04-06; 1 file, -5/+32)
* indexing: don't use document names (Bryan Newbold, 2021-04-06; 1 file, -14/+4)
* entity update worker: treat fileset and webcapture updates like file updates (Bryan Newbold, 2020-12-16; 1 file, -3/+25)
    When webcapture or fileset entities are updated, then the release entities associated with them also need to be updated (and work entities, recursively). A TODO is to handle the case where a release_id is *removed* as well as *added*, and reprocess the releases in that case as well.
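The fan-out described in that commit message can be sketched roughly as follows. This is a hypothetical illustration, not the actual fatcat worker code: entities are plain dicts here, and `propagate_update` is an invented helper name (the real worker operates on fatcat API models and Kafka messages).

```python
# Hypothetical sketch: a fileset or webcapture update fans out to its
# associated releases, and each of those releases fans out to its work,
# so all of them get re-indexed.

def propagate_update(entity, releases_by_id):
    """Given an updated fileset/webcapture entity (as a dict), return the
    sets of release idents and work idents that also need re-indexing."""
    release_ids = set(entity.get("release_ids") or [])
    work_ids = set()
    for rid in release_ids:
        release = releases_by_id.get(rid)
        if release and release.get("work_id"):
            work_ids.add(release["work_id"])
    return release_ids, work_ids
```

Note the TODO in the commit body: this sketch only covers release_ids *currently* attached to the entity; releases whose ident was removed in the edit would be missed.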
* entity updates: don't ingest JSTOR DOI prefixes (Bryan Newbold, 2020-10-23; 1 file, -0/+2)
* entity updater: new work update feed (ident and changelog metadata only) (Bryan Newbold, 2020-10-16; 1 file, -2/+24)
* ingest: default to crawl protocols.io DOIs (Bryan Newbold, 2020-09-10; 1 file, -0/+2)
* entity updater: handle doi=None case better (Bryan Newbold, 2020-08-14; 1 file, -1/+1)
* entity updater: es['publisher_type'] not always set (Bryan Newbold, 2020-08-14; 1 file, -1/+1)
    This is a small bugfix for a production issue.
* entity update: change big5 ingest behavior (Bryan Newbold, 2020-08-11; 1 file, -9/+15)
    In addition to changing the OA default, this was the main intended behavior change in this group of commits: want to ingest fewer attempts that we *expect* to fail, but default to ingest/crawl attempt if we are uncertain. This is because there is a long tail of journals that register DOIs and are de facto OA (fulltext is available), but we don't have metadata indicating them as such.
* entity update: default to ingest non-OA works (Bryan Newbold, 2020-08-11; 1 file, -9/+10)
* entity update: skip ingest of figshare+zenodo 'group' DOIs (Bryan Newbold, 2020-08-11; 1 file, -0/+15)
* update crawl blocklist for SPNv2 requests which mostly fail (Bryan Newbold, 2020-08-10; 1 file, -2/+10)
* lint (flake8) tool python files (Bryan Newbold, 2020-07-01; 3 files, -12/+0)
* more changelog ES fixes (Bryan Newbold, 2020-04-17; 1 file, -4/+6)
* ES changelog worker: fixes for ident; fetch update from API if needed (Bryan Newbold, 2020-04-17; 1 file, -2/+9)
    The API fetch update may be needed for old changelog entries in the kafka feed.
* Merge branch 'martin-changelog-to-es' into 'master' (bnewbold, 2020-04-17; 2 files, -2/+23)
    derive changelog worker from release worker
    See merge request webgroup/fatcat!43
    * derive changelog worker from release worker (Martin Czygan, 2020-04-17; 2 files, -2/+23)
        Early versions of changelog entries may not have all the fields required for the current transform.
* changelog: limit types (Martin Czygan, 2020-04-16; 1 file, -5/+1)
    No partial docs (e.g. abstract), too generic components and entries, not HTML blogs.
* changelog: extend release_types considered documents (Martin Czygan, 2020-04-16; 1 file, -10/+19)
    according to release_rev.release_type, we have 29 values:

        fatcat_prod=# select release_type, count(release_type) from release_rev group by release_type;

            release_type   |   count
        -------------------+-----------
         abstract          |      2264
         article           |   6371076
         article-journal   | 101083841
         article-newspaper |     17062
         book              |   1676941
         chapter           |  13914854
         component         |     58990
         dataset           |   6860325
         editorial         |    133573
         entry             |   1628487
         graphic           |   1809471
         interview         |     19898
         legal_case        |      3581
         legislation       |      1626
         letter            |    275119
         paper-conference  |   6074669
         peer_review       |     30581
         post              |    245807
         post-weblog       |       135
         report            |   1010699
         retraction        |      1292
         review-book       |     96219
         software          |       316
         song              |     24027
         speech            |      4263
         standard          |    312364
         stub              |   1036813
         thesis            |    414397
                           |         0
        (29 rows)
* ingest: more DOI patterns to treat as OA (Bryan Newbold, 2020-03-28; 1 file, -0/+26)
    These are journal/publisher patterns which we suspect to actually be OA based on the large quantity of papers that crawl successfully. The better long-term solution will be to flag containers in some way as OA (or "should crawl"), but this is a good short-term solution.
* ingest: always try some lancet journals (Bryan Newbold, 2020-03-19; 1 file, -0/+3)
* entity worker: ingest more releases (Bryan Newbold, 2020-02-22; 1 file, -1/+37)
    If release is a dataset or image, don't do a pdf ingest request. If release is a datacite DOI, and release_type is a "document", crawl regardless of is_oa detection. This is mostly to crawl repositories (institutional or subject).
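The decision rules in that commit body can be sketched roughly like this. All names here are illustrative assumptions, not the actual fatcat worker code, and the set of "document" release types is a guess for demonstration purposes:

```python
# Hypothetical sketch of the ingest-decision rules described above:
# no PDF ingest for datasets/images, and crawl Datacite "document"
# releases regardless of open-access detection (repositories are
# often de facto OA even when metadata says otherwise).

DOCUMENT_TYPES = {"article-journal", "paper-conference", "report", "thesis"}

def should_request_pdf_ingest(release_type, doi_registrar, is_oa):
    # datasets and images have no PDF fulltext to crawl
    if release_type in ("dataset", "graphic"):
        return False
    # Datacite DOIs are mostly repository content: crawl documents
    # even when is_oa detection is negative
    if doi_registrar == "datacite" and release_type in DOCUMENT_TYPES:
        return True
    # otherwise fall back to open-access detection
    return bool(is_oa)
```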
* always crawl researchgate DOIs (Bryan Newbold, 2020-02-18; 1 file, -0/+2)
    Now that ingest is fixed
* add acceptlist override for biorxiv/medrxiv (Bryan Newbold, 2020-02-10; 1 file, -2/+12)
* fix KafkaError worker reporting for partition errors (Bryan Newbold, 2020-01-29; 2 files, -2/+2)
* additional DOI prefix filters (Bryan Newbold, 2020-01-28; 1 file, -0/+8)
    From martin, thanks.
* apply ingest request filtering in entity worker (Bryan Newbold, 2020-01-28; 1 file, -3/+34)
    `ingest_oa_only` behavior, and other filters, now handled in the entity update worker, instead of in the transform function. Also add a DOI prefix blocklist feature.
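A DOI prefix blocklist of the kind that commit describes could look roughly like the following. The prefixes listed and the helper name are illustrative assumptions, not the actual list or code from the fatcat worker:

```python
# Hypothetical sketch of a DOI prefix blocklist check, as described in
# the commit message above. Prefixes shown are placeholders only.

DOI_PREFIX_BLOCKLIST = [
    "10.2307/",  # illustrative entry (JSTOR's prefix; a later commit blocks JSTOR)
    "10.5555/",  # illustrative entry
]

def doi_is_blocked(doi):
    """Return True if the (lowercased) DOI starts with a blocked prefix."""
    if not doi:
        return False
    doi = doi.lower()
    return any(doi.startswith(prefix) for prefix in DOI_PREFIX_BLOCKLIST)
```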
* update ingest request schema (Bryan Newbold, 2019-12-13; 1 file, -1/+1)
    This is mostly changing ingest_type from 'file' to 'pdf', and adding 'link_source'/'link_source_id', plus some small cleanups.
* project -> ingest_request_source (Bryan Newbold, 2019-11-15; 1 file, -1/+1)
* add ingest request feature to entity_updates worker (Bryan Newbold, 2019-11-15; 1 file, -4/+20)
    Initially was going to create a new worker to consume from the release update channel, but couldn't get the edit context ("is this a new release, or update to an existing") from that context.

    Currently there is a flag in source code to control whether we only do OA releases or all releases. Starting with OA only to start slow, but should probably default to all, and make this a config flag. Should probably also have a config flag to control this entire feature.

    Tested locally in dev.
* review/fix all confluent-kafka produce code (Bryan Newbold, 2019-09-20; 2 files, -12/+26)
* small fixes to confluent-kafka importers/workers (Bryan Newbold, 2019-09-20; 3 files, -12/+41)
    - decrease default changelog pipeline to 5.0sec
    - fix missing KafkaException harvester imports
    - more confluent-kafka tweaks
    - updates to kafka consumer configs
    - bump elastic updates consumergroup (again)
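The consumer-config tuning mentioned in the list above (and the "bump consumergroup" trick) can be illustrated with a sketch like this. The group id, timeout values, and helper name are assumptions for illustration, not the production fatcat configuration; only the config key names are standard librdkafka/confluent-kafka settings.

```python
# Hypothetical sketch of a confluent-kafka (librdkafka) consumer config
# of the kind these commits were tuning. Values are illustrative.

def es_worker_consumer_config(brokers, group_suffix=""):
    return {
        "bootstrap.servers": brokers,
        # "bumping" the consumer group means appending a suffix so the
        # worker abandons old committed offsets and starts fresh
        "group.id": "fatcat-elasticsearch-updates" + group_suffix,
        # commit offsets manually, only after messages are fully
        # processed and indexed (at-least-once delivery)
        "enable.auto.commit": False,
        "auto.offset.reset": "latest",
        "session.timeout.ms": 30000,
    }
```

This dict would be passed to `confluent_kafka.Consumer(...)`; keeping auto-commit off is what makes "ensure kafka messages get cleared" (a later commit above) a real concern, since offsets must be committed explicitly.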
* convert pipeline workers from pykafka to confluent-kafka (Bryan Newbold, 2019-09-20; 3 files, -125/+230)
* refactor all python source for client lib name (Bryan Newbold, 2019-09-05; 2 files, -3/+3)
* start new ES container worker kafka group (Bryan Newbold, 2019-07-31; 1 file, -0/+2)
    The previous group seems to have gotten corrupted; my hypothesis is that this is due to pykafka being somewhat flakey, and am planning to move to librdkafka anyways. Re-indexing all the containers is pretty small/easy, so starting a new consumer group works fine in this case; the release indexer would be a bigger problem.
* fix typo in typo (Bryan Newbold, 2019-06-24; 1 file, -1/+1)
* fix typo in changelog worker (Bryan Newbold, 2019-06-24; 1 file, -1/+1)
* more links on new homepage (Bryan Newbold, 2019-06-19; 2 files, -2/+2)
    matching produce sizes. may want to tweak this config in the future for throughput.