author     Bryan Newbold <bnewbold@archive.org>  2020-10-16 21:08:59 -0700
committer  Bryan Newbold <bnewbold@archive.org>  2020-10-16 21:08:59 -0700
commit     495c82bf5da53eabc544a36d2128d3d305223702 (patch)
tree       9e4dfb5572fe2b796c658456db1f5546f580730e /proposals/kafka_update_pipeline.md
parent     f7fcb8feb363c4441cf3b37cc3811ca302ad9cca (diff)
parent     31338c53df780cf04318ffceaa033f17614a6d62 (diff)
Merge branch 'bnewbold-update-workers'
Diffstat (limited to 'proposals/kafka_update_pipeline.md')
-rw-r--r--  proposals/kafka_update_pipeline.md  62
1 file changed, 62 insertions, 0 deletions
diff --git a/proposals/kafka_update_pipeline.md b/proposals/kafka_update_pipeline.md
new file mode 100644
index 0000000..a953d9c
--- /dev/null
+++ b/proposals/kafka_update_pipeline.md
@@ -0,0 +1,62 @@
+
+We want to receive a continual stream of updates from both fatcat and SIM
+scanning; index the updated content; and push it into elasticsearch.
+
+
+## Filtering and Affordances
+
+The `updated` and `fetched` timestamps are not immediately necessary or
+implemented, but they could be used to filter updates. For example, after
+re-loading from a bulk entity dump, we could "roll back" the update pipeline to
+re-process only fatcat (work) updates after the changelog index that the bulk
+dump is stamped with.
+
+At least in theory, the `fetched` timestamp could be used to prevent re-updates
+of existing documents in the ES index.
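+
+As a rough sketch of how such filtering might look (the changelog cutoff value
+and both function names here are hypothetical, not part of any existing schema
+or code):
+
+```python
+from datetime import datetime
+
+# Hypothetical: changelog index that the bulk entity dump was stamped with
+BULK_DUMP_CHANGELOG_INDEX = 4100000
+
+def should_process(request: dict) -> bool:
+    """Skip fatcat work update requests already covered by the bulk dump."""
+    if request.get("type") == "fatcat_work":
+        changelog = request.get("fatcat_changelog")
+        if changelog is not None and changelog <= BULK_DUMP_CHANGELOG_INDEX:
+            return False
+    return True
+
+def already_indexed(fetched: datetime, existing_doc_index_ts: datetime) -> bool:
+    """Hypothetical guard: skip re-indexing if the existing ES document was
+    indexed at or after the time this intermediate doc was fetched."""
+    return existing_doc_index_ts >= fetched
+```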
+
+The `doc_index_ts` timestamp in the ES index could be used in a future
+fetch-and-reindex worker to select documents for re-indexing, or to delete
+old/stale documents (eg, after SIM issue re-indexing if there were spurious
+"page" type documents remaining).
+
+## Message Types
+
+Scholar Update Request JSON
+- `key`: str
+- `type`: str
+    - `fatcat_work`
+    - `sim_issue`
+- `updated`: datetime, UTC, of event resulting in this request
+- `work_ident`: str (works)
+- `fatcat_changelog`: int (works)
+- `sim_item`: str (items)
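+
+As a concrete illustration (the identifier, timestamp, and changelog values
+below are invented; only the field names follow the schema above):
+
+```python
+import json
+
+# Hypothetical "fatcat_work" update request as it might appear on the
+# work-ident-updates topic; values are made up for illustration.
+update_request = {
+    "key": "work_hsmo6p4smrganpb3fndaj2lon4",
+    "type": "fatcat_work",
+    "updated": "2020-10-16T21:08:59Z",
+    "work_ident": "hsmo6p4smrganpb3fndaj2lon4",
+    "fatcat_changelog": 4123456,
+}
+print(json.dumps(update_request))
+```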
+
+"Heavy Intermediate" JSON (existing schema)
+- key
+- `fetched`: Optional[datetime], UTC, when this doc was collected
+
+Scholar Fulltext ES JSON (existing schema)
+
+
+## Kafka Topics
+
+fatcat-ENV.work-ident-updates
+ 6x, long retention, key compaction
+ key: doc ident
+scholar-ENV.sim-updates
+ 6x, long retention, key compaction
+ key: doc ident
+scholar-ENV.update-docs
+ 12x, short retention (2 months?)
+ key: doc ident
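+
+As a sketch, these topics could be created with the `confluent-kafka`
+AdminClient roughly as below (broker address, replication factor, and exact
+retention values are assumptions; "qa" stands in for ENV):
+
+```python
+from confluent_kafka.admin import AdminClient, NewTopic
+
+admin = AdminClient({"bootstrap.servers": "localhost:9092"})  # placeholder broker
+
+topics = [
+    # long retention with key compaction
+    NewTopic("fatcat-qa.work-ident-updates", num_partitions=6, replication_factor=3,
+             config={"cleanup.policy": "compact"}),
+    NewTopic("scholar-qa.sim-updates", num_partitions=6, replication_factor=3,
+             config={"cleanup.policy": "compact"}),
+    # short retention (roughly 2 months)
+    NewTopic("scholar-qa.update-docs", num_partitions=12, replication_factor=3,
+             config={"retention.ms": str(60 * 24 * 60 * 60 * 1000)}),
+]
+
+for topic, future in admin.create_topics(topics).items():
+    future.result()  # raises if creation failed
+```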
+
+## Workers
+
+scholar-fetch-docs-worker
+ consumes fatcat and/or sim update requests, individually
+ constructs heavy intermediate
+ publishes to update-docs topic
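+
+A minimal sketch of that loop, assuming `confluent-kafka` and a stub in place
+of the real fetch logic (broker address, group id, and the helper name are
+placeholders):
+
+```python
+import json
+from confluent_kafka import Consumer, Producer
+
+def fetch_heavy_intermediate(request: dict) -> dict:
+    # Hypothetical stand-in for the real fetch logic (fatcat API and/or SIM
+    # files); here it just wraps the request so the sketch is self-contained.
+    return {"key": request["key"], "fetched": None, "request": request}
+
+consumer = Consumer({
+    "bootstrap.servers": "localhost:9092",  # placeholder broker
+    "group.id": "scholar-fetch-docs-worker",
+    "enable.auto.commit": False,
+})
+consumer.subscribe(["fatcat-qa.work-ident-updates", "scholar-qa.sim-updates"])
+producer = Producer({"bootstrap.servers": "localhost:9092"})
+
+while True:
+    msg = consumer.poll(timeout=1.0)
+    if msg is None or msg.error():
+        continue
+    request = json.loads(msg.value())
+    doc = fetch_heavy_intermediate(request)
+    producer.produce("scholar-qa.update-docs", key=doc["key"], value=json.dumps(doc))
+    producer.poll(0)  # serve delivery callbacks
+    consumer.commit(message=msg)
+```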
+
+scholar-index-docs-worker
+ consumes updated "heavy intermediate" documents, in batches
+ transforms to elasticsearch schema
+ updates elasticsearch
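+
+And a corresponding sketch of the batch indexing side (again with placeholder
+hosts and group id; the transform is stubbed, standing in for the existing
+schema transform code):
+
+```python
+import json
+from confluent_kafka import Consumer
+from elasticsearch import Elasticsearch
+from elasticsearch.helpers import bulk
+
+def transform_heavy_to_fulltext(heavy: dict) -> dict:
+    # Hypothetical stand-in for the existing heavy-intermediate -> ES transform
+    return {"key": heavy["key"]}
+
+es = Elasticsearch("http://localhost:9200")  # placeholder host
+consumer = Consumer({
+    "bootstrap.servers": "localhost:9092",  # placeholder broker
+    "group.id": "scholar-index-docs-worker",
+    "enable.auto.commit": False,
+})
+consumer.subscribe(["scholar-qa.update-docs"])
+
+while True:
+    batch = consumer.consume(num_messages=100, timeout=5.0)
+    if not batch:
+        continue
+    actions = []
+    for msg in batch:
+        if msg.error():
+            continue
+        doc = transform_heavy_to_fulltext(json.loads(msg.value()))
+        actions.append({"_index": "scholar_fulltext", "_id": doc["key"], "_source": doc})
+    if actions:
+        bulk(es, actions)
+    consumer.commit()  # commit offsets only after the batch is indexed
+```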