Diffstat (limited to 'README.md')
-rw-r--r--  README.md | 54 +++++++++++++++++++++++++++++++++++-------------------
 1 file changed, 35 insertions(+), 19 deletions(-)
diff --git a/README.md b/README.md
index e53e775..afe1ff6 100644
--- a/README.md
+++ b/README.md
@@ -6,34 +6,50 @@
\ooooooo| |___/\__,_|_| |_|\__,_|\___|_| \__,_| \_/\_/ |_|\___|_|
-This repo contains hadoop tasks (mapreduce and pig), luigi jobs, and other
-scripts and code for the internet archive (web group) journal ingest pipeline.
+This repo contains back-end python workers, Hadoop jobs, Luigi tasks, and
+other scripts and code for the Internet Archive web group's journal ingest
+pipeline. This code is of mixed quality and is mostly experimental. The goal
+for most of it is to submit metadata to [fatcat](https://fatcat.wiki), which
+is the more stable, maintained, and public-facing service.
-This repository is potentially public.
+Code in this repository is potentially public! It is not, for the most part,
+intended to accept public contributions. Much of this will not work outside
+the IA cluster environment.
Archive-specific deployment/production guides and ansible scripts at:
-[journal-infra](https://git.archive.org/bnewbold/journal-infra)
+[journal-infra](https://git.archive.org/webgroup/journal-infra)
-## Python Setup
-Pretty much everything here uses python/pipenv. To setup your environment for
-this, and python in general:
+## Repository Layout
- # libjpeg-dev is for some wayback/pillow stuff
- sudo apt install -y python3-dev python3-pip python3-wheel libjpeg-dev build-essential
- pip3 install --user pipenv
+**./proposals/** design documentation and change proposals
-On macOS:
+**./python/** contains scripts and utilities for ingesting content from wayback
+and/or the web (via the save-page-now API), and other processing pipelines.
- brew install libjpeg pipenv
+**./sql/** contains schema, queries, and backfill scripts for a PostgreSQL
+database index (e.g., file metadata, CDX, and GROBID status tables).
-Each directory has it's own environment. Do something like:
+**./pig/** contains a handful of Pig scripts, as well as some unit tests
+implemented in python. Only rarely used.
- cd mapreduce
- pipenv install --dev
- pipenv shell
+**./scalding/** contains Hadoop jobs written in Scala using the Scalding
+framework. The intent was to write new non-trivial Hadoop jobs in Scala, which
+brings type safety and compiled performance. Mostly DEPRECATED.
-## Possible Issues with Setup
+**./python_hadoop/** contains Hadoop streaming jobs written in python using the
+`mrjob` library. Mostly DEPRECATED.
-Bryan had `~/.local/bin` in his `$PATH`, and that seemed to make everything
-work.
+
+## Running Python Code
+
+You need python3.8 (or python3.6+ and `pyenv`) and `pipenv` to set up the
+environment. You may also need the Debian packages `libpq-dev` and
+`python-dev` to install some dependencies.
+
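+As a rough sketch (adapted from the old setup notes this commit removes; the
+`./python/` directory is the one described in the layout above):
+
+    # system packages needed to build some python dependencies
+    sudo apt install -y python3-dev python3-pip python3-wheel libpq-dev build-essential
+    pip3 install --user pipenv
+
+    # each subdirectory has its own pipenv environment, eg:
+    cd python
+    pipenv install --dev
+    pipenv shell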
+
+## Running Hadoop Jobs (DEPRECATED)
+
+The `./please` python3 wrapper script is a helper for running jobs (python or
+scalding) on the IA Hadoop cluster. You'll need to run the setup/dependency
+tasks first; see README files in subdirectories.
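+
+A hedged example (assuming the script prints usage via a standard `--help`
+flag; check the script itself and the subdirectory READMEs for actual
+subcommands and setup steps):
+
+    # list available job types and options
+    ./please --help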