author    Bryan Newbold <bnewbold@archive.org>  2018-08-24 12:58:29 -0700
committer Bryan Newbold <bnewbold@archive.org>  2018-08-24 12:58:29 -0700
commit    45cb944c9d64fc93e6d89865bd2b96a5eaa71521 (patch)
tree      bd4dc15ee52ee5c3b31919ba31c41f63d1fa32f0 /README.md
parent    5cc0439668998de7b12badfc58d2fbad94c4e101 (diff)
download  sandcrawler-45cb944c9d64fc93e6d89865bd2b96a5eaa71521.tar.gz
          sandcrawler-45cb944c9d64fc93e6d89865bd2b96a5eaa71521.zip
update READMEs
Diffstat (limited to 'README.md')
-rw-r--r--  README.md  37
1 file changed, 16 insertions, 21 deletions
diff --git a/README.md b/README.md
index b322438..dcd7b11 100644
--- a/README.md
+++ b/README.md
@@ -6,34 +6,29 @@
\ooooooo| |___/\__,_|_| |_|\__,_|\___|_| \__,_| \_/\_/ |_|\___|_|
-This repo contains hadoop tasks (mapreduce and pig), luigi jobs, and other
-scripts and code for the internet archive (web group) journal ingest pipeline.
+This repo contains Hadoop jobs, luigi tasks, and other scripts and code for the
+internet archive web group's journal ingest pipeline.
-This repository is potentially public.
+Code in this repository is potentially public!
Archive-specific deployment/production guides and ansible scripts at:
[journal-infra](https://git.archive.org/bnewbold/journal-infra)
-## Python Setup
+**./python/** contains Hadoop streaming jobs written in python using the
+`mrjob` library. Most notable are the **extraction** scripts, which fetch PDF
+files from wayback/petabox, process them with GROBID, and store the result in
+HBase.
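+
+As a rough, hypothetical sketch of the shape of these jobs (not the actual
+extraction code), a minimal `mrjob` streaming job looks like:
+
+    # mimetype_count.py: hypothetical example, not part of this pipeline
+    from mrjob.job import MRJob
+
+    class MimetypeCount(MRJob):
+        """Counts rows per mimetype, assuming (url, mimetype) TSV input."""
+
+        def mapper(self, _, line):
+            # each input line is assumed to be a well-formed TSV row
+            url, mimetype = line.split("\t")[:2]
+            yield mimetype, 1
+
+        def reducer(self, mimetype, counts):
+            yield mimetype, sum(counts)
+
+    if __name__ == "__main__":
+        MimetypeCount.run()
+
+A job like this runs locally with `python mimetype_count.py input.tsv`, and
+against a Hadoop cluster by adding `-r hadoop`.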
-Pretty much everything here uses python/pipenv. To setup your environment for
-this, and python in general:
+**./scalding/** contains Hadoop jobs written in Scala using the Scalding
+framework. The intent is to write new non-trivial Hadoop jobs in Scala, which
+brings type safety and compiled performance.
- # libjpeg-dev is for some wayback/pillow stuff
- sudo apt install -y python3-dev python3-pip python3-wheel libjpeg-dev build-essential
- pip3 install --user pipenv
+**./pig/** contains a handful of Pig scripts, as well as some unittests
+implemented in python.
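+
+As a hedged sketch of what one of those python unittests could look like (the
+script name, parameters, and fixture paths here are made up), a test can run a
+Pig script in local mode and check the exit code:
+
+    # test_filter_urls.py: hypothetical example layout
+    import subprocess
+    import unittest
+
+    class TestFilterUrls(unittest.TestCase):
+
+        def test_local_mode_run(self):
+            # execute the (hypothetical) script against a small fixture file
+            result = subprocess.run(
+                ["pig", "-x", "local",
+                 "-param", "INPUT=tests/files/urls.tsv",
+                 "-param", "OUTPUT=/tmp/filter-urls-out",
+                 "filter_urls.pig"],
+                capture_output=True)
+            self.assertEqual(result.returncode, 0)
+
+    if __name__ == "__main__":
+        unittest.main()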
-On macOS:
+## Running Hadoop Jobs
- brew install libjpeg pipenv
+The `./please` python3 wrapper script is a helper for running jobs (python or
+scalding) on the IA Hadoop cluster. You'll need to run the setup/dependency
+tasks first; see README files in subdirectories.
-Each directory has its own environment. Do something like:
-
- cd python
- pipenv install --dev
- pipenv shell
-
-## Possible Issues with Setup
-
-Bryan had `~/.local/bin` in his `$PATH`, and that seemed to make everything
-work.