-rw-r--r--  README.md         37
-rw-r--r--  python/README.md  52
2 files changed, 46 insertions, 43 deletions
diff --git a/README.md b/README.md
index b322438..dcd7b11 100644
--- a/README.md
+++ b/README.md
@@ -6,34 +6,29 @@
\ooooooo| |___/\__,_|_| |_|\__,_|\___|_| \__,_| \_/\_/ |_|\___|_|
-This repo contains hadoop tasks (mapreduce and pig), luigi jobs, and other
-scripts and code for the internet archive (web group) journal ingest pipeline.
+This repo contains Hadoop jobs, luigi tasks, and other scripts and code for the
+Internet Archive web group's journal ingest pipeline.
-This repository is potentially public.
+Code in this repository is potentially public!
Archive-specific deployment/production guides and ansible scripts at:
[journal-infra](https://git.archive.org/bnewbold/journal-infra)
-## Python Setup
+**./python/** contains Hadoop streaming jobs written in python using the
+`mrjob` library. Most notable are the **extraction** scripts, which fetch PDF
+files from wayback/petabox, process them with GROBID, and store the results in
+HBase.
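+
+A rough sketch of the shape of such a streaming job (the class name, CDX
+parsing, wayback fetch, and HBase column below are illustrative assumptions,
+not the repo's actual code):
+
+    #!/usr/bin/env python3
+    # Hypothetical sketch of an extraction-style mrjob streaming job:
+    # fetch a PDF from wayback, run it through GROBID, store TEI-XML in HBase.
+    import happybase
+    import requests
+    from mrjob.job import MRJob
+
+    class MRExtractSketch(MRJob):
+
+        def mapper(self, _, line):
+            # CDX lines are space-separated; fields 2 and 3 are timestamp and URL
+            _surt, timestamp, url = line.split()[:3]
+            # the 'id_' replay modifier asks wayback for the original bytes
+            pdf = requests.get(
+                'https://web.archive.org/web/{}id_/{}'.format(timestamp, url)).content
+            # GROBID's fulltext endpoint returns TEI-XML for the PDF
+            tei = requests.post(
+                'http://wbgrp-svc096.us.archive.org:8070/api/processFulltextDocument',
+                files={'input': pdf}).text
+            # write the result through the HBase thrift gateway
+            # (a real job would reuse one connection, not open one per record)
+            table = happybase.Connection('wbgrp-svc263.us.archive.org') \
+                .table('wbgrp-journal-extract-0-qa')
+            table.put(url.encode('utf-8'), {b'grobid0:tei_xml': tei.encode('utf-8')})
+            yield url, 'success'
+
+    if __name__ == '__main__':
+        MRExtractSketch.run()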
-Pretty much everything here uses python/pipenv. To setup your environment for
-this, and python in general:
+**./scalding/** contains Hadoop jobs written in Scala using the Scalding
+framework. The intent is to write new non-trivial Hadoop jobs in Scala, which
+brings type safety and the performance of compiled code.
- # libjpeg-dev is for some wayback/pillow stuff
- sudo apt install -y python3-dev python3-pip python3-wheel libjpeg-dev build-essential
- pip3 install --user pipenv
+**./pig/** contains a handful of Pig scripts, as well as some unit tests
+implemented in python.
-On macOS:
+## Running Hadoop Jobs
- brew install libjpeg pipenv
+The `./please` python3 wrapper script is a helper for running jobs (python or
+scalding) on the IA Hadoop cluster. You'll need to run the setup/dependency
+tasks first; see README files in subdirectories.
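+
+Conceptually the wrapper just assembles the same `mrjob` invocations
+documented in `./python/README.md`; roughly along these lines (an
+illustrative sketch only, not the actual `./please` interface or internals):
+
+    #!/usr/bin/env python3
+    # Illustrative sketch: launch the extraction streaming job on the cluster
+    # using the flags documented in python/README.md.
+    import subprocess
+
+    def run_extraction(cdx_path):
+        subprocess.check_call(
+            ['./extraction_cdx_grobid.py',
+             '--hbase-table', 'wbgrp-journal-extract-0-qa',
+             '--hbase-host', 'wbgrp-svc263.us.archive.org',
+             '--grobid-uri', 'http://wbgrp-svc096.us.archive.org:8070',
+             '-r', 'hadoop',
+             '-c', 'mrjob.conf',
+             '--archive', 'venv-current.tar.gz#venv',
+             cdx_path],
+            cwd='python')
+
+    if __name__ == '__main__':
+        run_extraction('hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx')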
-Each directory has it's own environment. Do something like:
-
- cd python
- pipenv install --dev
- pipenv shell
-
-## Possible Issues with Setup
-
-Bryan had `~/.local/bin` in his `$PATH`, and that seemed to make everything
-work.
diff --git a/python/README.md b/python/README.md
index aebc160..5c83908 100644
--- a/python/README.md
+++ b/python/README.md
@@ -3,9 +3,20 @@ Hadoop streaming map/reduce jobs written in python using the mrjob library.
## Development and Testing
-System dependencies in addition to `../README.md`:
+System dependencies on Linux (ubuntu/debian):
-- `libjpeg-dev` (for wayback libraries)
+ sudo apt install -y python3-dev python3-pip python3-wheel libjpeg-dev build-essential
+ pip3 install --user pipenv
+
+On macOS (using Homebrew):
+
+ brew install libjpeg pipenv
+
+You probably need `~/.local/bin` on your `$PATH`.
+
+Fetch all python dependencies with:
+
+ pipenv install --dev
Run the tests with:
@@ -16,10 +27,19 @@ Check test coverage with:
pytest --cov --cov-report html
# open ./htmlcov/index.html in a browser
-TODO: Persistant GROBID and HBase during development? Or just use live
-resources?
+## Running Python Jobs on Hadoop
+
+The `../please` script automates these steps; you should use that instead.
-## Extraction Task
+When running python streaming jobs on the actual hadoop cluster, we need to
+bundle along our python dependencies in a virtualenv tarball. Build the
+tarball like so:
+
+ export PIPENV_VENV_IN_PROJECT=1
+ pipenv install --deploy
+ tar -czf venv-current.tar.gz -C .venv .
+
+### Extraction Task
An example actually connecting to HBase from a local machine, with thrift
running on a devbox and GROBID running on a dedicated machine:
@@ -30,13 +50,7 @@ running on a devbox and GROBID running on a dedicated machine:
--grobid-uri http://wbgrp-svc096.us.archive.org:8070 \
tests/files/example.cdx
-Running from the cluster:
-
- # Create tarball of virtualenv
- export PIPENV_VENV_IN_PROJECT=1
- pipenv shell
- export VENVSHORT=`basename $VIRTUAL_ENV`
- tar -czf $VENVSHORT.tar.gz -C /home/bnewbold/.local/share/virtualenvs/$VENVSHORT .
+Running from the cluster (once a ./venv-current.tar.gz tarball exists):
./extraction_cdx_grobid.py \
--hbase-table wbgrp-journal-extract-0-qa \
@@ -44,10 +58,10 @@ Running from the cluster:
--grobid-uri http://wbgrp-svc096.us.archive.org:8070 \
-r hadoop \
-c mrjob.conf \
- --archive $VENVSHORT.tar.gz#venv \
+ --archive venv-current.tar.gz#venv \
hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx
-## Backfill Task
+### Backfill Task
An example actually connecting to HBase from a local machine, with thrift
running on a devbox:
@@ -57,18 +71,12 @@ running on a devbox:
--hbase-host wbgrp-svc263.us.archive.org \
tests/files/example.cdx
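+
+To spot-check what the jobs wrote, a short `happybase` session against the
+same thrift host works (a sketch, assuming the `happybase` client; it makes no
+claims about the table's column layout):
+
+    import happybase
+
+    # thrift service running on the devbox
+    conn = happybase.Connection('wbgrp-svc263.us.archive.org')
+    table = conn.table('wbgrp-journal-extract-0-qa')
+    for key, data in table.scan(limit=5):
+        print(key, sorted(data.keys()))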
-Actual invocation to run on Hadoop cluster (running on an IA devbox, where
-hadoop environment is configured):
-
- # Create tarball of virtualenv
- export PIPENV_VENV_IN_PROJECT=1
- pipenv install --deploy
- tar -czf venv-current.tar.gz -C .venv .
+Running from the cluster (once a ./venv-current.tar.gz tarball exists):
./backfill_hbase_from_cdx.py \
--hbase-host wbgrp-svc263.us.archive.org \
--hbase-table wbgrp-journal-extract-0-qa \
-r hadoop \
-c mrjob.conf \
- --archive $VENVSHORT.tar.gz#venv \
+ --archive venv-current.tar.gz#venv \
hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx