author     Bryan Newbold <bnewbold@archive.org>  2018-08-24 12:28:51 -0700
committer  Bryan Newbold <bnewbold@archive.org>  2018-08-24 12:28:51 -0700
commit     3782311e29b7e477e1936c89f55ff6483fd02e65 (patch)
tree       b4484b9839f24d799d36881dfc85701ad888b94e /python/README.md
parent     2a998189ef49976bf01cc95acc1f18a73e1f0ff6 (diff)
rename ./mapreduce to ./python
Diffstat (limited to 'python/README.md')
-rw-r--r--  python/README.md  74
1 file changed, 74 insertions, 0 deletions
diff --git a/python/README.md b/python/README.md
new file mode 100644
index 0000000..aebc160
--- /dev/null
+++ b/python/README.md
@@ -0,0 +1,74 @@
+
+Hadoop streaming map/reduce jobs, written in Python using the mrjob library.
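+
+For orientation, an mrjob job is just a Python class with `mapper` and
+`reducer` methods that Hadoop streaming runs over the input lines. A minimal
+sketch (a generic word count, not one of the jobs in this directory):
+
+    # illustrative mrjob skeleton; not part of this repo
+    from mrjob.job import MRJob
+
+    class MRWordCount(MRJob):
+
+        def mapper(self, _, line):
+            # emit a (word, 1) pair for every token in the input line
+            for word in line.split():
+                yield word, 1
+
+        def reducer(self, word, counts):
+            # Hadoop groups values by key before calling the reducer
+            yield word, sum(counts)
+
+    if __name__ == '__main__':
+        MRWordCount.run()
+
+The same script runs locally by default, or on the cluster when invoked with
+`-r hadoop`, which is the pattern the commands below follow.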
+
+## Development and Testing
+
+System dependencies, in addition to those listed in `../README.md`:
+
+- `libjpeg-dev` (for wayback libraries)
+
+Run the tests with:
+
+ pipenv run pytest
+
+Check test coverage with:
+
+ pytest --cov --cov-report html
+ # open ./htmlcov/index.html in a browser
+
+TODO: Persistent GROBID and HBase during development? Or just use live
+resources?
+
+## Extraction Task
+
+An example run that actually connects to HBase from a local machine, with Thrift
+running on a devbox and GROBID running on a dedicated machine:
+
+ ./extraction_cdx_grobid.py \
+ --hbase-table wbgrp-journal-extract-0-qa \
+ --hbase-host wbgrp-svc263.us.archive.org \
+ --grobid-uri http://wbgrp-svc096.us.archive.org:8070 \
+ tests/files/example.cdx
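+
+Before a longer run it can be worth sanity-checking both services from Python.
+A minimal sketch, assuming the `happybase` Thrift client and GROBID's standard
+`/api/isalive` health endpoint:
+
+    # ad-hoc connectivity check; hostnames match the example invocation above
+    import happybase
+    import requests
+
+    # HBase Thrift gateway running on the devbox (default port 9090)
+    conn = happybase.Connection(host='wbgrp-svc263.us.archive.org')
+    print(conn.tables())  # should list wbgrp-journal-extract-0-qa
+
+    # GROBID REST service on the dedicated machine
+    resp = requests.get('http://wbgrp-svc096.us.archive.org:8070/api/isalive',
+                        timeout=10)
+    print(resp.status_code, resp.text)  # expect 200 and "true"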
+
+Running from the cluster:
+
+    # Create tarball of virtualenv
+    pipenv shell
+    export VENVSHORT=`basename $VIRTUAL_ENV`
+    tar -czf $VENVSHORT.tar.gz -C /home/bnewbold/.local/share/virtualenvs/$VENVSHORT .
+
+ ./extraction_cdx_grobid.py \
+ --hbase-table wbgrp-journal-extract-0-qa \
+ --hbase-host wbgrp-svc263.us.archive.org \
+ --grobid-uri http://wbgrp-svc096.us.archive.org:8070 \
+ -r hadoop \
+ -c mrjob.conf \
+ --archive $VENVSHORT.tar.gz#venv \
+ hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx
+
+## Backfill Task
+
+An example run that actually connects to HBase from a local machine, with Thrift
+running on a devbox:
+
+ ./backfill_hbase_from_cdx.py \
+ --hbase-table wbgrp-journal-extract-0-qa \
+ --hbase-host wbgrp-svc263.us.archive.org \
+ tests/files/example.cdx
+
+Actual invocation to run on the Hadoop cluster (from an IA devbox, where the
+Hadoop environment is configured):
+
+ # Create tarball of virtualenv
+ export PIPENV_VENV_IN_PROJECT=1
+ pipenv install --deploy
+ tar -czf venv-current.tar.gz -C .venv .
+
+ ./backfill_hbase_from_cdx.py \
+ --hbase-host wbgrp-svc263.us.archive.org \
+ --hbase-table wbgrp-journal-extract-0-qa \
+ -r hadoop \
+ -c mrjob.conf \
+        --archive venv-current.tar.gz#venv \
+ hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx
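+
+Both jobs consume CDX lines. A rough sketch of parsing one line, assuming the
+common 11-column CDX layout used by these crawls (field names here are
+illustrative, not the project's schema):
+
+    # parse a single 11-column CDX line; layout assumed, adjust to actual files
+    def parse_cdx_line(line):
+        fields = line.strip().split()
+        if len(fields) != 11:
+            return None  # skip header or malformed lines
+        (surt, timestamp, url, mimetype, status, digest,
+         redirect, metatags, size, offset, warc_file) = fields
+        return {
+            'surt': surt,
+            'timestamp': timestamp,
+            'url': url,
+            'mimetype': mimetype,
+            'status': status,
+            'digest': digest,
+            'warc_path': warc_file,
+            'offset': offset,
+            'size': size,
+        }
+
+    with open('tests/files/example.cdx') as f:
+        print(parse_cdx_line(f.readline()))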