Hadoop streaming map/reduce jobs written in python using the mrjob library.
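For orientation: mrjob jobs compile down to Hadoop streaming, where each phase is just a program that reads lines on stdin and writes tab-separated key/value lines on stdout. A minimal stdlib-only word-count sketch of that underlying protocol (a hypothetical example, not one of the jobs in this repo):

```python
from itertools import groupby

def mapper(lines):
    """Streaming map phase: emit one tab-separated "word<TAB>1" line per token."""
    for line in lines:
        for word in line.split():
            yield f"{word}\t1"

def reducer(lines):
    """Streaming reduce phase: sum counts per word.

    Hadoop delivers mapper output to reducers sorted by key, so
    itertools.groupby sees each word's pairs contiguously.
    """
    pairs = (line.split("\t") for line in lines)
    for word, group in groupby(pairs, key=lambda kv: kv[0]):
        yield f"{word}\t{sum(int(count) for _, count in group)}"

# Wired to stdin/stdout, the same logic could run outside Hadoop as:
#   cat input.txt | <map step> | sort | <reduce step>
```

mrjob hides this plumbing behind a job class with `mapper()` and `reducer()` methods, which is what the scripts in this directory use.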

## Development and Testing

System dependencies on Linux (Ubuntu/Debian):

    sudo apt install -y python3-dev python3-pip python3-wheel libjpeg-dev build-essential
    pip3 install --user pipenv

On macOS (using Homebrew):

    brew install libjpeg pipenv

You will probably need `~/.local/bin` on your `$PATH` for the `pipenv` command.

Fetch all python dependencies with:

    pipenv install --dev

Run the tests with:

    pipenv run pytest

Check test coverage with:

    pipenv run pytest --cov --cov-report html
    # open ./htmlcov/index.html in a browser

## Running Python Jobs on Hadoop

The `../please` script automates these steps; you should use that instead.

When running python streaming jobs on the actual hadoop cluster, we need to
bundle along our python dependencies in a virtual env tarball. Building this
tarball can be done like:

    export PIPENV_VENV_IN_PROJECT=1
    pipenv install --deploy
    tar -czf venv-current.tar.gz -C .venv .
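The cluster invocations below pass `--archive venv-current.tar.gz#venv`, which unpacks the tarball as `venv/` in each task's working directory; a setup command then needs to activate it so the job runs against the bundled dependencies. A hypothetical `mrjob.conf` fragment in that spirit (illustrative only; this repo's actual `mrjob.conf` is authoritative):

```yaml
# Hypothetical fragment; see the repo's mrjob.conf for the real configuration.
runners:
  hadoop:
    setup:
    - '. venv/bin/activate'
```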

### Extraction Task

An example actually connecting to HBase from a local machine, with thrift
running on a devbox and GROBID running on a dedicated machine:

    ./extraction_cdx_grobid.py \
        --hbase-table wbgrp-journal-extract-0-qa \
        --hbase-host wbgrp-svc263.us.archive.org \
        --grobid-uri http://wbgrp-svc096.us.archive.org:8070 \
        tests/files/example.cdx

Running from the cluster (once a `./venv-current.tar.gz` tarball exists):

    ./extraction_cdx_grobid.py \
        --hbase-table wbgrp-journal-extract-0-qa \
        --hbase-host wbgrp-svc263.us.archive.org \
        --grobid-uri http://wbgrp-svc096.us.archive.org:8070 \
        -r hadoop \
        -c mrjob.conf \
        --archive venv-current.tar.gz#venv \
        hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx

### Backfill Task

An example actually connecting to HBase from a local machine, with thrift
running on a devbox:

    ./backfill_hbase_from_cdx.py \
        --hbase-table wbgrp-journal-extract-0-qa \
        --hbase-host wbgrp-svc263.us.archive.org \
        tests/files/example.cdx

Running from the cluster (once a `./venv-current.tar.gz` tarball exists):

    ./backfill_hbase_from_cdx.py \
        --hbase-host wbgrp-svc263.us.archive.org \
        --hbase-table wbgrp-journal-extract-0-qa \
        -r hadoop \
        -c mrjob.conf \
        --archive venv-current.tar.gz#venv \
        hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx