Hadoop streaming map/reduce jobs written in Python using the mrjob library.
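
For orientation, each job follows the standard mrjob pattern: a class with
`mapper` and `reducer` methods, run as a Hadoop streaming step. A minimal
sketch (the job class, and the assumption that the mimetype is the fourth
whitespace-separated CDX field, are illustrative, not this repo's code):

    from mrjob.job import MRJob

    class MRCountMimetypes(MRJob):
        """Hypothetical job: count mimetypes seen in a CDX file."""

        def mapper(self, _, line):
            fields = line.split()
            if len(fields) > 3:
                # field 4 is the mimetype in common CDX formats (assumption)
                yield fields[3], 1

        def reducer(self, key, values):
            yield key, sum(values)

    if __name__ == '__main__':
        MRCountMimetypes.run()
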
## Development and Testing

System dependencies, in addition to those listed in `../README.md`:

- `libjpeg-dev` (for the wayback libraries)

Run the tests with:

    pipenv run pytest

Check test coverage with:

    pytest --cov --cov-report html
    # open ./htmlcov/index.html in a browser
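
Individual jobs can also be exercised in-process with mrjob's inline runner,
which avoids any Hadoop setup; a sketch of that pattern, written against the
hypothetical job above (module name is also made up):

    from io import BytesIO

    from count_mimetypes import MRCountMimetypes  # hypothetical module/job from above

    def test_count_mimetypes():
        job = MRCountMimetypes(['-r', 'inline', '-'])
        job.sandbox(stdin=BytesIO(
            b'edu,example)/a 20170101000000 http://example.edu/a application/pdf 200\n'))
        with job.make_runner() as runner:
            runner.run()
            results = dict(job.parse_output(runner.cat_output()))
        assert results == {'application/pdf': 1}
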
TODO: GROBID and HBase during development?
## Extraction Task

An example that actually connects to HBase from a local machine, with Thrift
running on a devbox and GROBID running on a dedicated machine:

    ./extraction_cdx_grobid.py \
        --hbase-table wbgrp-journal-extract-0-qa \
        --hbase-host bnewbold-dev.us.archive.org \
        --grobid-uri http://wbgrp-svc096.us.archive.org:8070 \
        tests/files/example.cdx
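
Under the hood, the GROBID side of this task is an HTTP call: each PDF is
POSTed to GROBID's standard `/api/processFulltextDocument` endpoint, which
returns TEI-XML. A rough sketch (the function name and error handling are
illustrative, not this repo's code):

    import requests

    def extract_fulltext(pdf_bytes, grobid_uri):
        # GROBID expects the PDF as a multipart form field named 'input'
        resp = requests.post(
            grobid_uri + '/api/processFulltextDocument',
            files={'input': pdf_bytes})
        resp.raise_for_status()
        return resp.content  # TEI-XML body
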
## Backfill Task

An example that actually connects to HBase from a local machine, with Thrift
running on a devbox:

    ./backfill_hbase_from_cdx.py \
        --hbase-table wbgrp-journal-extract-0-qa \
        --hbase-host bnewbold-dev.us.archive.org \
        tests/files/example.cdx
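
Conceptually, the backfill parses each CDX line and writes one row per capture
into HBase over Thrift; a minimal sketch using the happybase client (the row
key scheme and column names here are made up, not this repo's actual schema):

    import happybase

    conn = happybase.Connection('bnewbold-dev.us.archive.org')
    table = conn.table('wbgrp-journal-extract-0-qa')

    with open('tests/files/example.cdx') as f:
        for line in f:
            fields = line.split()
            surt, timestamp, url, mimetype = fields[0], fields[1], fields[2], fields[3]
            # hypothetical row key and column names, for illustration only
            row_key = '{}-{}'.format(surt, timestamp)
            table.put(row_key.encode('utf-8'), {
                b'cdx:url': url.encode('utf-8'),
                b'cdx:mimetype': mimetype.encode('utf-8'),
            })
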
Actual invocation to run on the Hadoop cluster (from an IA devbox, where the
Hadoop environment is configured):

    # Create tarball of virtualenv
    pipenv shell
    export VENVSHORT=`basename $VIRTUAL_ENV`
    tar -czf $VENVSHORT.tar.gz -C /home/bnewbold/.local/share/virtualenvs/$VENVSHORT .

    ./backfill_hbase_from_cdx.py \
        --hbase-host bnewbold-dev.us.archive.org \
        --hbase-table wbgrp-journal-extract-0-qa \
        -r hadoop \
        -c mrjob.conf \
        --archive $VENVSHORT#venv \
        hdfs:///user/bnewbold/journal_crawl_cdx/citeseerx_crawl_2017.cdx
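
The `-c mrjob.conf` file is what points mrjob at the virtualenv unpacked from
the `--archive` tarball on each task node; a sketch of what such a config
might look like (this repo's actual `mrjob.conf` may differ):

    # hypothetical mrjob.conf; the repo's actual file may differ
    runners:
      hadoop:
        # --archive $VENVSHORT#venv unpacks the tarball as ./venv on each node
        python_bin: venv/bin/python
        setup:
        - export PATH=$PWD/venv/bin:$PATH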