Commit log (message; author; date; files changed; lines removed/added):
* and more bad sha1 (Bryan Newbold, 2020-08-06; 1 file, -0/+3)
* more pdfextract skip sha1hex (Bryan Newbold, 2020-08-06; 1 file, -9/+12)
* more bad PDF sha1; print sha1 before poppler extract (Bryan Newbold, 2020-08-05; 1 file, -0/+7)
* spn2: skip js behavior (experiment) (Bryan Newbold, 2020-08-05; 1 file, -0/+1)
  Hoping this will increase crawling throughput with little-to-no impact on fidelity.
* SPN2: ensure not fetching outlinks (Bryan Newbold, 2020-08-05; 1 file, -0/+1)
* another bad PDF sha1 (Bryan Newbold, 2020-08-04; 1 file, -0/+1)
* another PDF sha1hex (Bryan Newbold, 2020-07-27; 1 file, -0/+1)
* yet another 'bad' PDF sha1hex (Bryan Newbold, 2020-07-27; 1 file, -0/+1)
* use new SPNv2 'skip_first_archive' param (Bryan Newbold, 2020-07-22; 1 file, -0/+1)
  For speed and efficiency.
* add more slow PDF hashes (Bryan Newbold, 2020-07-05; 1 file, -0/+2)
* add another bad PDF sha1hex (Bryan Newbold, 2020-07-02; 1 file, -0/+1)
* another bad PDF SHA-1 (Bryan Newbold, 2020-06-30; 1 file, -0/+1)
* hack to unblock thumbnail processing pipeline (Bryan Newbold, 2020-06-29; 1 file, -0/+16)
  Some PDFs taking 10+ minutes to process, causing kafka exceptions and consumer churn. Not sure why kafka json pusher timeouts are not catching these.
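Many of the entries above add known-bad SHA-1 hashes to a skip list so that PDFs observed to hang or crash the extractor never reach poppler. A minimal sketch of that pattern; the set contents and function name are hypothetical, not sandcrawler's actual identifiers:

```python
# Hypothetical sketch: skip known-problematic PDFs by SHA-1 before
# handing them to the extractor. Set contents are illustrative.
import hashlib

BAD_PDF_SHA1HEX = {
    # PDFs observed to hang or crash extraction in production
    "0000000000000000000000000000000000000000",
}

def should_skip_pdf(blob: bytes) -> bool:
    """Return True if this PDF's SHA-1 is on the known-bad list."""
    sha1hex = hashlib.sha1(blob).hexdigest()
    return sha1hex in BAD_PDF_SHA1HEX
```

Printing the SHA-1 before extraction (as in the 2020-08-05 commit) makes it easy to identify which hash to add when a worker wedges in production.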
* customize timeout per worker; 120sec for pdf-extract (Bryan Newbold, 2020-06-29; 3 files, -2/+4)
  This is a stab-in-the-dark attempt to resolve long timeouts with this worker in prod.
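One common way to enforce a per-worker processing deadline like the 120-second pdf-extract limit is a SIGALRM guard. This is a sketch of the general mechanism (Unix-only, and an assumption about the approach, not the actual worker code):

```python
# Hypothetical sketch of a per-record processing timeout via SIGALRM.
# Unix-only; names are illustrative.
import signal

class ProcessTimeoutError(Exception):
    pass

def run_with_timeout(func, arg, timeout_seconds=120):
    def _handler(signum, frame):
        raise ProcessTimeoutError(f"processing exceeded {timeout_seconds}s")
    old_handler = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(timeout_seconds)
    try:
        return func(arg)
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, old_handler)
```

A guard like this fires even when the slow work is inside a C extension call that Kafka-level consumer timeouts cannot interrupt, which may be why the pusher timeouts mentioned above were not catching stuck PDFs.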
* handle empty fetched blob (Bryan Newbold, 2020-06-27; 1 file, -1/+6)
* CDX KeyError as WaybackError from fetch worker (Bryan Newbold, 2020-06-26; 1 file, -1/+1)
* handle None 'metadata' field correctly (Bryan Newbold, 2020-06-26; 1 file, -1/+1)
* handle non-success case of parsing extract from JSON/dict (Bryan Newbold, 2020-06-26; 1 file, -1/+1)
* report revisit non-200 as a WaybackError (Bryan Newbold, 2020-06-26; 1 file, -7/+7)
* Revert "simpler handling of null PDF text pages" (Bryan Newbold, 2020-06-25; 1 file, -4/+11)
  This reverts commit 254f24ad6566c9d4b5814868911b604802847b58. Attribute was actually internal to text() call, not a None page.
* simpler handling of null PDF text pages (Bryan Newbold, 2020-06-25; 1 file, -11/+4)
* pdfextract: AttributeError with text extraction (Bryan Newbold, 2020-06-25; 1 file, -4/+12)
* catch UnicodeDecodeError in pdfextract (Bryan Newbold, 2020-06-25; 1 file, -1/+10)
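The last few commits harden per-page text extraction against AttributeError (raised from inside the text() call, per the revert note) and UnicodeDecodeError. A hedged sketch of that defensive pattern, with `extract_page_text` standing in for a poppler page's text() call:

```python
# Hypothetical sketch of tolerating per-page text-extraction failures.
# extract_page_text is a stand-in for a poppler page.text() call.
def safe_page_text(extract_page_text) -> str:
    try:
        return extract_page_text()
    except (AttributeError, UnicodeDecodeError):
        # some PDFs yield undecodable bytes, or pages whose text layer
        # blows up inside the extraction call; treat those pages as empty
        return ""
```

Catching at page granularity means one broken page degrades to an empty string instead of failing the whole document.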
* don't nest generic fetch errors under pdf_trio (Bryan Newbold, 2020-06-25; 1 file, -12/+6)
  This came from sloppy refactoring (and missing test coverage).
* pdfextract: handle too-large fulltext (Bryan Newbold, 2020-06-25; 1 file, -0/+17)
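A "too-large fulltext" guard typically clamps the extracted text to a byte budget and reports a distinct status rather than letting an oversized blob break downstream storage. The limit and names below are illustrative assumptions, not sandcrawler's actual values:

```python
# Hypothetical sketch: clamp extracted fulltext to a byte budget.
# The 1 MB limit is illustrative.
MAX_FULLTEXT_BYTES = 1_000_000

def clamp_fulltext(text: str):
    encoded = text.encode("utf-8")
    if len(encoded) <= MAX_FULLTEXT_BYTES:
        return text, "success"
    # cut at the byte budget, dropping any trailing partial UTF-8 sequence
    truncated = encoded[:MAX_FULLTEXT_BYTES].decode("utf-8", "ignore")
    return truncated, "text-too-large"
```

Truncating on the encoded bytes (not character count) matters when the sink enforces a byte-size limit, since multi-byte UTF-8 characters make the two differ.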
* another bad/non PDF test; catch correct error (Bryan Newbold, 2020-06-25; 2 files, -1/+6)
  This test doesn't actually catch the error. I'm not sure why type checks don't discover the "LockedDocumentError not part of poppler" issue though.
* pdfextract: catch poppler.LockedDocumentError (Bryan Newbold, 2020-06-25; 1 file, -1/+1)
* pdfextract support in ingest worker (Bryan Newbold, 2020-06-25; 3 files, -1/+66)
* poppler: correct RGBA buffer endian-ness (Bryan Newbold, 2020-06-25; 2 files, -2/+2)
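The endian-ness fix plausibly concerns the fact that poppler/cairo render into native-endian 32-bit ARGB buffers, which on little-endian hosts means B,G,R,A byte order in memory, while RGBA consumers expect R,G,B,A. A pure-Python sketch of the channel swap (an assumption about the bug, not the repo's actual fix, which likely just tells PIL the raw mode is "BGRA"):

```python
# Hypothetical sketch: swap blue and red channels of a 32-bit-per-pixel
# buffer, converting little-endian cairo "ARGB32" (BGRA in memory) to RGBA.
def bgra_to_rgba(data: bytes) -> bytes:
    out = bytearray(data)
    # every 4th byte starting at 0 is blue, at 2 is red; swap them
    out[0::4], out[2::4] = data[2::4], data[0::4]
    return bytes(out)
```

With Pillow, the same conversion is available without copying loops via `Image.frombuffer("RGBA", size, data, "raw", "BGRA", 0, 1)`.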
* pdfextract_tool fixes from prod usage (Bryan Newbold, 2020-06-25; 2 files, -3/+6)
* fix tests for page0_height/width (Bryan Newbold, 2020-06-25; 1 file, -2/+2)
* pdfextract: fix pdf_extra key names (Bryan Newbold, 2020-06-25; 1 file, -2/+2)
* ensure pdf_meta isn't passed an empty dict() (Bryan Newbold, 2020-06-25; 1 file, -1/+4)
* args.kafka_env refactor didn't happen (yet) (Bryan Newbold, 2020-06-25; 1 file, -2/+2)
* s3-only mode persist workers use different consumer group (Bryan Newbold, 2020-06-25; 1 file, -2/+8)
* changes from prod (Bryan Newbold, 2020-06-25; 2 files, -4/+18)
* sandcrawler_worker: remove duplicate run_pdf_extract() (Bryan Newbold, 2020-06-25; 1 file, -29/+0)
* pdfextract worker (Bryan Newbold, 2020-06-25; 1 file, -1/+34)
* pdfextract: don't compress thumbnail output (Bryan Newbold, 2020-06-25; 1 file, -1/+1)
* pipenv: python-poppler 0.2.1 (Bryan Newbold, 2020-06-25; 2 files, -49/+51)
* fixes and tweaks from testing locally (Bryan Newbold, 2020-06-17; 6 files, -18/+134)
* fixes to pdfextract_tool (Bryan Newbold, 2020-06-17; 1 file, -12/+8)
* tweak kafka topic names and seaweedfs layout (Bryan Newbold, 2020-06-17; 2 files, -10/+12)
* make process_pdf() more robust to parse errors (Bryan Newbold, 2020-06-17; 1 file, -5/+29)
* note about text layout with pdf extraction (Bryan Newbold, 2020-06-17; 1 file, -0/+8)
* lint fixes (Bryan Newbold, 2020-06-17; 4 files, -7/+9)
* rename pdf tools to pdfextract (Bryan Newbold, 2020-06-17; 3 files, -0/+0)
* coverage-html makefile target (Bryan Newbold, 2020-06-17; 1 file, -0/+3)
* ignore PIL deprecation warnings (Bryan Newbold, 2020-06-17; 1 file, -0/+1)
* partial test coverage of pdf extract worker (Bryan Newbold, 2020-06-17; 2 files, -6/+70)