Commit log (message, author, date, files changed, lines changed):
...
* store no-capture URLs in terminal_url (Bryan Newbold, 2020-10-12; 3 files, -3/+39)
* start 2020-10 ingest notes (Bryan Newbold, 2020-10-11; 1 file, -0/+42)
* update unpaywall 2020-04 notes (Bryan Newbold, 2020-10-11; 1 file, -0/+32)
* OAI-PMH ingest progress timestamps (Bryan Newbold, 2020-10-11; 1 file, -0/+13)
* another bad PDF sha1 (Bryan Newbold, 2020-10-11; 1 file, -0/+1)
* yet more bad sha1 PDFs to skip (Bryan Newbold, 2020-10-10; 1 file, -0/+20)
* notes on file_meta task (from august) (Bryan Newbold, 2020-10-01; 1 file, -0/+66)
* dump_file_meta helper (Bryan Newbold, 2020-10-01; 1 file, -0/+12)
* update README (public) (Bryan Newbold, 2020-10-01; 1 file, -17/+27)
* ingest: small bugfix to print pdfextract status on SUCCESS (Bryan Newbold, 2020-09-17; 1 file, -1/+1)
* more bad PDF sha1 (Bryan Newbold, 2020-09-17; 1 file, -0/+2)
* yet another broken PDF (sha1) (Bryan Newbold, 2020-09-16; 1 file, -0/+1)
* html: handle JMIR URL pattern (Bryan Newbold, 2020-09-15; 1 file, -0/+6)
* updated sandcrawler-db stats (Bryan Newbold, 2020-09-15; 2 files, -6/+346)
* skip citation_pdf_url if it is a link loop (Bryan Newbold, 2020-09-14; 1 file, -2/+8)

      This may help get around link-loop errors for a specific version of OJS.

* html parse: add another generic fulltext pattern (Bryan Newbold, 2020-09-14; 1 file, -1/+10)
* ingest: treat text/xml as XHTML in pdf ingest (Bryan Newbold, 2020-09-14; 1 file, -1/+1)
* OAI-PMH ingest notes (Bryan Newbold, 2020-09-03; 1 file, -0/+232)
* daily ingest notes (Bryan Newbold, 2020-09-02; 1 file, -0/+202)
* follow-up notes on processing 'holes' (Bryan Newbold, 2020-09-02; 1 file, -0/+19)
* unpaywall ingest follow-up (Bryan Newbold, 2020-09-02; 1 file, -0/+115)
* more bad SHA1 PDF (Bryan Newbold, 2020-09-02; 1 file, -0/+2)
* another bad PDF sha1 (Bryan Newbold, 2020-09-01; 1 file, -0/+1)
* another bad PDF sha1 (Bryan Newbold, 2020-08-24; 1 file, -0/+1)
* html: handle embed with mangled 'src' attribute (Bryan Newbold, 2020-08-24; 1 file, -1/+1)
* WIP weekly re-ingest script (Bryan Newbold, 2020-08-17; 2 files, -0/+97)
* another bad PDF sha1 (Bryan Newbold, 2020-08-17; 1 file, -0/+1)
* another bad PDF sha1 (Bryan Newbold, 2020-08-15; 1 file, -0/+1)
* more bad sha1 (Bryan Newbold, 2020-08-14; 1 file, -0/+1)
* yet more bad PDF sha1 (Bryan Newbold, 2020-08-14; 1 file, -0/+2)
* more bad SHA1 (Bryan Newbold, 2020-08-13; 1 file, -0/+2)
* yet another PDF sha1 (Bryan Newbold, 2020-08-12; 1 file, -0/+1)
* another bad sha1; maybe the last for this batch? (Bryan Newbold, 2020-08-12; 1 file, -0/+1)
* more bad sha1 (Bryan Newbold, 2020-08-11; 1 file, -0/+2)
* additional loginwall patterns (Bryan Newbold, 2020-08-11; 1 file, -0/+2)
* more SHA1 (Bryan Newbold, 2020-08-11; 1 file, -0/+2)
* Revert "ingest: reduce CDX retry_sleep to 3.0 sec (after SPN)" (Bryan Newbold, 2020-08-11; 1 file, -1/+1)

      This reverts commit 92bf9bc28ac0eacab2e06fa3b25b52f0882804c2. In
      practice, in prod, this resulted in much larger
      spn2-cdx-lookup-failure error rates.

* ingest: reduce CDX retry_sleep to 3.0 sec (after SPN) (Bryan Newbold, 2020-08-11; 1 file, -1/+1)

      As we are moving towards just retrying entire ingest requests, we
      should probably just make this zero. But until then we should give
      SPN CDX a small chance to sync before giving up. This change is
      expected to improve overall throughput.

* ingest: actually use force_get flag with SPN (Bryan Newbold, 2020-08-11; 1 file, -0/+13)

      The code path was there, but wasn't actually flagging in our most
      popular daily domains yet. Hopefully this will make a big difference
      in SPN throughput.

* check for simple URL patterns that are usually paywalls or loginwalls (Bryan Newbold, 2020-08-11; 2 files, -0/+29)
* ingest: check for URL blocklist and cookie URL patterns on every hop (Bryan Newbold, 2020-08-11; 1 file, -0/+13)
* refactor: force_get -> force_simple_get (Bryan Newbold, 2020-08-11; 2 files, -8/+8)

      For clarity. The SPNv2 API hasn't changed, just the
      variable/parameter name.

* html: extract eprints PDF url (eg, ub.uni-heidelberg.de) (Bryan Newbold, 2020-08-11; 1 file, -0/+2)
* extract PDF urls for e-periodica.ch (Bryan Newbold, 2020-08-10; 1 file, -0/+6)
* more bad sha1 (Bryan Newbold, 2020-08-10; 1 file, -0/+2)
* another bad PDF sha1 (Bryan Newbold, 2020-08-10; 1 file, -0/+1)
* add hkvalidate.perfdrive.com to domain blocklist (Bryan Newbold, 2020-08-08; 1 file, -0/+3)
* fix tests passing str as HTML (Bryan Newbold, 2020-08-08; 1 file, -3/+3)
* add more HTML extraction tricks (Bryan Newbold, 2020-08-08; 1 file, -2/+29)
* rwth-aachen.de HTML extract, and a generic URL guess method (Bryan Newbold, 2020-08-08; 1 file, -0/+15)