path: root/python/sandcrawler
Commit log, most recent first. Each entry: commit message (author, date, files changed, -lines/+lines)
* html: start on SQL table (Bryan Newbold, 2020-11-03, 1 file, -0/+44)
* html: syntax fixes; resolve relative URLs; extract more XML fulltext URLs (Bryan Newbold, 2020-10-30, 2 files, -8/+15)
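
Resolving relative URLs found in page markup against the page's own URL is the core of the "resolve relative URLs" change above; a minimal sketch using only the standard library (the function name is illustrative, not the project's actual helper):

    from urllib.parse import urljoin

    def resolve_fulltext_url(page_url: str, href: str) -> str:
        """Resolve a possibly-relative href (from an <a> or <meta> tag)
        against the URL of the HTML page it was extracted from."""
        return urljoin(page_url, href)

    # a relative XML fulltext link found on an article landing page
    assert resolve_fulltext_url(
        "https://example.com/article/view/123",
        "download/123/456",
    ) == "https://example.com/article/view/download/123/456"
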
* html: work around firstmonday DOCTYPE issue (Bryan Newbold, 2020-10-30, 1 file, -0/+3)
* cdx datetime parsing improvements (Bryan Newbold, 2020-10-30, 1 file, -0/+11)
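
CDX timestamps are 14-digit YYYYMMDDHHMMSS strings; below is a sketch of a tolerant parser in the spirit of the `parse_cdx_datetime` helper named a few entries down. The real implementation may differ (for example in timezone handling):

    import datetime
    from typing import Optional

    def parse_cdx_datetime(dt_str: str) -> Optional[datetime.datetime]:
        """Parse a 14-digit CDX timestamp (YYYYMMDDHHMMSS); return None on
        malformed input instead of raising."""
        if not dt_str:
            return None
        try:
            return datetime.datetime.strptime(dt_str, "%Y%m%d%H%M%S")
        except ValueError:
            return None

    print(parse_cdx_datetime("20201030120000"))  # 2020-10-30 12:00:00
    print(parse_cdx_datetime("2020-10-30"))      # None
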
* cdx: add support for 'closest' time parameter (Bryan Newbold, 2020-10-30, 1 file, -3/+9)
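
A 'closest' parameter asks the CDX server to sort captures by proximity to a target timestamp, which is useful when re-checking a capture made at a known time. A hedged sketch of such a lookup; the endpoint URL and the `sort=closest`/`closest=` parameter names are assumptions based on the commit subject, and the project's own CDX client wraps considerably more logic (retries, status checks) than this:

    from typing import List, Optional

    import requests

    CDX_API = "https://web.archive.org/cdx/search/cdx"  # assumed endpoint

    def cdx_lookup(url: str, closest: Optional[str] = None) -> List[list]:
        """Query the CDX API for captures of `url`. If `closest` (a 14-digit
        timestamp) is given, ask the server to sort by proximity to it."""
        params = {"url": url, "output": "json", "limit": "5"}
        if closest:
            # assumption: the endpoint supports closest-sorting
            params["sort"] = "closest"
            params["closest"] = closest
        resp = requests.get(CDX_API, params=params, timeout=30.0)
        resp.raise_for_status()
        rows = resp.json()
        # first row is the field header (urlkey, timestamp, original, ...)
        return rows[1:] if rows else []
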
* html: more ingest improvements (Bryan Newbold, 2020-10-30, 2 files, -18/+120)
* html ingest: improve data flow (Bryan Newbold, 2020-10-29, 1 file, -18/+41)
* better default CLI output (show usage) (Bryan Newbold, 2020-10-29, 1 file, -1/+1)
* misc: type annotations, fix parse_cdx_datetime (Bryan Newbold, 2020-10-29, 1 file, -14/+18)
* html: initial ingest implementation (Bryan Newbold, 2020-10-29, 1 file, -0/+193)
* html: more biblio selectors; resource extraction (Bryan Newbold, 2020-10-29, 1 file, -0/+102)
* HTML meta: more from online hunting/research (Bryan Newbold, 2020-10-27, 1 file, -3/+54)
* HTML metadata: fix type warnings (Bryan Newbold, 2020-10-27, 1 file, -1/+3)
* start HTML metadata extraction code (Bryan Newbold, 2020-10-27, 1 file, -0/+230)
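
The HTML metadata extraction is driven largely by `<meta>` tags such as the Highwire `citation_*` family. A minimal sketch of the idea, assuming a selectolax parse and showing only a small subset of the many selectors the real module checks:

    from typing import Dict, Optional

    from selectolax.parser import HTMLParser  # assumption: selectolax parser

    # a small subset of biblio selectors; the real code checks many more
    META_SELECTORS: Dict[str, str] = {
        "title": 'meta[name="citation_title"]',
        "doi": 'meta[name="citation_doi"]',
        "pdf_fulltext_url": 'meta[name="citation_pdf_url"]',
    }

    def extract_biblio_meta(html: str) -> Dict[str, Optional[str]]:
        """Pull basic bibliographic fields out of <meta> tags."""
        tree = HTMLParser(html)
        meta: Dict[str, Optional[str]] = {}
        for field, selector in META_SELECTORS.items():
            node = tree.css_first(selector)
            meta[field] = node.attributes.get("content") if node else None
        return meta
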
* ingest: skip JSTOR DOI prefixes (Bryan Newbold, 2020-10-23, 1 file, -0/+3)
* Revert "reimplement worker timeout with multiprocessing" (Bryan Newbold, 2020-10-22, 1 file, -17/+23)
  This reverts commit 031f51752e79dbdde47bbc95fe6b3600c9ec711a. Didn't actually work when testing; can't pickle the Kafka Producer object (and probably other objects).
* ingest: decrease CDX timeout retries again (Bryan Newbold, 2020-10-22, 1 file, -1/+1)
* reimplement worker timeout with multiprocessing (Bryan Newbold, 2020-10-22, 1 file, -23/+17)
* ingest: fix WaybackContentError typo (Bryan Newbold, 2020-10-21, 1 file, -1/+1)
* ingest: add a check for blocked-cookie before trying PDF url extraction (Bryan Newbold, 2020-10-21, 1 file, -0/+11)
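
One plausible shape for the blocked-cookie check: before attempting PDF URL extraction, bail out if the fetched landing page looks like a cookie/consent block page. The marker strings below are illustrative guesses rather than the project's actual list (though `/cookieAbsent` does appear further down in this log):

    # illustrative markers only; the real list likely differs
    COOKIE_BLOCK_URL_MARKERS = ["/cookieAbsent", "cookieSet=1"]

    def is_cookie_blocked(terminal_url: str, html_body: str) -> bool:
        """True if the capture appears to be a cookie/consent block page
        rather than real article content."""
        if any(m in terminal_url for m in COOKIE_BLOCK_URL_MARKERS):
            return True
        return "cookies must be enabled" in html_body.lower()

    # in the ingest path this would short-circuit with a 'blocked-cookie'
    # status instead of attempting PDF URL extraction
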
* differentiate wayback-error from wayback-content-error (Bryan Newbold, 2020-10-21, 7 files, -18/+22)
  The motivation here is to distinguish errors due to the content stored in wayback (eg, in WARCs) from operational errors (eg, wayback machine is down, or network failures/disruption).
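
Splitting the two statuses maps naturally onto two exception types; a sketch of the distinction (the `WaybackContentError` name appears in an entry above, and the docstrings paraphrase this commit body):

    class WaybackError(Exception):
        """Operational problem: wayback machine is down, network
        failures/disruption, timeouts. Usually transient and retryable."""

    class WaybackContentError(Exception):
        """Problem with the content stored in wayback (eg, in WARCs):
        truncated or otherwise broken records. Retrying will not help."""
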
* ingest: add a cdx-error slowdown delay (Bryan Newbold, 2020-10-19, 1 file, -0/+3)
* SPN CDX delay now seems reasonable; increase to 40sec to catch most (Bryan Newbold, 2020-10-19, 1 file, -1/+1)
* ingest: fix old_failure datetime (Bryan Newbold, 2020-10-19, 1 file, -1/+1)
* CDX: when retrying, do so every 3 seconds up to limit (Bryan Newbold, 2020-10-19, 1 file, -5/+9)
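
A sketch of the retry shape described by the last few entries: poll the CDX API every few seconds until a capture shows up or the time budget runs out. The 3-second and 40-second values come from the commit subjects above; the callable indirection and names are illustrative:

    import time
    from typing import Callable, Optional

    def retry_cdx_lookup(
        lookup: Callable[[], Optional[dict]],
        retry_sleep: float = 3.0,
        max_wait: float = 40.0,
    ) -> Optional[dict]:
        """Call `lookup` every `retry_sleep` seconds until it returns a row
        or roughly `max_wait` seconds have been spent waiting."""
        waited = 0.0
        while True:
            row = lookup()
            if row is not None:
                return row
            if waited >= max_wait:
                return None
            time.sleep(retry_sleep)
            waited += retry_sleep
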
* ingest: try SPNv2 for no-capture and old failures (Bryan Newbold, 2020-10-19, 1 file, -1/+5)
* SPN: more verbose status logging (Bryan Newbold, 2020-10-19, 1 file, -0/+4)
* ingest: disable soft404 and non-hit SPNv2 retries (Bryan Newbold, 2020-10-19, 1 file, -4/+5)
  This might have made sense at some point, but I had forgotten about this code path and it makes no sense now. Has been resulting in very many extraneous SPN requests.
* CDX: revert post-SPN CDX lookup retry to 10 seconds (Bryan Newbold, 2020-10-19, 1 file, -1/+1)
  Hoping to have many fewer SPN requests and issues, so willing to wait longer for each.
* ingest: catch wayback-fail-after-SPN as separate status (Bryan Newbold, 2020-10-19, 1 file, -4/+17)
* SPN: better log line when starting a request (Bryan Newbold, 2020-10-19, 1 file, -0/+1)
* SPN: look for non-200 CDX responses (Bryan Newbold, 2020-10-19, 1 file, -1/+1)
  Suspect that this has been the source of many `spn2-cdx-lookup-failure` statuses.
* SPN: better check for partial URLs returned (Bryan Newbold, 2020-10-19, 1 file, -2/+2)
* CDX fetch: more permissive fuzzy/normalization check (Bryan Newbold, 2020-10-19, 1 file, -3/+9)
  This might be the source of some `spn2-cdx-lookup-failure`. Wayback/CDX does this check via full-on SURT, with many more changes, and potentially we should be doing that here as well.
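
A simplified version of what a permissive fuzzy/normalization check between the requested URL and the URL the CDX API returned might look like; full SURT canonicalization, which the commit body points to, normalizes much more aggressively than this:

    from urllib.parse import urlsplit

    def fuzzy_match_url(left: str, right: str) -> bool:
        """Loose comparison between the URL we asked the CDX API about and
        the URL it returned: ignore scheme (http vs https), lowercase the
        host, and ignore a trailing-slash difference."""
        if left == right:
            return True
        l, r = urlsplit(left), urlsplit(right)
        l_norm = (l.netloc.lower(), l.path.rstrip("/") or "/", l.query)
        r_norm = (r.netloc.lower(), r.path.rstrip("/") or "/", r.query)
        return l_norm == r_norm

    assert fuzzy_match_url("http://Example.com/paper/", "https://example.com/paper")
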
* ingest: experimentally reduce CDX API retry delay (Bryan Newbold, 2020-10-17, 1 file, -1/+1)
  This code path is only working about 1/7 times in production. Going to try with a much shorter retry delay and see if we get no success with that. Considering also just disabling this attempt altogether and relying on retries after hours/days.
* ingest: handle cookieAbsent and partial SPNv2 URL response cases better (Bryan Newbold, 2020-10-17, 1 file, -0/+31)
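
For the partial-URL half of this fix, one approach is to treat an SPNv2 resource value that lacks a scheme as relative to the originally-requested URL; a hedged sketch, where the field name and handling are assumptions rather than the actual SPNv2 client code:

    from urllib.parse import urljoin

    def clean_spn2_resource_url(request_url: str, resource: str) -> str:
        """SPNv2 responses sometimes appear to contain partial resource URLs
        (missing scheme/host); resolve those against the requested URL."""
        if resource.startswith(("http://", "https://")):
            return resource
        return urljoin(request_url, resource)

    # the cookieAbsent half: a terminal URL ending in '/cookieAbsent' is a
    # cookie-block page, not a successful capture, and is treated as a failure
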
* and another sha1 (Bryan Newbold, 2020-10-13, 1 file, -0/+1)
* another day, another bad PDF sha1 (Bryan Newbold, 2020-10-13, 1 file, -0/+1)
* store no-capture URLs in terminal_url (Bryan Newbold, 2020-10-12, 2 files, -3/+3)
* another bad PDF sha1 (Bryan Newbold, 2020-10-11, 1 file, -0/+1)
* yet more bad sha1 PDFs to skip (Bryan Newbold, 2020-10-10, 1 file, -0/+20)
* ingest: small bugfix to print pdfextract status on SUCCESS (Bryan Newbold, 2020-09-17, 1 file, -1/+1)
* more bad PDF sha1 (Bryan Newbold, 2020-09-17, 1 file, -0/+2)
* yet another broken PDF (sha1) (Bryan Newbold, 2020-09-16, 1 file, -0/+1)
* html: handle JMIR URL pattern (Bryan Newbold, 2020-09-15, 1 file, -0/+6)
* skip citation_pdf_url if it is a link loop (Bryan Newbold, 2020-09-14, 1 file, -2/+8)
  This may help get around link-loop errors for a specific version of OJS.
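
A sketch of the link-loop guard: if `citation_pdf_url` just points back at the landing page itself (as some OJS versions apparently do), following it would re-ingest the same HTML forever, so it gets skipped. The normalization shown here is illustrative:

    from urllib.parse import urldefrag

    def is_link_loop(page_url: str, pdf_url: str) -> bool:
        """True if citation_pdf_url points back at the landing page itself."""
        def norm(u: str) -> str:
            return urldefrag(u)[0].rstrip("/")
        return norm(page_url) == norm(pdf_url)

    # e.g. citation_pdf_url set to the article page itself
    assert is_link_loop("https://example.com/article/view/10",
                        "https://example.com/article/view/10/")
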
* html parse: add another generic fulltext pattern (Bryan Newbold, 2020-09-14, 1 file, -1/+10)
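
A "generic fulltext pattern" here is a last-resort heuristic over anchors rather than publisher-specific markup; a sketch of the general shape, again assuming selectolax (the actual patterns in the parser differ):

    from typing import Optional

    from selectolax.parser import HTMLParser  # assumption: selectolax parser

    def find_generic_pdf_link(html: str) -> Optional[str]:
        """Return the href of the first anchor that looks like a PDF
        fulltext link, or None if nothing matches."""
        tree = HTMLParser(html)
        for node in tree.css("a"):
            href = (node.attributes.get("href") or "").strip()
            text = (node.text() or "").lower()
            if href.lower().endswith(".pdf") or "download pdf" in text:
                return href
        return None
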
* ingest: treat text/xml as XHTML in pdf ingest (Bryan Newbold, 2020-09-14, 1 file, -1/+1)
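
A sketch of the mimetype handling implied here: some platforms serve XHTML landing pages with a `text/xml` content-type, so normalizing that to XHTML lets the HTML link-extraction path run instead of rejecting the capture. The function below is illustrative, not the actual ingest code:

    def normalize_mimetype(mimetype: str) -> str:
        """Strip parameters, lowercase, and map text/xml to XHTML so that
        XML-flavored landing pages go through HTML extraction."""
        mimetype = mimetype.split(";")[0].strip().lower()
        if mimetype == "text/xml":
            return "application/xhtml+xml"
        return mimetype
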
* more bad SHA1 PDF (Bryan Newbold, 2020-09-02, 1 file, -0/+2)
* another bad PDF sha1 (Bryan Newbold, 2020-09-01, 1 file, -0/+1)