Commit message | Author | Age | Files | Lines
---|---|---|---|---
...
* handle UnboundLocalError in HTML parsing | Bryan Newbold | 2020-05-19 | 1 | -1/+4
* first iteration of oai2ingestrequest script | Bryan Newbold | 2020-05-05 | 1 | -0/+137
* summarize datacite and MAG 2020 crawls | Bryan Newbold | 2020-05-05 | 2 | -0/+200
* update sandcrawler stats for early may | Bryan Newbold | 2020-05-04 | 1 | -0/+418
* hotfix for html meta extract codepath | Bryan Newbold | 2020-05-03 | 1 | -1/+1
  Didn't test last commit before pushing; bad Bryan!
* ingest: handle partial citation_pdf_url tag | Bryan Newbold | 2020-05-03 | 1 | -0/+3
  Eg: https://www.cureus.com/articles/29935-a-nomogram-for-the-rapid-prediction-of-hematocrit-following-blood-loss-and-fluid-shifts-in-neonates-infants-and-adults
  Has: <meta name="citation_pdf_url"/>
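A bare `<meta name="citation_pdf_url"/>` has no `content` attribute at all, so extraction code has to guard before dereferencing it. A minimal sketch of that defensive pattern using only the stdlib parser (the class and attribute names here are illustrative, not sandcrawler's actual HTML-extraction code):

```python
from html.parser import HTMLParser

class PdfUrlExtractor(HTMLParser):
    """Collect a citation_pdf_url meta tag, tolerating a bare/empty one."""

    def __init__(self):
        super().__init__()
        self.pdf_url = None

    def handle_startendtag(self, tag, attrs):
        # the self-closing <meta .../> form ends up here
        self.handle_starttag(tag, attrs)

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attr = dict(attrs)
        if attr.get("name") == "citation_pdf_url":
            # a partial <meta name="citation_pdf_url"/> has no content
            # attribute; guard instead of crashing on None
            url = attr.get("content")
            if url and url.strip():
                self.pdf_url = url.strip()
```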
* workers: add missing want() dataflow path | Bryan Newbold | 2020-04-30 | 1 | -0/+9
* ingest: don't 'want' non-PDF ingest | Bryan Newbold | 2020-04-30 | 1 | -0/+5
* Merge branch 'bnewbold-worker-timeout' into 'master' | bnewbold | 2020-04-29 | 3 | -2/+58
  sandcrawler worker timeouts
  See merge request webgroup/sandcrawler!27
  * timeouts: don't push through None error messages | Bryan Newbold | 2020-04-29 | 1 | -2/+2
  * timeout message implementation for GROBID and ingest workers | Bryan Newbold | 2020-04-27 | 2 | -0/+18
  * worker timeout wrapper, and use for kafka | Bryan Newbold | 2020-04-27 | 1 | -2/+40
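The worker timeout wrapper referenced in this branch can be sketched with `SIGALRM`. The function and exception names below are assumptions for illustration, not sandcrawler's actual API, and the approach is Unix-only and main-thread-only:

```python
import signal

class WorkerTimeoutError(Exception):
    """Raised when processing a single record exceeds the time budget."""

def run_with_timeout(func, record, seconds=300):
    """Run func(record), raising WorkerTimeoutError after `seconds`.

    SIGALRM-based, so this only works on Unix and from the main thread;
    names here are illustrative, not sandcrawler's actual wrapper.
    """
    def _handler(signum, frame):
        raise WorkerTimeoutError(f"timed out after {seconds} seconds")

    previous = signal.signal(signal.SIGALRM, _handler)
    signal.alarm(seconds)
    try:
        return func(record)
    finally:
        signal.alarm(0)  # cancel any pending alarm
        signal.signal(signal.SIGALRM, previous)
```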
* NSQ for job task manager/scheduler | Bryan Newbold | 2020-04-28 | 1 | -0/+79
* update MAG crawl notes | Bryan Newbold | 2020-04-28 | 1 | -0/+71
* kafka: more rebalance notes | Bryan Newbold | 2020-04-24 | 1 | -1/+14
* CI: add missing libsnappy-dev and libsodium-dev system packages | Bryan Newbold | 2020-04-24 | 1 | -1/+1
  Whack-a-mole here...
* kafka: how to rebalance partitions between brokers | Bryan Newbold | 2020-04-24 | 1 | -0/+29
* CI: add deadsnakes and python3.7 | Bryan Newbold | 2020-04-21 | 1 | -2/+3
* fix KeyError in HTML PDF URL extraction | Bryan Newbold | 2020-04-17 | 1 | -1/+1
* persist: only GROBID updates file_meta, not file-result | Bryan Newbold | 2020-04-16 | 1 | -1/+1
  The hope here is to reduce deadlocks in production (on aitio). As context, we are only doing "updates" until the entire file_meta table is filled in with full metadata anyway; updates are wasteful of resources, and for most inserts we have seen the file before, so we should be doing "DO NOTHING" if the SHA1 is already in the table.
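The "DO NOTHING if the SHA1 is already in the table" pattern described above looks roughly like this. The sketch uses an in-memory SQLite database (3.24+) so it is runnable; production sandcrawler targets PostgreSQL, where the `ON CONFLICT ... DO NOTHING` syntax is the same, and the `file_meta`/`sha1hex` names follow the commit description rather than the actual schema:

```python
import sqlite3

# In-memory SQLite stand-in for the production PostgreSQL table; the
# file_meta/sha1hex names follow the commit description above.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE file_meta (sha1hex TEXT PRIMARY KEY, mimetype TEXT)")

def persist_file_meta(sha1hex, mimetype):
    # DO NOTHING when the SHA-1 already exists, instead of a wasteful UPDATE
    db.execute(
        "INSERT INTO file_meta (sha1hex, mimetype) VALUES (?, ?) "
        "ON CONFLICT (sha1hex) DO NOTHING",
        (sha1hex, mimetype),
    )

persist_file_meta("da39a3ee5e6b", "application/pdf")
persist_file_meta("da39a3ee5e6b", "text/html")  # duplicate: silently skipped
```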
* batch/multiprocess for ZipfilePusher | Bryan Newbold | 2020-04-16 | 2 | -5/+26
* update README for python3.7 | Bryan Newbold | 2020-04-15 | 1 | -1/+1
* pipenv: update to python3.7 | Bryan Newbold | 2020-04-15 | 2 | -197/+202
* COVID-19 chinese paper ingest | Bryan Newbold | 2020-04-15 | 2 | -0/+156
* 2020-04 unpaywall ingest (in progress) | Bryan Newbold | 2020-04-15 | 1 | -0/+63
* 2020-04 datacite ingest (in progress) | Bryan Newbold | 2020-04-15 | 1 | -0/+18
* partial notes on S2 crawl ingest | Bryan Newbold | 2020-04-15 | 1 | -0/+35
* ingest: quick hack to capture CNKI outlinks | Bryan Newbold | 2020-04-13 | 1 | -2/+9
* html: attempt at CNKI href extraction | Bryan Newbold | 2020-04-13 | 1 | -0/+11
* MAG import notes | Bryan Newbold | 2020-04-13 | 1 | -0/+13
* unpaywall2ingestrequest: canonicalize URL | Bryan Newbold | 2020-04-07 | 1 | -1/+9
* MAG 2020-03-04 ingest notes to date | Bryan Newbold | 2020-04-06 | 1 | -0/+395
* more monitoring queries | Bryan Newbold | 2020-03-30 | 1 | -5/+29
* unpaywall ingest notes update | Bryan Newbold | 2020-03-30 | 1 | -0/+138
* ia: set User-Agent for replay fetch from wayback | Bryan Newbold | 2020-03-29 | 1 | -0/+5
  Did this for all the other "client" helpers, but forgot to do it for wayback replay. Was starting to get "445" errors from wayback.
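Setting an explicit User-Agent on a fetch helper is a one-liner. This stdlib sketch uses a hypothetical `USER_AGENT` constant and `replay_request` helper, not sandcrawler's actual client code:

```python
import urllib.request

# Hypothetical UA string; the point is that replay fetches identify
# themselves the same way the other client helpers already do.
USER_AGENT = "sandcrawler/0.1 (Internet Archive web group)"

def replay_request(url):
    """Build a wayback replay request with an explicit User-Agent."""
    return urllib.request.Request(url, headers={"User-Agent": USER_AGENT})
```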
* ingest: block another large domain (and DOI prefix) | Bryan Newbold | 2020-03-27 | 1 | -0/+2
* ingest: better spn2 pending error code | Bryan Newbold | 2020-03-27 | 1 | -0/+2
* ingest: eurosurveillance PDF parser | Bryan Newbold | 2020-03-25 | 1 | -0/+11
* ia: more conservative use of clean_url() | Bryan Newbold | 2020-03-24 | 1 | -3/+5
  Fixes AttributeError: 'NoneType' object has no attribute 'strip'. Seen in production on the lookup_resource code path.
* ingest: clean_url() in more places | Bryan Newbold | 2020-03-23 | 3 | -1/+6
  Some 'cdx-error' results were due to URLs with ':' after the hostname or trailing newline ("\n") characters in the URL. This attempts to work around this category of error.
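The category of fix described here (trailing newlines, an empty port colon after the hostname, plus the `None` case from the earlier clean_url() commit) can be approximated with the stdlib. This `clean_url` is an illustrative sketch, not the canonicalization routine sandcrawler actually calls:

```python
from urllib.parse import urlsplit, urlunsplit

def clean_url(raw_url):
    """Normalize a URL before CDX lookup (illustrative sketch).

    Handles None input (the earlier AttributeError fix), strips stray
    whitespace/newlines, and drops an empty port colon after the
    hostname ("http://example.com:/x" -> "http://example.com/x").
    """
    if not raw_url:
        return None
    url = raw_url.strip()
    parts = urlsplit(url)
    netloc = parts.netloc
    if netloc.endswith(":"):  # empty port, e.g. "example.com:"
        netloc = netloc[:-1]
    return urlunsplit((parts.scheme, netloc, parts.path, parts.query, parts.fragment))
```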
* Merge branch 'martin-pubmed-ftp-topic-docs' into 'master' | bnewbold | 2020-03-20 | 1 | -1/+9
  topics: add pubmed ftp topic
  See merge request webgroup/sandcrawler!26
  * topics: add pubmed ftp topic | Martin Czygan | 2020-03-12 | 1 | -1/+9
    PubmedFTPWorker replaced OAI recently. This documents the new topic.
* skip-db option also for worker | Bryan Newbold | 2020-03-19 | 1 | -0/+4
* persist grobid: add option to skip S3 upload | Bryan Newbold | 2020-03-19 | 2 | -7/+14
  Motivation for this is that the current S3 target (minio) is overloaded, with too many files on a single partition (80 million+). Going to look into seaweedfs and other options, but for now stopping minio persist. Data is all stored in kafka anyways.
* ingest: log every URL (from ia code side) | Bryan Newbold | 2020-03-18 | 1 | -0/+1
* implement (unused) force_get flag for SPN2 | Bryan Newbold | 2020-03-18 | 2 | -4/+19
  I hoped this feature would make it possible to crawl journals.lww.com PDFs, because the token URLs work with `wget`, but it still doesn't seem to work. Maybe because of user agent? Anyways, this feature might be useful for crawling efficiency, so adding to master.
* unpaywall large ingest notes | Bryan Newbold | 2020-03-17 | 1 | -0/+10
* make monitoring commands ingest_request local, not ingest_file_result | Bryan Newbold | 2020-03-17 | 1 | -2/+2
* work around local redirect (resource.location) | Bryan Newbold | 2020-03-17 | 1 | -1/+6
  Some redirects are host-local. This patch crudely detects this (full-path redirects starting with "/" only), and appends the URL to the host of the original URL.
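The crude detection described above amounts to joining path-only redirects against the original URL's host; a sketch with a hypothetical helper name:

```python
from urllib.parse import urljoin

def resolve_location(original_url, location):
    """Resolve a possibly host-local redirect Location (illustrative helper).

    Full-path redirects start with "/" and are joined against the host of
    the original URL; anything else is passed through as an absolute URL.
    Protocol-relative "//host/path" values are excluded from the join.
    """
    if location.startswith("/") and not location.startswith("//"):
        return urljoin(original_url, location)
    return location
```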
* Merge branch 'martin-abstract-class-process' into 'master' | bnewbold | 2020-03-12 | 1 | -0/+6
  workers: add explicit process to base class
  See merge request webgroup/sandcrawler!25