Using the 2021-06-07 upstream MAG snapshot to run a crawl and do some
re-ingest. Also want to re-ingest some old/failed ingests, now that the
pipeline and code have improved.

Ran the munging from the `scratch:ingest/mag` notes first, which yielded 22.5M
PDF URLs.
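For reference, each of those lines is a sandcrawler ingest request, one JSON
object per line. A minimal Python sketch of the shape of one record; the field
values, and the `link_source_id` and `ext_ids` keys, are illustrative
assumptions rather than values from the actual snapshot:

    import json

    # Hypothetical example record; the munging emits one object like this
    # per MAG-derived PDF URL
    request = {
        "ingest_type": "pdf",
        "link_source": "mag",
        "link_source_id": "2050000000",  # made-up MAG paper ID
        "base_url": "https://example.com/fulltext.pdf",
        "ext_ids": {"doi": "10.1234/example"},
    }
    print(json.dumps(request, sort_keys=True))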
## Persist Ingest Requests
First a test run over a 1k-line sample, then the full batch:

    zcat /srv/sandcrawler/tasks/ingest_requests_mag-2021-06-07.json.gz | head -n1000 | pv -l | ./persist_tool.py ingest-request -
    => Worker: Counter({'total': 1000, 'insert-requests': 276, 'update-requests': 0})
    => JSON lines pushed: Counter({'total': 1000, 'pushed': 1000})

    zcat /srv/sandcrawler/tasks/ingest_requests_mag-2021-06-07.json.gz | pv -l | ./persist_tool.py ingest-request -
    => 22.5M 0:46:00 [8.16k/s]
    => Worker: Counter({'total': 22527585, 'insert-requests': 8686315, 'update-requests': 0})
    => JSON lines pushed: Counter({'total': 22527585, 'pushed': 22527585})

Roughly 8.6 million of the 22.5 million requests were new URLs.
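The gap between `total` and `insert-requests` is deduplication against
requests already in the database. A minimal sketch of that upsert pattern,
assuming psycopg2 and the `(ingest_type, base_url)` uniqueness implied by the
joins below; the real `persist_tool.py` batches rows and handles more columns:

    import psycopg2

    conn = psycopg2.connect("dbname=sandcrawler")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        cur.execute(
            """
            INSERT INTO ingest_request (ingest_type, base_url, link_source, link_source_id)
            VALUES (%s, %s, %s, %s)
            ON CONFLICT DO NOTHING
            """,
            ("pdf", "https://example.com/fulltext.pdf", "mag", "2050000000"),
        )
        # rowcount is 1 for a fresh insert and 0 for an already-seen request;
        # summing it across the batch yields the 'insert-requests' counter
        inserted = cur.rowcount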
## Pre-Crawl Status Counts
Status of combined old and new requests, with some large domains removed (a
NULL status means the request has never been attempted):

    SELECT ingest_file_result.status, COUNT(*)
    FROM ingest_request
    LEFT JOIN ingest_file_result
        ON ingest_file_result.ingest_type = ingest_request.ingest_type
        AND ingest_file_result.base_url = ingest_request.base_url
    WHERE
        ingest_request.ingest_type = 'pdf'
        AND ingest_request.link_source = 'mag'
        AND ingest_request.base_url NOT LIKE '%journals.sagepub.com%'
        AND ingest_request.base_url NOT LIKE '%pubs.acs.org%'
        AND ingest_request.base_url NOT LIKE '%ahajournals.org%'
        AND ingest_request.base_url NOT LIKE '%www.journal.csj.jp%'
        AND ingest_request.base_url NOT LIKE '%aip.scitation.org%'
        AND ingest_request.base_url NOT LIKE '%academic.oup.com%'
        AND ingest_request.base_url NOT LIKE '%tandfonline.com%'
        AND ingest_request.base_url NOT LIKE '%researchgate.net%'
        AND ingest_request.base_url NOT LIKE '%muse.jhu.edu%'
        AND ingest_request.base_url NOT LIKE '%omicsonline.org%'
        AND ingest_request.base_url NOT LIKE '%link.springer.com%'
        AND ingest_request.base_url NOT LIKE '%ieeexplore.ieee.org%'
        -- AND ingest_request.created > '2021-06-01'
    GROUP BY status
    ORDER BY COUNT DESC
    LIMIT 20;

                status             |  count
    -------------------------------+----------
     success                       | 26123975
                                   |  6664846
     no-pdf-link                   |  1859908
     redirect-loop                 |  1532405
     no-capture                    |  1199126
     link-loop                     |  1157010
     terminal-bad-status           |   832362
     gateway-timeout               |   202158
     spn2-cdx-lookup-failure       |    81406
     wrong-mimetype                |    69087
     invalid-host-resolution       |    37262
     wayback-error                 |    21340
     petabox-error                 |    11237
     null-body                     |     9414
     wayback-content-error         |     2199
     cdx-error                     |     1893
     spn2-error                    |     1741
     spn2-error:job-failed         |      971
     blocked-cookie                |      902
     spn2-error:invalid-url-syntax |      336
    (20 rows)
And just the new URLs (note that the domain filters shouldn't be required
here, but are kept for consistency):

    SELECT ingest_file_result.status, COUNT(*)
    FROM ingest_request
    LEFT JOIN ingest_file_result
        ON ingest_file_result.ingest_type = ingest_request.ingest_type
        AND ingest_file_result.base_url = ingest_request.base_url
    WHERE
        ingest_request.ingest_type = 'pdf'
        AND ingest_request.link_source = 'mag'
        AND ingest_request.base_url NOT LIKE '%journals.sagepub.com%'
        AND ingest_request.base_url NOT LIKE '%pubs.acs.org%'
        AND ingest_request.base_url NOT LIKE '%ahajournals.org%'
        AND ingest_request.base_url NOT LIKE '%www.journal.csj.jp%'
        AND ingest_request.base_url NOT LIKE '%aip.scitation.org%'
        AND ingest_request.base_url NOT LIKE '%academic.oup.com%'
        AND ingest_request.base_url NOT LIKE '%tandfonline.com%'
        AND ingest_request.base_url NOT LIKE '%researchgate.net%'
        AND ingest_request.base_url NOT LIKE '%muse.jhu.edu%'
        AND ingest_request.base_url NOT LIKE '%omicsonline.org%'
        AND ingest_request.base_url NOT LIKE '%link.springer.com%'
        AND ingest_request.base_url NOT LIKE '%ieeexplore.ieee.org%'
        AND ingest_request.created > '2021-06-01'
    GROUP BY status
    ORDER BY COUNT DESC
    LIMIT 20;

             status          |  count
    -------------------------+---------
                             | 6664780
     success                 | 1957844
     redirect-loop           |   23357
     terminal-bad-status     |    9385
     no-pdf-link             |    8315
     no-capture              |    6892
     link-loop               |    4517
     wrong-mimetype          |    3864
     cdx-error               |    1749
     blocked-cookie          |     842
     null-body               |     747
     wayback-error           |     688
     wayback-content-error   |     570
     gateway-timeout         |     367
     petabox-error           |     340
     spn2-cdx-lookup-failure |     150
     read-timeout            |     122
     not-found               |     119
     invalid-host-resolution |      63
     spn2-error              |      23
    (20 rows)
## Dump Initial Bulk Ingest Requests
Note that this selection is all-time, not just recent requests, and will
re-process a lot of "no-pdf-link" results:

    COPY (
        SELECT row_to_json(ingest_request.*) FROM ingest_request
        LEFT JOIN ingest_file_result
            ON ingest_file_result.ingest_type = ingest_request.ingest_type
            AND ingest_file_result.base_url = ingest_request.base_url
        WHERE
            ingest_request.ingest_type = 'pdf'
            AND ingest_request.link_source = 'mag'
            AND (
                ingest_file_result.status IS NULL
                OR ingest_file_result.status = 'no-pdf-link'
                OR ingest_file_result.status = 'cdx-error'
            )
            AND ingest_request.base_url NOT LIKE '%journals.sagepub.com%'
            AND ingest_request.base_url NOT LIKE '%pubs.acs.org%'
            AND ingest_request.base_url NOT LIKE '%ahajournals.org%'
            AND ingest_request.base_url NOT LIKE '%www.journal.csj.jp%'
            AND ingest_request.base_url NOT LIKE '%aip.scitation.org%'
            AND ingest_request.base_url NOT LIKE '%academic.oup.com%'
            AND ingest_request.base_url NOT LIKE '%tandfonline.com%'
            AND ingest_request.base_url NOT LIKE '%researchgate.net%'
            AND ingest_request.base_url NOT LIKE '%muse.jhu.edu%'
            AND ingest_request.base_url NOT LIKE '%omicsonline.org%'
            AND ingest_request.base_url NOT LIKE '%link.springer.com%'
            AND ingest_request.base_url NOT LIKE '%ieeexplore.ieee.org%'
    ) TO '/srv/sandcrawler/tasks/mag_ingest_request_2021-08-03.rows.json';
    => COPY 8526647
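That row count lines up exactly with the pre-crawl status table above:
6,664,846 (NULL) + 1,859,908 (no-pdf-link) + 1,893 (cdx-error) = 8,526,647.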
Transform to ingest requests:

    ./scripts/ingestrequest_row2json.py /srv/sandcrawler/tasks/mag_ingest_request_2021-08-03.rows.json | pv -l | shuf > /srv/sandcrawler/tasks/mag_ingest_request_2021-08-03.ingest_request.json
    => 8.53M 0:03:40
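`ingestrequest_row2json.py` re-serializes each database row as a plain ingest
request. A rough sketch of that kind of transform; which database-side columns
get dropped is an assumption here, not taken from the actual script:

    import json
    import sys

    def transform(row: dict) -> dict:
        # Assumed: strip DB bookkeeping fields that ingest workers don't need
        row.pop("created", None)
        return row

    for line in sys.stdin:
        print(json.dumps(transform(json.loads(line)), sort_keys=True))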
Enqueue the whole batch (`rg -v "\\\\"` skips the handful of lines containing
backslashes, presumably to avoid JSON escaping problems; `jq . -c` re-validates
and compacts each line):

    cat /srv/sandcrawler/tasks/mag_ingest_request_2021-08-03.ingest_request.json | rg -v "\\\\" | jq . -c | kafkacat -P -b wbgrp-svc263.us.archive.org -t sandcrawler-prod.ingest-file-requests-bulk -p -1
    => DONE
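For the record, a rough Python equivalent of that kafkacat invocation, using
confluent-kafka (broker port and producer tuning omitted; this is a sketch,
not the production tooling):

    import sys
    from confluent_kafka import Producer

    producer = Producer({"bootstrap.servers": "wbgrp-svc263.us.archive.org"})
    for line in sys.stdin:
        line = line.strip()
        if not line or "\\" in line:
            continue  # mirror the rg -v "\\\\" filter above
        producer.produce("sandcrawler-prod.ingest-file-requests-bulk", line.encode("utf-8"))
        producer.poll(0)  # serve delivery callbacks so the buffer drains
    producer.flush()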
Updated stats after running initial bulk ingest:

    SELECT ingest_file_result.status, COUNT(*)
    FROM ingest_request
    LEFT JOIN ingest_file_result
        ON ingest_file_result.ingest_type = ingest_request.ingest_type
        AND ingest_file_result.base_url = ingest_request.base_url
    WHERE
        ingest_request.ingest_type = 'pdf'
        AND ingest_request.link_source = 'mag'
        AND ingest_request.base_url NOT LIKE '%journals.sagepub.com%'
        AND ingest_request.base_url NOT LIKE '%pubs.acs.org%'
        AND ingest_request.base_url NOT LIKE '%ahajournals.org%'
        AND ingest_request.base_url NOT LIKE '%www.journal.csj.jp%'
        AND ingest_request.base_url NOT LIKE '%aip.scitation.org%'
        AND ingest_request.base_url NOT LIKE '%academic.oup.com%'
        AND ingest_request.base_url NOT LIKE '%tandfonline.com%'
        AND ingest_request.base_url NOT LIKE '%researchgate.net%'
        AND ingest_request.base_url NOT LIKE '%muse.jhu.edu%'
        AND ingest_request.base_url NOT LIKE '%omicsonline.org%'
        AND ingest_request.base_url NOT LIKE '%link.springer.com%'
        AND ingest_request.base_url NOT LIKE '%ieeexplore.ieee.org%'
        AND ingest_request.created > '2021-06-01'
    GROUP BY status
    ORDER BY COUNT DESC
    LIMIT 20;

             status          |  count
    -------------------------+---------
     success                 | 5184994
     no-capture              | 3284416
     redirect-loop           |   98685
     terminal-bad-status     |   28733
     link-loop               |   28518
     blocked-cookie          |   22338
     no-pdf-link             |   19073
     wrong-mimetype          |    9122
     null-body               |    2793
     wayback-error           |    2128
     wayback-content-error   |    1233
     cdx-error               |    1198
     petabox-error           |     617
     gateway-timeout         |     395
     not-found               |     130
     read-timeout            |     128
                             |     111
     invalid-host-resolution |      63
     spn2-cdx-lookup-failure |      24
     spn2-error              |      20
    (20 rows)
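Compared with the pre-ingest stats for the same `created > '2021-06-01'`
window, success grew from 1,957,844 to 5,184,994 (about 3.2 million more), and
the 3,284,416 remaining no-capture rows are the main targets for the crawl.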
## Generate Seedlist
For crawling, do a similar (but not identical) dump:

    COPY (
        SELECT row_to_json(t1.*)
        FROM (
            SELECT ingest_request.*, ingest_file_result as result
            FROM ingest_request
            LEFT JOIN ingest_file_result
                ON ingest_file_result.ingest_type = ingest_request.ingest_type
                AND ingest_file_result.base_url = ingest_request.base_url
            WHERE
                ingest_request.ingest_type = 'pdf'
                AND ingest_request.link_source = 'mag'
                AND (
                    ingest_file_result.status IS NULL
                    OR ingest_file_result.status = 'no-capture'
                    OR ingest_file_result.status = 'cdx-error'
                    OR ingest_file_result.status = 'wayback-error'
                    OR ingest_file_result.status = 'wayback-content-error'
                    OR ingest_file_result.status = 'petabox-error'
                    OR ingest_file_result.status = 'spn2-cdx-lookup-failure'
                )
                AND ingest_request.base_url NOT LIKE '%journals.sagepub.com%'
                AND ingest_request.base_url NOT LIKE '%pubs.acs.org%'
                AND ingest_request.base_url NOT LIKE '%ahajournals.org%'
                AND ingest_request.base_url NOT LIKE '%www.journal.csj.jp%'
                AND ingest_request.base_url NOT LIKE '%aip.scitation.org%'
                AND ingest_request.base_url NOT LIKE '%academic.oup.com%'
                AND ingest_request.base_url NOT LIKE '%tandfonline.com%'
                AND ingest_request.base_url NOT LIKE '%researchgate.net%'
                AND ingest_request.base_url NOT LIKE '%muse.jhu.edu%'
                AND ingest_request.base_url NOT LIKE '%omicsonline.org%'
                AND ingest_request.base_url NOT LIKE '%link.springer.com%'
                AND ingest_request.base_url NOT LIKE '%ieeexplore.ieee.org%'
        ) t1
    ) TO '/srv/sandcrawler/tasks/mag_ingest_request_2021-08-11.rows.json';
    => COPY 4599519

Note that this dump is all-time (no `created` filter), which is why it is
larger than the 3.28M no-capture count for just the new requests.
Prep ingest requests (for post-crawl use):

    ./scripts/ingestrequest_row2json.py /srv/sandcrawler/tasks/mag_ingest_request_2021-08-11.rows.json | pv -l > /srv/sandcrawler/tasks/mag_ingest_request_2021-08-11.ingest_request.json
    => 4.60M 0:02:55 [26.2k/s]
And actually dump seedlist(s):

    cat /srv/sandcrawler/tasks/mag_ingest_request_2021-08-11.rows.json | jq -r .base_url | sort -u -S 4G > /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.base_url.txt
    cat /srv/sandcrawler/tasks/mag_ingest_request_2021-08-11.rows.json | rg '"no-capture"' | jq -r .result.terminal_url | rg -v ^null$ | sort -u -S 4G > /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.terminal_url.txt
    cat /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.terminal_url.txt /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.base_url.txt | sort -u -S 4G > /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.combined.txt
    => DONE

    wc -l /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.*.txt
    4593238 /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.base_url.txt
    4632911 /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.combined.txt
    3294710 /srv/sandcrawler/tasks/mag_seedlist_2021-08-11.terminal_url.txt
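By inclusion-exclusion, the overlap between the two lists is 4,593,238 +
3,294,710 - 4,632,911 = 3,255,037: most no-capture terminal URLs are identical
to some base URL, and the combined seedlist only adds about 40k URLs over the
base URL list.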