-rw-r--r--  CHANGELOG.md                        2
-rw-r--r--  TODO.md                             4
-rw-r--r--  extra/elasticsearch/README.md       2
-rw-r--r--  extra/journal_metadata/README.md    2
-rw-r--r--  extra/sitemap/README.md             4
-rw-r--r--  python/README_import.md             2
-rw-r--r--  rust/HACKING.md                     2
-rw-r--r--  rust/README.md                      2
-rw-r--r--  rust/TODO                           2
9 files changed, 11 insertions, 11 deletions
diff --git a/CHANGELOG.md b/CHANGELOG.md
index 057f1afe..f7a6aaa4 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -29,7 +29,7 @@ file, may be reversed in API responses compared to what was returned
previously. They should not match what was originally supplied when the entity
was created.
-In particular, this may cause broad discrepencies compared to historical bulk
+In particular, this may cause broad discrepancies compared to historical bulk
metadata exports. New bulk exports will be generated with the new ordering.
A number of content cleanups and changes are also taking place to the primary
diff --git a/TODO.md b/TODO.md
index 9538e7ed..d68236a9 100644
--- a/TODO.md
+++ b/TODO.md
@@ -92,7 +92,7 @@ Want to minimize edit counts, so will bundle a bunch of changes
- maybe better 'success' return message? eg, "success: true" flag
- idea: allow users to generate their own editgroup UUIDs, to reduce round
trips and "hanging" editgroups (created but never edited)
-- refactor API schema for some entity-generic methos (eg, history, edit
+- refactor API schema for some entity-generic methods (eg, history, edit
operations) to take entity type as a URL path param. greatly reduce macro
foolery and method count/complexity, and ease creation of new entities
=> /{entity}/edit/{edit_id}
@@ -161,7 +161,7 @@ new importers:
convert JATS if necessary
- switch from slog to simple pretty_env_log
- format returned datetimes with only second precision, not millisecond (RFC mode)
- => burried in model serialization internals
+ => buried in model serialization internals
- refactor openapi schema to use shared response types
- consider using "HTTP 202: Accepted" for entity-mutating calls
- basic python hbase/elastic matcher
diff --git a/extra/elasticsearch/README.md b/extra/elasticsearch/README.md
index edb4f1f6..90019147 100644
--- a/extra/elasticsearch/README.md
+++ b/extra/elasticsearch/README.md
@@ -83,7 +83,7 @@ a new index and then cut over with no downtime.
http put :9200/fatcat_release_v03 < release_schema.json
-To replace a "real" index with an alias pointer, do two actions (not truely
+To replace a "real" index with an alias pointer, do two actions (not truly
zero-downtime, but pretty fast):
http delete :9200/fatcat_release
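After deleting the "real" index as shown in this hunk, the second action points an alias of the old name at the versioned index. A minimal sketch of building that request body for Elasticsearch's `_aliases` endpoint; the index and alias names come from this hunk, the helper name is an assumption:

```python
import json

def alias_add_body(new_index: str, alias: str) -> str:
    # Body for POST /_aliases: point the alias (the old "real" index
    # name) at the new versioned index.
    actions = {
        "actions": [
            {"add": {"index": new_index, "alias": alias}},
        ]
    }
    return json.dumps(actions, indent=2)

if __name__ == "__main__":
    print(alias_add_body("fatcat_release_v03", "fatcat_release"))
```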
diff --git a/extra/journal_metadata/README.md b/extra/journal_metadata/README.md
index dec32624..cae52de3 100644
--- a/extra/journal_metadata/README.md
+++ b/extra/journal_metadata/README.md
@@ -2,7 +2,7 @@
This folder contains scripts to merge journal metadata from multiple sources and
provide a snapshot for bulk importing into fatcat.
-Specific bots will probably be needed to do continous updates; that's out of
+Specific bots will probably be needed to do continuous updates; that's out of
scope for this first import.
diff --git a/extra/sitemap/README.md b/extra/sitemap/README.md
index 581ee9f3..9f0dd4b0 100644
--- a/extra/sitemap/README.md
+++ b/extra/sitemap/README.md
@@ -37,8 +37,8 @@ In tree form:
Workflow:
-- run bash script over container dump, outputing compressed, sharded container sitemaps
-- run bash script over release work-grouped, outputing compressed, sharded release sitemaps
+- run bash script over container dump, outputting compressed, sharded container sitemaps
+- run bash script over release work-grouped, outputting compressed, sharded release sitemaps
- run python script to output top-level `sitemap.xml`
- `scp` all of this into place
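The top-level `sitemap.xml` in the workflow above is a sitemap index pointing at the sharded files. A minimal sketch of that python step; the shard filenames and base URL are assumptions for illustration:

```python
from xml.sax.saxutils import escape

def sitemap_index(base_url: str, shard_files: list) -> str:
    # Build a sitemap index document listing each sharded sitemap file.
    entries = "\n".join(
        "  <sitemap><loc>%s/%s</loc></sitemap>" % (base_url, escape(name))
        for name in shard_files
    )
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
        + entries +
        "\n</sitemapindex>\n"
    )

if __name__ == "__main__":
    shards = ["sitemap-containers-00000.txt.gz", "sitemap-releases-00000.txt.gz"]
    print(sitemap_index("https://fatcat.wiki", shards))
```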
diff --git a/python/README_import.md b/python/README_import.md
index 74e75e14..1d54f9d7 100644
--- a/python/README_import.md
+++ b/python/README_import.md
@@ -140,7 +140,7 @@ Takes a few hours.
## dblp
See `extra/dblp/README.md` for notes about first importing container metadata
-and getting a TSV mapping flie to help with import. This is needed because
+and getting a TSV mapping file to help with import. This is needed because
there is not (yet) a lookup mechanism for `dblp_prefix` as an identifier of
container entities.
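The TSV mapping mentioned in this hunk pairs a `dblp_prefix` with a container identifier, since no lookup mechanism exists yet. A minimal sketch of loading such a file; the exact column layout is an assumption:

```python
import csv
import io

def load_dblp_container_map(tsv_text: str) -> dict:
    # Parse a two-column TSV of dblp_prefix -> container ident; the
    # real file's columns may differ, this layout is an assumption.
    mapping = {}
    for row in csv.reader(io.StringIO(tsv_text), delimiter="\t"):
        if len(row) >= 2:
            mapping[row[0]] = row[1]
    return mapping

if __name__ == "__main__":
    sample = "journals/cacm\taaaaaaaaaaaaaaaaaaaaaaaaaa\n"
    print(load_dblp_container_map(sample))
```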
diff --git a/rust/HACKING.md b/rust/HACKING.md
index c321cded..fbdeb499 100644
--- a/rust/HACKING.md
+++ b/rust/HACKING.md
@@ -26,7 +26,7 @@ are verbose and implemented in a very mechanical fashion. The return type
mapping in `api_wrappers` might be necessary, but `database_models.rs` in
particular feels unnecessary; other projects have attempted to completely
automate generation of this file, but it doesn't sound reliable. In particular,
-both regular "Row" (queriable) and "NewRow" (insertable) structs need to be
+both regular "Row" (queryable) and "NewRow" (insertable) structs need to be
defined.
## Test Structure
diff --git a/rust/README.md b/rust/README.md
index 6f213629..36061240 100644
--- a/rust/README.md
+++ b/rust/README.md
@@ -71,7 +71,7 @@ All configuration goes through environment variables, the notable ones being:
- `TEST_DATABASE_URL`: used when running `cargo test`
- `AUTH_LOCATION`: the domain authentication tokens should be valid over
- `AUTH_KEY_IDENT`: a unique name for the primary auth signing key (used to
- find the correct key after key rotation has occured)
+ find the correct key after key rotation has occurred)
- `AUTH_SECRET_KEY`: base64-encoded secret key used to both sign and verify
authentication tokens (symmetric encryption)
- `AUTH_ALT_KEYS`: additional ident/key pairs that can be used to verify tokens
diff --git a/rust/TODO b/rust/TODO
index 9a6ea910..1baff6ea 100644
--- a/rust/TODO
+++ b/rust/TODO
@@ -28,7 +28,7 @@ later:
https://github.com/jkcclemens/paste/blob/942d1ede8abe80a594553197f2b03c1d6d70efd0/webserver/build.rs
https://github.com/jkcclemens/paste/blob/942d1ede8abe80a594553197f2b03c1d6d70efd0/webserver/src/main.rs#L44
- "prev_rev" required in updates
-- tried using sync::Once to wrap test database initilization (so it would only
+- tried using sync::Once to wrap test database initialization (so it would only
run migrations once), but it didn't seem to work, maybe I had a bug or it
didn't compile?
=> could also do a global mutex: https://github.com/SergioBenitez/Rocket/issues/697