-rw-r--r--  notes/database_dumps_backups.txt  44
-rw-r--r--  rust/HACKING.md                   29
-rw-r--r--  rust/INSTALL.md                   46
-rw-r--r--  rust/README.md                    85
4 files changed, 120 insertions, 84 deletions
diff --git a/notes/database_dumps_backups.txt b/notes/database_dumps_backups.txt
new file mode 100644
index 00000000..0b05b9b8
--- /dev/null
+++ b/notes/database_dumps_backups.txt
@@ -0,0 +1,44 @@
+
+## Dumps and Backups
+
+There are a few different database dump formats folks might want:
+
+- raw native database backups, for disaster recovery (would include
+ volatile/unsupported schema details, user API credentials, full history,
+ in-process edits, comments, etc)
+- a sanitized version of the above: roughly per-table dumps of the full state
+ of the database. Could use per-table SQL expressions with sub-queries to pull
+ in small tables ("partial transform") and export JSON for each table (see
+ the sketch after this list); would be extra work to maintain, so not pursuing for now.
+- full history, full public schema exports, in a form that might be used to
+ mirror or entirely fork the project. Propose supplying the full "changelog"
+ in API schema format, in a single file to capture all entity history, without
+ "hydrating" any inter-entity references. Rely on separate dumps of
+ non-entity, non-versioned tables (editors, abstracts, etc). Note that a
+ variant of this could use the public interface, in particular to do
+ incremental updates (though that wouldn't capture schema changes).
+- transformed exports of the current state of the database (i.e., without
+ history). Useful for data analysis, search engines, etc. Propose supplying
+ just the Release table in a fully "hydrated" state to start. Unclear whether
+ this should be per-work or per-release; will go with release for now. Harder
+ to do via the public interface because of the need for transaction locking.
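+
+As a rough sketch of the "partial transform" JSON export idea (a hedged
+example, not a settled plan; the release_rev table name is an assumption
+about the SQL schema):
+
+ # dump one table as newline-delimited JSON, compressed
+ psql fatcat -c "COPY (SELECT row_to_json(r) FROM release_rev r) TO STDOUT" | gzip > release_rev.json.gz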
+
+Backing up the entire database using `pg_dump`, with parallelism 1 (use more
+on a larger machine with fast disks; try 4 or 8?). Parallel dumps require the
+directory output format (`-Fd`). This assumes the database is named 'fatcat'
+and that the current user has access:
+
+ pg_dump -j1 -Fd -f test-dump fatcat
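+
+Restoring such a directory-format dump might look like the following (a
+sketch; `--clean --if-exists` drops and re-creates objects, adjust to taste):
+
+ pg_restore -j4 --clean --if-exists -d fatcat test-dump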
+
diff --git a/rust/HACKING.md b/rust/HACKING.md
new file mode 100644
index 00000000..a399164c
--- /dev/null
+++ b/rust/HACKING.md
@@ -0,0 +1,29 @@
+
+## Updating Schemas
+
+Regenerate API schemas after editing the fatcat-openapi2 schema. This will, as
+a side effect, also run `cargo fmt` on the whole project, so don't run it with
+your editor open!
+
+ cargo install cargo-swagger # uses docker
+ ./codegen_openapi2.sh
+
+Update Rust database schema (after changing raw SQL schema):
+
+ diesel database reset
+ diesel print-schema > src/database_schema.rs
+
+Debug SQL schema errors (if diesel commands fail):
+
+ psql fatcat_test < migrations/2018-05-12-001226_init/up.sql
+
+## Direct API Interaction
+
+Creating entities via API:
+
+ http --json post localhost:9411/v0/container name=asdf issn=1234-5678
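+
+Fetching an entity back out (a sketch: this assumes the create response
+includes the new entity's identifier, captured here in $IDENT):
+
+ http get localhost:9411/v0/container/$IDENT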
diff --git a/rust/INSTALL.md b/rust/INSTALL.md
new file mode 100644
index 00000000..c2b86c51
--- /dev/null
+++ b/rust/INSTALL.md
@@ -0,0 +1,46 @@
+
+Canonical IA production/QA ansible scripts are in the journal-infra repo; these
+directions are likely to end up out of date.
+
+## Simple Deployment
+
+To install manually, on a bare server, as root:
+
+ adduser fatcat
+ apt install postgresql-9.6 postgresql-contrib postgresql-client-9.6 \
+ nginx build-essential git pkg-config libssl-dev libpq-dev \
+ htop screen
+ mkdir -p /srv/fatcat
+ chown fatcat:fatcat /srv/fatcat
+
+ # set up new postgres user
+ su - postgres
+ createuser -P -s fatcat # strong random password
+ # DELETE: createdb fatcat
+
+ # as fatcat user
+ su - fatcat
+ ssh-keygen
+ curl https://sh.rustup.rs -sSf | sh
+ source $HOME/.cargo/env
+ cargo install diesel_cli --no-default-features --features "postgres"
+ cd /srv/fatcat
+ git clone git@git.archive.org:webgroup/fatcat
+ cd fatcat/rust
+ cargo build
+ echo "DATABASE_URL=postgres://fatcat@localhost/fatcat" > .env
+ diesel database reset
+
+ # as fatcat, in a screen or something
+ cd /srv/fatcat/fatcat/rust
+ cargo run
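+
+ # as root: nginx was installed above but never configured by these steps.
+ # A minimal reverse-proxy sketch (assumes the API listens on port 9411, as
+ # used elsewhere in these docs):
+ echo 'server {
+     listen 80;
+     location / { proxy_pass http://127.0.0.1:9411; }
+ }' > /etc/nginx/sites-available/fatcat
+ ln -s /etc/nginx/sites-available/fatcat /etc/nginx/sites-enabled/fatcat
+ systemctl reload nginx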
diff --git a/rust/README.md b/rust/README.md
index a6873345..c061a1f9 100644
--- a/rust/README.md
+++ b/rust/README.md
@@ -29,87 +29,4 @@ Tests:
cargo test -- --test-threads 1
-## Simple Deployment
-
-Canonical ansible scripts are in the journal-infra repo. To install manually,
-on a bare server, as root:
-
- adduser fatcat
- apt install postgresql-9.6 postgresql-contrib postgresql-client-9.6 \
- nginx build-essential git pkg-config libssl-dev libpq-dev \
- htop screen
- mkdir -p /srv/fatcat
- chown fatcat:fatcat /srv/fatcat
-
- # setup new postgres user
- su - postgres
- createuser -P -s fatcat # strong random password
- # DELETE: createdb fatcat
-
- # as fatcat user
- su - fatcat
- ssh-keygen
- curl https://sh.rustup.rs -sSf | sh
- source $HOME/.cargo/env
- cargo install diesel_cli --no-default-features --features "postgres"
- cd /srv/fatcat
- git clone git@git.archive.org:webgroup/fatcat
- cd rust
- cargo build
- echo "DATABASE_URL=postgres://fatcat@localhost/fatcat" > .env
- diesel database reset
-
- # as fatcat, in a screen or something
- cd /srv/fatcat/fatcat/rust
- cargo run
-
-### Dumps and Backups
-
-There are a few different databaase dump formats folks might want:
-
-- raw native database backups, for disaster recovery (would include
- volatile/unsupported schema details, user API credentials, full history,
- in-process edits, comments, etc)
-- a sanitized version of the above: roughly per-table dumps of the full state
- of the database. Could use per-table SQL expressions with sub-queries to pull
- in small tables ("partial transform") and export JSON for each table; would
- be extra work to maintain, so not pursuing for now.
-- full history, full public schema exports, in a form that might be used to
- mirror or enitrely fork the project. Propose supplying the full "changelog"
- in API schema format, in a single file to capture all entity history, without
- "hydrating" any inter-entity references. Rely on separate dumps of
- non-entity, non-versioned tables (editors, abstracts, etc). Note that a
- variant of this could use the public interface, in particular to do
- incremental updates (though that wouldn't capture schema changes).
-- transformed exports of the current state of the database (aka, without
- history). Useful for data analysis, search engines, etc. Propose supplying
- just the Release table in a fully "hydrated" state to start. Unclear if
- should be on a work or release basis; will go with release for now. Harder to
- do using public interface because of the need for transaction locking.
-
-Backing up the entire database using `pg_dump`, with parallelism 1 (use more on
-larger machine with fast disks; try 4 or 8?), assuming the database name is
-'fatcat', and the current user has access:
-
- pg_dump -j1 -Fd -f test-dump fatcat
-
-### Special Tricks
-
-Regenerate API schemas (this will, as a side-effect, also run `cargo fmt` on
-the whole project, so don't run it with your editor open):
-
- cargo install cargo-swagger # uses docker
- ./codegen_openapi2.sh
-
-Regenerate SQL schema:
-
- diesel database reset
- diesel print-schema > src/database_schema.rs
-
-Debugging SQL schema errors:
-
- psql fatcat_test < migrations/2018-05-12-001226_init/up.sql
-
-Creating entities via API:
-
- http --json post localhost:9411/v0/container name=asdf issn=1234-5678
+See `HACKING.md` for more advanced tips and commands.