PDS proof of concept:
x ipld sqlite driver importing CAR file
=> simple binary, two args
x skeleton
x env config: DB paths, port
x commands: serve, import, inspect
x integration test
x atp db wrapper (with methods)
schema in a single .sql file
https://docs.rs/rusqlite_migration/latest/rusqlite_migration/
test version (in memory, per-thread)
wrap in a mutex, unwrap and make new connection when desired
x wrap both databases in a struct with mutexes; have "get handle" helper that unlocks and returns a connection copy of the given type
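A minimal sketch of that wrapper struct and "get handle" helper. The `Connection` type here is a stand-in for `rusqlite::Connection` (with a hypothetical `path` field used to re-open), so the mutex-then-fresh-connection shape is what matters, not the names:

```rust
use std::sync::{Arc, Mutex};

// Stand-in for rusqlite::Connection; the real code would open SQLite here.
struct Connection {
    path: String,
}

impl Connection {
    fn open(path: &str) -> Connection {
        Connection { path: path.to_string() }
    }
}

// Both databases live in one service struct, each behind a mutex.
struct AtpService {
    atp_db: Arc<Mutex<Connection>>,
    block_db: Arc<Mutex<Connection>>,
}

impl AtpService {
    // "get handle" helper: lock the shared connection just long enough to
    // learn its path, then return a brand-new connection of the same type,
    // so the caller never holds the mutex across its queries.
    fn atp_db_handle(&self) -> Connection {
        let guard = self.atp_db.lock().expect("poisoned atp_db mutex");
        Connection::open(&guard.path)
    }
}
```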
x repo store database wrapper (with methods)
x response error handling (especially for XRPC endpoints)
- basic crypto and did:plc stuff
did:key read/write helpers
signature read/write helpers
test that did:plc generated as expected
- MST code to read and mutate tree state
=> just read the whole tree and then write the whole tree
=> check that empty tree works (eg, for account creation, and after deletes)
=> with in-memory tests
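The read-everything/write-everything approach can be sketched as mutations applied to an ordered map keyed by record path; `Mutation` and the string `Cid` alias are illustrative stand-ins, not the real IPLD types:

```rust
use std::collections::BTreeMap;

// Stand-in for a record CID; real code would use an IPLD CID type.
type Cid = String;

// A repo mutation, keyed by "collection/tid" path.
enum Mutation {
    Put(String, Cid),
    Delete(String),
}

// Read the whole tree into an ordered map, apply mutations, and hand the
// whole map back to be re-serialized as an MST. An empty map is a valid
// tree (account creation, or after deleting every record).
fn apply_mutations(
    mut tree: BTreeMap<String, Cid>,
    mutations: Vec<Mutation>,
) -> BTreeMap<String, Cid> {
    for m in mutations {
        match m {
            Mutation::Put(path, cid) => {
                tree.insert(path, cid);
            }
            Mutation::Delete(path) => {
                tree.remove(&path);
            }
        }
    }
    tree
}
```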
- service-level config
domain suffixes (eg, just ".test" for now)
account registration allowed or not
CLI account creation (?)
PDS signing key
- implement basic non-authenticated CRUD on repository, test with CLI
com.atproto
createAccount
repoDescribe
repoGetRecord
repoListRecords
repoBatchWrite
repoCreateRecord
repoPutRecord
repoDeleteRecord
syncGetRoot
syncGetRepo
syncUpdateRepo
- single shared signing key for all users (not what I expected)
- helper web methods
xrpc_wrap<S: Serialize>(resp: Result<S>) -> Response
xrpc_get_atproto(endpoint: &str, req) -> Result<Value>
xrpc_post_atproto(endpoint: &str, req) -> Result<Value>
xrpc_wrap(xrpc_get_atproto(srv, "asdf", req))
? python test script
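A sketch of `xrpc_wrap`, simplified to a `String` body since serde isn't pulled in here (the real version would take `S: Serialize` as in the signature above); `Response` and `XrpcError` are stand-ins for the web framework's types:

```rust
// Minimal stand-in for the web framework's response type (rouille or
// warp in the real code).
struct Response {
    status: u16,
    body: String,
}

#[derive(Debug)]
struct XrpcError {
    name: String, // e.g. "InvalidRequest"
    message: String,
}

// xrpc_wrap: turn a handler's Result into an HTTP response, encoding
// failures in the XRPC error shape {"error": ..., "message": ...}.
fn xrpc_wrap(resp: Result<String, XrpcError>) -> Response {
    match resp {
        Ok(body) => Response { status: 200, body },
        Err(e) => Response {
            status: 400,
            body: format!(
                "{{\"error\": \"{}\", \"message\": \"{}\"}}",
                e.name, e.message
            ),
        },
    }
}
```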
- sqlite schema (for application)
- write wrapper which updates MST *and* updates other tables in a transaction
- JSON schema type generation (separate crate?)
- HTTP API handler implementing many endpoints
com.atproto
createSession
getAccountsConfig
getSession
resolveName
app.bsky
getHomeFeed
getAuthorFeed
getLikedBy
getNotificationCount
getNotifications
getPostThread
getProfile
getRepostedBy
getUserFollowers
getUserFollows
getUsersSearch
postNotificationsSeen
updateProfile
- did:web handler?
other utils/helpers:
- pack/unpack a repo CAR into JSON files in a directory tree (plus a commit.json with sig?)
libraries:
- `jsonschema` to validate requests and records (rich validation)
- `schemafy` to codegen serde types for records (ahead of time?)
- pretty_env_logger
- no good published crate for working with CAR files... could rip out this code?
https://github.com/n0-computer/iroh/tree/main/iroh-car
- ??? for CBOR (de)serialization of MST, separate from the IPLD stuff?
sync option:
- `rouille` web framework
- `rusqlite` with "bundled" sqlite for datastore
- `rusqlite_migration`
- `ipfs-sqlite-block-store` and `libipld` to parse and persist repo content
async option:
- `warp` as async HTTP service
- `sqlx` for async pooled sqlite or postgresql db
- `iroh-store` for async rocksdb IPFS blockstore
## concurrency (in warp app)
note that there isn't really any point in having this be async, given that we
just have a single shared sqlite on disk. could try `rouille` instead of
`warp`?
maybe good for remote stuff like did:web resolution?
could try using sqlx instead of rusqlite for natively-async database stuff?
for block store:
- open a single connection at startup, store in mutex
- handlers get a reference to mutex. if they need a connection, they enter a blocking thread then:
block on the mutex, then create a new connection, unlock the mutex
do any operations on connection synchronously
exit the block
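The steps above, sketched with a plain `std::thread` standing in for warp's blocking-thread mechanism (`tokio::task::spawn_blocking` in a real warp app) and a stand-in `Connection` for the block store handle:

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Stand-in for the block store connection (e.g. an SQLite handle).
struct Connection {
    path: String,
}

// The handler holds an Arc<Mutex<...>>. On a blocking thread it locks the
// mutex, opens a fresh connection, drops the lock, then does all of its
// work synchronously on the new connection.
fn with_block_store<T, F>(shared: Arc<Mutex<Connection>>, work: F) -> T
where
    T: Send + 'static,
    F: FnOnce(Connection) -> T + Send + 'static,
{
    thread::spawn(move || {
        // Lock only long enough to learn how to open a new connection.
        let conn = {
            let guard = shared.lock().expect("poisoned block store mutex");
            Connection { path: guard.path.clone() }
        };
        // Mutex is released here; run the synchronous operations.
        work(conn)
    })
    .join()
    .expect("block store thread panicked")
}
```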
## system tables
account
did (PK)
username (UNIQUE, indexed)
email (UNIQUE)
password_bcrypt
signing_key
did_doc
did (PK)
seen_at (timestamp)
session
did
jwt
???
repo
did
head_commit
record (should this exist? good for queries)
did
collection
tid
record_cid
record_cbor (CBOR bytes? JSON?)
password_reset
did
token
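The system tables above, sketched as SQLite DDL for the single `.sql` schema file; column types and constraints are provisional:

```sql
-- Provisional system tables; session and password_reset omitted until
-- their shapes settle.
CREATE TABLE account (
    did             TEXT PRIMARY KEY,
    username        TEXT NOT NULL UNIQUE, -- UNIQUE implies an index in SQLite
    email           TEXT NOT NULL UNIQUE,
    password_bcrypt TEXT NOT NULL,
    signing_key     TEXT NOT NULL
);

CREATE TABLE did_doc (
    did     TEXT PRIMARY KEY,
    seen_at TEXT NOT NULL -- timestamp
);

CREATE TABLE repo (
    did         TEXT PRIMARY KEY,
    head_commit TEXT NOT NULL
);
```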
## atp tables
what actually needs to be indexed?
- post replies (forwards and backwards)
- likes (back index)
- follows (back index)
- usernames (as part of profile?)
- mentions? hashtags?
additional state
- notifications
bsky_post
did
tid (or timestamp from tid?)
text
reply_root (nullable)
reply_parent (nullable)
entities: JSON (?)
bsky_profile
did
tid
display_name
description
my_state (JSON)
bsky_follow
did
tid
target_did
bsky_like
did
tid
target_uri
target_cid (what is this? the commit, or record CID?)
bsky_repost
did
tid
target_uri
target_cid
bsky_notification
did
created_at (timestamp)
seen (boolean)
reason
target_uri
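A provisional DDL sketch for a few of the bsky tables, with the back-indexes called out above (reply lookups, likes, follows); names and types are tentative:

```sql
CREATE TABLE bsky_post (
    did          TEXT NOT NULL,
    tid          TEXT NOT NULL,
    text         TEXT NOT NULL,
    reply_root   TEXT, -- nullable AT URI
    reply_parent TEXT, -- nullable AT URI
    entities     TEXT, -- JSON
    PRIMARY KEY (did, tid)
);
-- backward reply lookups (children of a given post)
CREATE INDEX bsky_post_reply_parent_idx ON bsky_post(reply_parent);

CREATE TABLE bsky_like (
    did        TEXT NOT NULL,
    tid        TEXT NOT NULL,
    target_uri TEXT NOT NULL,
    target_cid TEXT NOT NULL,
    PRIMARY KEY (did, tid)
);
-- back index: who liked a given post
CREATE INDEX bsky_like_target_idx ON bsky_like(target_uri);

CREATE TABLE bsky_follow (
    did        TEXT NOT NULL,
    tid        TEXT NOT NULL,
    target_did TEXT NOT NULL,
    PRIMARY KEY (did, tid)
);
-- back index: followers of a given did
CREATE INDEX bsky_follow_target_idx ON bsky_follow(target_did);
```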
TODO:
- bsky_badge (etc)
- bsky_media_embed