Diffstat (limited to 'docs')
-rw-r--r--  docs/api.md               29
-rw-r--r--  docs/contents.json        31
-rw-r--r--  docs/cookbook/browser.md  153
-rw-r--r--  docs/cookbook/diy-dat.md  135
-rw-r--r--  docs/diy-dat.md           41
-rw-r--r--  docs/ecosystem.md         38
-rw-r--r--  docs/faq.md               29
-rw-r--r--  docs/how-dat-works.md     6
-rw-r--r--  docs/welcome.md           27
9 files changed, 357 insertions, 132 deletions
diff --git a/docs/api.md b/docs/api.md
deleted file mode 100644
index 68a7c3f..0000000
--- a/docs/api.md
+++ /dev/null
@@ -1,29 +0,0 @@
-## 1.0 Architecture Design
-
-
- * dat: command-line api
- * dat-desk: desktop application
- * hyperdrive: storage layer
- * discovery-swarm: dat network swarm discovery mechanism
-
-## dat
-
-Command-line interface for dat
-
-#### `dat share DIR`
-
-Create a new dat link for the contents of the given directory. Prints a URL, which is a unique public key feed. This public key feed can be appended to.
-
-###### Options
-
- * `--append=URL`: Adds the new URL to the public key feed.
- * `--static`: Ensures that the URL cannot be appended to.
-
-#### `dat URL DIR`
-
-Downloads the link to the given directory, and then exits.
-
-###### Options
-
- * `--seed`: Downloads the link to the given directory and opens up a server that seeds it to the dat peer network.
- * `--list`: Fetches the metadata for the link and prints out the file list in the console.
diff --git a/docs/contents.json b/docs/contents.json
index 83f5d73..4ca151a 100644
--- a/docs/contents.json
+++ b/docs/contents.json
@@ -1,26 +1,17 @@
{
- "Introduction": {
- "Welcome to Dat": "welcome.md",
- "How Dat Works": "how-dat-works.md"
+ "Dat": {
+ "Introduction": "dat.md",
+ "How Dat Works": "how-dat-works.md",
+ "FAQ": "faq.md"
},
- "Specification": {
- "hyperdrive spec": "hyperdrive_spec.md",
- "sleep": "sleep.md"
+ "Cookbook": {
+ "Browser Dat": "cookbook/browser.md",
+ "DIY Dat": "cookbook/diy-dat.md"
},
- "References": {
- "API": "api.md",
- "DIY Dat": "diy-dat.md"
- },
- "Modules": {
+ "Ecosystem": {
"Overview": "ecosystem.md",
- "Interface": {
- "Dat Command Line": "dat.md",
- "dat.land": "dat.land.md",
- "Dat Desktop": "dat-desktop.md"
- },
- "Core": {
- "Hyperdrive": "hyperdrive.md",
- "Hypercore": "hypercore.md"
- }
+ "SLEEP": "sleep.md",
+ "Hyperdrive": "hyperdrive.md",
+ "Hypercore": "hypercore.md"
}
}
diff --git a/docs/cookbook/browser.md b/docs/cookbook/browser.md
new file mode 100644
index 0000000..22010e4
--- /dev/null
+++ b/docs/cookbook/browser.md
@@ -0,0 +1,153 @@
+# Browser Dat
+
+Dat is written in JavaScript, so naturally, it can work entirely in the browser! The great part about this is that as more peers connect to each other in their clients, the site assets will be shared between users rather than hitting any server.
+
+This approach is similar to that used in Feross' [Web Torrent](http://webtorrent.io). The difference is that Dat links can be rendered live and read dynamically, whereas BitTorrent links are static. In other words, the original owner can update a Dat and all peers will receive the updates automatically.
+
+OK, now for the goods:
+
+## Hyperdrive
+
+For now, there isn't an easy dat implementation for the browser. We have a simpler interface for node at [dat-js](http://github.com/joehand/dat-js).
+
+If you want to get your hands dirty, here are the lower-level pieces needed to create a browser-based hyperdrive instance that will be compatible with dat.
+
+Hyperdrive saves the metadata (small) and the content (potentially large) separately. You can control where both of these are saved and how they are retrieved. These choices have a huge impact on performance, stability, and user experience, so it's important to understand the tradeoffs.
+
+The first argument to `hyperdrive` will be the main database for all metadata and content. The `file` option can be supplied to specify how to read and write content data. If a `file` option is not supplied, the content will also be stored in the main database.
+
+```js
+var hyperdrive = require('hyperdrive')
+var drive = hyperdrive(<YOUR DATABASE HERE>, {file: <CONTENT DATABASE HERE>})
+```
+
+### The most basic example
+
+```js
+var hyperdrive = require('hyperdrive')
+var memdb = require('memdb')
+var swarm = require('hyperdrive-archive-swarm')
+
+var drive = hyperdrive(memdb())
+var archive = drive.createArchive()
+
+// joins the webrtc swarm
+swarm(archive)
+
+// this key can be used in another browser tab
+console.log(archive.key)
+```
+
+That's it. Now you are serving a dat-compatible hyperdrive from the browser. In another browser tab, you can connect to the swarm and download the data by using the same code as above. Just make sure to reference the hyperdrive you created before by using `archive.key` as the first argument:
+
+```js
+var drive = hyperdrive(memdb())
+var archive = drive.createArchive(<KEY HERE>)
+
+// joins the webrtc swarm
+swarm(archive)
+```
+
+For the full hyperdrive API and more examples, see the [hyperdrive documentation](/hyperdrive).
+
+## Patterns for browser-based data storage and transfer
+
+There are a million different ways to store and retrieve data in the browser, and all have their pros and cons depending on the use case. We've compiled a variety of examples here to try to make it as clear as possible.
+
+### In-memory storage
+
+When the user refreshes their browser, they will lose all previous keys and data. The user will no longer be able to write more data into the hyperdrive.
+
+```js
+var hyperdrive = require('hyperdrive')
+var memdb = require('memdb')
+
+var drive = hyperdrive(memdb())
+var archive = drive.createArchive()
+```
+
+### Persistence with IndexedDB
+
+When the user refreshes their browser, their keys will be stored and retrieved.
+
+The best module to use for this is `level-browserify`:
+
+```js
+var hyperdrive = require('hyperdrive')
+var level = require('level-browserify')
+
+var drive = hyperdrive(level('./mydb'))
+var archive = drive.createArchive()
+```
+
+This will store all of the hyperdrive metadata *as well as content* in the client's IndexedDB. This is pretty inefficient. You'll notice with this method that *IndexedDB will start to become full and the hyperdrive database will stop working as usual*.
+
+### Persistent metadata in IndexedDB with in-memory file content
+
+If you use level-browserify to store file content, you will quickly notice performance issues with large files. Writes after about 3.4GB will become blocked by the browser. You can avoid this by using in-memory storage for the file content.
+
+To do this, use [random-access-memory](https://github.com/mafintosh/random-access-memory) as the file writer and reader for the hyperdrive.
+
+```js
+var hyperdrive = require('hyperdrive')
+var level = require('level-browserify')
+var ram = require('random-access-memory')
+
+var drive = hyperdrive(level('./mydb'))
+var archive = drive.createArchive({
+ file: ram
+})
+```
+
+This works well for most cases until you want to write a file to hyperdrive that doesn't fit in memory.
+
+### Writing large files from the filesystem to the browser
+
+File writes are limited to the available memory on the machine. Files are buffered (read: copied) *into memory* while being written to the hyperdrive instance. This isn't ideal, but works as long as file sizes stay below system RAM limits.
+
+To fix this problem, you can use [random-access-file-reader](https://github.com/mafintosh/random-access-file-reader) to read the files directly from the filesystem instead of buffering them into memory.
+
+Here we will create a simple program that creates a file 'drag and drop' element on `document.body`. When the user drags files onto the element, pointers to them will be added to the `files` object.
+
+
+```js
+var drop = require('drag-drop')
+
+var files = {}
+
+drop(document.body, function (dropped) {
+  // store a pointer to the dropped file, keyed by filename
+  files[dropped[0].name] = dropped[0]
+})
+```
+
+Okay, that's pretty easy. Now let's add the hyperdrive. Hyperdrive needs to know what the pointers are, so when a peer asks for the file, it can read from the filesystem rather than from memory. In other words, we are telling the hyperdrive which files it should index.
+
+```js
+var drop = require('drag-drop')
+var reader = require('random-access-file-reader')
+var hyperdrive = require('hyperdrive')
+var memdb = require('memdb')
+
+var files = {}
+
+var drive = hyperdrive(memdb())
+
+var archive = drive.createArchive({
+ file: function (name) {
+ return reader(files[name])
+ }
+})
+
+drop(document.body, function (dropped) {
+  // store a pointer to the dropped file, keyed by filename
+  files[dropped[0].name] = dropped[0]
+  // index the file using hyperdrive without reading the entire file into ram
+  archive.append(dropped[0].name)
+})
+```
+
+## Unsatisfied?
+
+If you still aren't satisfied, come over to our community channels and ask a question. It's probably a good one and we should cover it in the documentation. Thanks for trying it out, and PRs are always welcome!
+
+[![#dat IRC channel on freenode](https://img.shields.io/badge/irc%20channel-%23dat%20on%20freenode-blue.svg)](http://webchat.freenode.net/?channels=dat)
+[![datproject/discussions](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/datproject/discussions?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
diff --git a/docs/cookbook/diy-dat.md b/docs/cookbook/diy-dat.md
new file mode 100644
index 0000000..5c3c21e
--- /dev/null
+++ b/docs/cookbook/diy-dat.md
@@ -0,0 +1,135 @@
+# Build with Dat
+
+In this guide, we will show how to develop applications with the Dat ecosystem. The Dat ecosystem is very modular, making it easy to develop custom applications using Dat.
+
+For any Dat application, there are three essential modules you will start with:
+
+1. [hyperdrive](https://npmjs.org/hyperdrive) for file synchronization and versioning
+2. [hyperdrive-archive-swarm](https://npmjs.org/hyperdrive-archive-swarm) for discovering and connecting to peers over local networks and the internet
+3. A [LevelDB](https://npmjs.org/level)-compatible database for storing metadata
+
+The [Dat CLI](https://npmjs.org/dat) module itself combines these modules and wraps them in a command-line API. These modules can be swapped out for similarly compatible modules, such as swapping LevelDB for [MemDB](https://github.com/juliangruber/memdb) (which we do in the first example, and as sketched below). More details on how these modules work together are available in [How Dat Works](how-dat-works.md).
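+
+For instance, the metadata database is just a LevelUP-compatible instance passed to hyperdrive, so swapping storage backends is a one-line change. A minimal sketch, assuming both `memdb` and `level` are installed:
+
+```js
+var Hyperdrive = require('hyperdrive')
+var memdb = require('memdb')  // in-memory metadata, lost when the process exits
+var level = require('level')  // persistent metadata stored on disk
+
+// either backend works because both speak the LevelUP API
+var drive = Hyperdrive(memdb())
+// var drive = Hyperdrive(level('./dat.db'))
+
+var archive = drive.createArchive()
+```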
+
+## Getting Started
+
+You will need node and npm installed to build with Dat. [Read more](https://github.com/maxogden/dat/blob/master/CONTRIBUTING.md#development-workflow) about our development workflow to learn how we manage our module dependencies during development.
+
+## Module #1: Download a File
+
+Our first module will download files from a Dat link entered by the user. View the code for this module on [Github](https://github.com/joehand/diy-dat-examples/tree/master/module-1).
+
+```bash
+mkdir module-1 && cd module-1
+npm init
+npm install --save hyperdrive memdb hyperdrive-archive-swarm
+touch index.js
+```
+
+For this example, we will use [memdb](https://github.com/juliangruber/memdb) for our database (keeping the metadata in memory rather than on the file system). In your `index.js` file, require the main modules and set them up:
+
+```js
+var memdb = require('memdb')
+var Hyperdrive = require('hyperdrive')
+var Swarm = require('hyperdrive-archive-swarm')
+
+var link = process.argv[2] // user inputs the dat link
+
+var db = memdb()
+var drive = Hyperdrive(db)
+var archive = drive.createArchive(link)
+var swarm = Swarm(archive)
+```
+
+Notice that the user inputs the link as the second argument. The easiest way to get a file from a hyperdrive archive is to make a read stream. `archive.createFileReadStream` accepts an index number or filename as its first argument. To display the file, we can create a file stream and pipe it to `process.stdout`.
+
+```js
+var stream = archive.createFileReadStream(0) // get the first file
+stream.pipe(process.stdout)
+```
+
+Now, you can run the module! To download the first file from our docs Dat, run:
+
+```
+node index.js 395e3467bb5b2fa083ee8a4a17a706c5574b740b5e1be6efd65754d4ab7328c2
+```
+
+You should see the first file in our docs repo.
+
+#### Module #1 Bonus: Display any file in the Dat
+
+With a few more lines of code, the user can enter a file to display from the Dat link.
+
+Challenge: create a module that will allow the user to input a Dat link and a filename: `node bonus.js <dat-link> <filename>`. The module will print out that file from the link, as we did above. To get a specific file you can change the file stream to use the filename instead of the index number:
+
+```js
+var stream = archive.createFileReadStream(fileName)
+```
+
+Once you are finished, see if you can view this file by running:
+
+```bash
+node bonus.js 395e3467bb5b2fa083ee8a4a17a706c5574b740b5e1be6efd65754d4ab7328c2 cookbook/diy-dat.md
+```
+
+[See how we coded it](https://github.com/joehand/diy-dat-examples/blob/master/module-1/bonus.js).
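+
+If you get stuck, one possible shape for the bonus module is sketched below (untested, reusing the modules from module #1 and reading the filename from the command line):
+
+```js
+var memdb = require('memdb')
+var Hyperdrive = require('hyperdrive')
+var Swarm = require('hyperdrive-archive-swarm')
+
+var link = process.argv[2]     // the dat link
+var fileName = process.argv[3] // the file to print, e.g. cookbook/diy-dat.md
+
+var drive = Hyperdrive(memdb())
+var archive = drive.createArchive(link)
+var swarm = Swarm(archive)
+
+// stream the requested file to stdout as it arrives from the swarm
+archive.createFileReadStream(fileName).pipe(process.stdout)
+```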
+
+## Module #2: Download all files to computer
+
+This module will build on the last module. Instead of displaying a single file, we will download all of the files from a Dat into a local directory. View the code for this module on [Github](https://github.com/joehand/diy-dat-examples/tree/master/module-2).
+
+To download the files to the file system, instead of to a database, we will use the `file` option in `hyperdrive` and the [random-access-file](http://npmjs.org/random-access-file) module. We will also learn two new archive functions that make handling all the files a bit easier than the file stream in module #1.
+
+Setup will be the same as before (make sure you install random-access-file and stream-each this time):
+
+```bash
+mkdir module-2 && cd module-2
+npm init
+npm install --save hyperdrive memdb hyperdrive-archive-swarm random-access-file stream-each
+touch index.js
+```
+
+The first part of the module will look the same. We will add random-access-file (and [stream-each](http://npmjs.org/stream-each) to make things easier). The only difference is that we have to specify the `file` option when creating our archive:
+
+```js
+var memdb = require('memdb')
+var Hyperdrive = require('hyperdrive')
+var Swarm = require('hyperdrive-archive-swarm')
+var raf = require('random-access-file') // this is new!
+var each = require('stream-each')
+var path = require('path') // built-in, used to build the download path
+
+var link = process.argv[2]
+
+var db = memdb()
+var drive = Hyperdrive(db)
+var archive = drive.createArchive(link, {
+ file: function (name) {
+ return raf(path.join('download', name)) // download into a "download" dir
+ }
+})
+var swarm = Swarm(archive)
+```
+
+Now that we are set up, we can work with the archive. The `archive.download` function downloads the file content (to wherever you specified in the `file` option). To download all the files, we will need a list of files, and then we will call download on each of them. `archive.list` will give us the list of files. We use the stream-each module to make it easy to iterate over each item in the archive, then exit when the stream is finished.
+
+```js
+var stream = archive.list({live: false}) // Use {live: false} for now to make the stream easier to handle.
+each(stream, function (entry, next) {
+ archive.download(entry, function (err) {
+ if (err) return console.error(err)
+ console.log('downloaded', entry.name)
+ next()
+ })
+}, function () {
+ process.exit(0)
+})
+```
+
+You should be able to run the module and see all our docs files in the `download` folder:
+
+```bash
+node index.js 395e3467bb5b2fa083ee8a4a17a706c5574b740b5e1be6efd65754d4ab7328c2
+```
+
+## Module #3: Sharing a file
+
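+A rough, untested sketch of how sharing a single file might look, reusing the same modules from the previous examples (here we create a new archive instead of downloading an existing one, and print its key so others can fetch it):
+
+```js
+var memdb = require('memdb')
+var Hyperdrive = require('hyperdrive')
+var Swarm = require('hyperdrive-archive-swarm')
+var raf = require('random-access-file')
+
+var fileName = process.argv[2] // the file to share
+
+var drive = Hyperdrive(memdb())
+var archive = drive.createArchive({
+  file: function (name) {
+    return raf(name) // serve the shared file straight from disk
+  }
+})
+
+// index the file, then join the swarm and print the link for peers
+archive.append(fileName, function (err) {
+  if (err) return console.error(err)
+  var swarm = Swarm(archive)
+  console.log('Share this link:', archive.key.toString('hex'))
+})
+```
+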
+## Module #4: Sharing a directory of files
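+
+And a similarly rough sketch for a flat directory (untested; a real implementation would walk subdirectories and watch for changes):
+
+```js
+var fs = require('fs')
+var path = require('path')
+var memdb = require('memdb')
+var Hyperdrive = require('hyperdrive')
+var Swarm = require('hyperdrive-archive-swarm')
+var raf = require('random-access-file')
+
+var dir = process.argv[2] // the directory to share
+
+var drive = Hyperdrive(memdb())
+var archive = drive.createArchive({
+  file: function (name) {
+    return raf(path.join(dir, name)) // entries are relative to the shared dir
+  }
+})
+
+// index every file in the directory (flat, no subdirectories)
+fs.readdirSync(dir).forEach(function (name) {
+  archive.append(name)
+})
+
+var swarm = Swarm(archive)
+console.log('Share this link:', archive.key.toString('hex'))
+```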
diff --git a/docs/diy-dat.md b/docs/diy-dat.md
deleted file mode 100644
index fb16fc1..0000000
--- a/docs/diy-dat.md
+++ /dev/null
@@ -1,41 +0,0 @@
-# DIY Dat
-
-This document shows how to write your own compatible `dat` client using node modules.
-
-The three essential node modules are called [hyperdrive](https://npmjs.org/hyperdrive), [hyperdrive-archive-swarm](https://npmjs.org/hyperdrive-archive-swarm) and [level](https://npmjs.org/level). Hyperdrive does file synchronization and versioning, hyperdrive-archive-swarm does peer discovery over local networks and the Internet, and level provides a local LevelDB for storing metadata. More details are available in [How Dat Works](how-dat-works.md). The [dat](https://npmjs.org/dat) module itself is just some code that combines these modules and wraps them in a command-line API.
-
-Here's the minimal code needed to download data from a dat:
-
-```js
-// run this like: node thisfile.js 4c325f7874b4070blahblahetc
-// the dat link someone sent us, we want to download the data from it
-var link = new Buffer(process.argv[2], 'hex')
-
-var Hyperdrive = require('hyperdrive')
-var Swarm = require('hyperdrive-archive-swarm')
-var level = require('level')
-var raf = require('random-access-file')
-var each = require('stream-each')
-
-var db = level('./dat.db')
-var drive = Hyperdrive(db)
-var archive = drive.createArchive(link, {
- file: function (name) {
- return raf(path.join(self.dir, name))
- }
-})
-var swarm = Swarm(archive)
-
-archive.open(function (err) {
- if (err) return console.error(err)
- each(archive.list({live: archive.live}), function (data, next) {
- var startBytes = self.stats.bytesDown
- archive.download(data, function (err) {
- if (err) return console.error(err)
- console.log('file downloaded', data.relname)
- next()
- })
- }, done)
-})
-
-```
diff --git a/docs/ecosystem.md b/docs/ecosystem.md
index 673c513..1c7a411 100644
--- a/docs/ecosystem.md
+++ b/docs/ecosystem.md
@@ -1,12 +1,26 @@
-If you want to go deeper and see the implementations we are using in the [Dat command-line tool](https://github.com/maxogden/dat), here you go:
-
-- [dat](https://www.npmjs.com/package/dat) - the main command line tool that uses all of the below
-- [discovery-channel](https://www.npmjs.com/package/discovery-channel) - discover data sources
-- [discovery-swarm](https://www.npmjs.com/package/discovery-swarm) - discover and connect to sources
-- [hyperdrive](https://www.npmjs.com/package/hyperdrive) - The file sharing network dat uses to distribute files and data. A technical specification / discussion on how hyperdrive works is [available here](https://github.com/mafintosh/hyperdrive/blob/master/SPECIFICATION.md)
-- [hypercore](https://www.npmjs.com/package/hypercore) - exchange low-level binary blocks with many sources
-- [bittorrent-dht](https://www.npmjs.com/package/bittorrent-dht) - use the Kademlia Mainline DHT to discover sources
-- [dns-discovery](https://www.npmjs.com/package/dns-discovery) - use DNS name servers and Multicast DNS to discover sources
-- [utp-native](https://www.npmjs.com/package/utp-native) - UTP protocol implementation
-- [rabin](https://www.npmjs.com/package/rabin) - Rabin fingerprinter stream
-- [merkle-tree-stream](https://www.npmjs.com/package/merkle-tree-stream) - Used to construct Merkle trees from chunks
+# Dat Module Ecosystem
+
+We have built and contributed to a variety of modules that support our work on Dat as well as the larger data and code ecosystem. Feel free to go deeper and see the implementations we are using in the [Dat command-line tool](https://github.com/maxogden/dat) and [dat-js](https://github.com/joehand/dat-js), the JavaScript Dat module.
+
+Dat embraces the Unix philosophy: a modular design with composable parts. All of the pieces can be replaced with alternative implementations as long as they implement the abstract API.
+
+## Public Interface Modules:
+
+* [dat](dat) - the command line interface for sharing and downloading files
+* [dat.land](dat.land) - repository for the [dat.land](https://dat.land) website, a public data registry and sharing site
+* [dat desktop](dat-desktop) - the Dat desktop application for sharing and downloading files
+
+## File and Block Component Modules:
+
+* [hyperdrive](hyperdrive) - The file sharing network dat uses to distribute files and data. Read the technical [hyperdrive specification](hyperdrive-specification) to learn how hyperdrive works.
+* [hypercore](hypercore) - exchange low-level binary blocks with many sources
+* [rabin](https://www.npmjs.com/package/rabin) - Rabin fingerprinter stream
+* [merkle-tree-stream](https://www.npmjs.com/package/merkle-tree-stream) - Used to construct Merkle trees from chunks
+
+## Networking & Peer Discovery Modules:
+
+* [discovery-channel](https://www.npmjs.com/package/discovery-channel) - discover data sources
+* [discovery-swarm](https://www.npmjs.com/package/discovery-swarm) - discover and connect to sources
+* [bittorrent-dht](https://www.npmjs.com/package/bittorrent-dht) - use the Kademlia Mainline DHT to discover sources
+* [dns-discovery](https://www.npmjs.com/package/dns-discovery) - use DNS name servers and Multicast DNS to discover sources
+* [utp-native](https://www.npmjs.com/package/utp-native) - UTP protocol implementation
diff --git a/docs/faq.md b/docs/faq.md
new file mode 100644
index 0000000..643f6bb
--- /dev/null
+++ b/docs/faq.md
@@ -0,0 +1,29 @@
+# FAQ
+
+## Is Dat different from hyperdrive?
+
+[Hyperdrive](http://github.com/mafintosh/hyperdrive) is a file sharing network originally built for dat.
+
+Dat uses hyperdrive and a variety of other modules. Hyperdrive and Dat are compatible with each other, but hyperdrive operates at a lower level. Dat presents a user-friendly interface and ecosystem for scientists, researchers, and data analysts.
+
+## How is Dat different from IPFS?
+
+## Is there a JavaScript implementation?
+
+Yes, find it on GitHub: [dat-js](http://github.com/joehand/dat-js).
+
+## Is there any non-persistent JS Dat implementation?
+
+Not yet. Want to work on it? Start here to learn more: [dat-js](http://github.com/joehand/dat-js).
+
+## Is there an online dataset registry, like GitHub?
+
+Yes, but currently under heavy construction. See [dat.land](http://github.com/datproject/dat.land)
+
+## Is there a desktop application?
+
+Yes, but currently under heavy construction. See [dat-desktop](http://github.com/juliangruber/dat-desktop)
+
+## Do you plan to have Python or R or other third-party language integrations?
+
+Yes. We are currently developing a serialization format (similar to .zip archives) called [SLEEP](/sleep) so that third-party libraries can read data without reimplementing all of hyperdrive (which is node-only).
diff --git a/docs/how-dat-works.md b/docs/how-dat-works.md
index c4899af..64dc7fb 100644
--- a/docs/how-dat-works.md
+++ b/docs/how-dat-works.md
@@ -50,9 +50,9 @@ After feeding the file contents through the chunker, we take the chunks and calc
0 2 4 6
```
-Want to go lower level? Check out [How Hypercore Works](hyperdrive.md#how-hypercore-works)
+Want to go lower level? Check out [How Hypercore Works](https://github.com/datproject/docs/blob/master/docs/hyperdrive_spec.md#how-hypercore-works)
-When two peers connect to each other and begin speaking the Hyperdrive protocol they can efficiently determine if they have chunks the other one wants, and begin exchanging those chunks directly. Hyperdrive gives us the flexibility to have random access to any portion of a file while still verifying the other side isnt sending us bad data. We can also download different sections of files in parallel across all of the sources simultaneously, which increases overall download speed dramatically.
+When two peers connect to each other and begin speaking the hyperdrive protocol they can efficiently determine if they have chunks the other one wants, and begin exchanging those chunks directly. Hyperdrive gives us the flexibility to have random access to any portion of a file while still verifying the other side isn't sending us bad data. We can also download different sections of files in parallel across all of the sources simultaneously, which increases overall download speed dramatically.
## Phase 4: Data archiving
@@ -68,4 +68,4 @@ Because Dat is built on a foundation of strong cryptographic data integrity and
## Implementations
-This covered a lot of ground. If you want to go deeper and see the implementations we are using in the [Dat command-line tool](https://github.com/maxogden/dat), go to the [Dependencies](ecosystem) page
+This covered a lot of ground. If you want to go deeper and see the implementations we are using in the [Dat command-line tool](https://github.com/maxogden/dat), go to the [Dependencies](/ecosystem) page.
diff --git a/docs/welcome.md b/docs/welcome.md
deleted file mode 100644
index 51d7e5e..0000000
--- a/docs/welcome.md
+++ /dev/null
@@ -1,27 +0,0 @@
-# dat
-
-Dat is a decentralized data tool for distributing data small and large.
-
-[![#dat IRC channel on freenode](https://img.shields.io/badge/irc%20channel-%23dat%20on%20freenode-blue.svg)](http://webchat.freenode.net/?channels=dat)
-[![datproject/discussions](https://badges.gitter.im/Join%20Chat.svg)](https://gitter.im/datproject/discussions?utm_source=badge&utm_medium=badge&utm_campaign=pr-badge&utm_content=badge)
-[![docs](https://img.shields.io/badge/Dat%20Project-Docs-green.svg)](http://docs.dat-data.com)
-
-## About Dat
-
-Documentation for the Dat project is available at [docs.dat-data.com](http://docs.dat-data.com).
-
-### Key features:
-
- * **Live sync** folders by sharing files as they are added to the folder.
- * **Distribute large files** without copying data to a central server by connecting directly to peers.
- * **Intelligently sync** by deduplicating data between versions.
- * **Verify data integrity** using strong cryptographic hashes.
- * **Work everywhere**, including in the [browser](https://github.com/datproject/dat.land) and on the [desktop](https://github.com/juliangruber/dat-desktop).
-
-Dat embraces the Unix philosophy: a modular design with composable parts. All of the pieces can be replaced with alternative implementations as long as they implement the abstract API.
-
-### Ways to Use Dat
-
- * [Dat CLI](https://github.com/maxogden/dat): command line tool
- * [Dat Desktop](https://github.com/juliangruber/dat-desktop/): desktop application
- * [dat.land](https://github.com/datproject/dat.land): website application \ No newline at end of file