---
format: rst
toc: no
...

====
MOSS
====

--------------------------------
Many User Operating System Stuff
--------------------------------

:Author: Bryan Newbold

Moss is a vague concept I have for an operating-system-like system that
attempts to realize some of the promises of distributed universal computing
and user management, continuation-based serializable programming, persistent
data accessibility, file and process versioning, central foreign function
management, and code/content distribution. It'll be easy!

.. topic:: Implementation

    Moss would probably start as "stuff": middleware, userland applications,
    utilities, and native shell and GUI interfaces. It could also be a
    separate hosted virtual machine, a monolithic application, a kernel
    extension, or ultimately run alone over a high-performance shim host OS.

    Distribution would be self-hosting and viral: users would replicate a
    copy of the system from a friend instead of from a central server,
    patches and applications would be distributed by word of mouth, and
    trust networks would form naturally via this distribution. Customization
    and feature sets would be passed on, which makes it likely that a user
    would receive a system already tweaked for their own needs and computing
    knowledge level.

    *Existing Projects:* Inferno, Xen, VMware, Java, GNU/*

.. topic:: Universal, distributed file system

    The core of the system would be a universally accessible, identity-based
    file system. Some authoritative domain would probably be required, but
    public identity brokers would allow anonymous identities. "Strong
    Cryptography" is a goal, allowing a user's content to be hosted/cached
    on third-party machines in encrypted form. The real challenge, of
    course, is a flexible crypto system that can be transitioned or upgraded
    without total data loss if a flaw is discovered.
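
    One common building block for that kind of upgradeability is envelope
    encryption (key wrapping): each file is encrypted under its own data
    key, and only the small wrapped keys have to be re-encrypted when a
    user's master key is rotated (swapping out a broken algorithm is a
    harder problem). A toy sketch using the Python ``cryptography`` package,
    with names and the single-file scope invented for illustration::

        from cryptography.fernet import Fernet

        master_key = Fernet.generate_key()   # held by the user
        data_key = Fernet.generate_key()     # one per file/object

        # Bulk content is encrypted once, under the data key ...
        ciphertext = Fernet(data_key).encrypt(b"tangerine.mp3 bytes ...")

        # ... and only the tiny wrapped key is bound to the master key.
        wrapped = Fernet(master_key).encrypt(data_key)

        # Rotating the master key just re-wraps the data key; the cached,
        # third-party-hosted ciphertext never has to be touched.
        new_master = Fernet.generate_key()
        rewrapped = Fernet(new_master).encrypt(
            Fernet(master_key).decrypt(wrapped))

        assert Fernet(Fernet(new_master).decrypt(rewrapped)).decrypt(ciphertext)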

    My dream path would look something like::

        /net/user@some.domain.tld/media/ledzep/tangerine.mp3

    From the application end there would be no concept of files being
    "local" or "remote" to a particular machine, though perhaps some
    feedback on access time. So, for instance, once tokens/authentication
    are handled, user utilities like ``mv`` or ``cat`` could be applied,
    instead of ``scp`` or ``rcat``. Versioning, write locks, etc. would
    have to be considered.
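
    As a rough sketch of what that transparency could look like from
    program code (the identities and paths below are invented, and
    authentication is assumed to already be handled), ordinary file APIs
    would be all an application ever touches::

        import shutil

        SRC = "/net/alice@example.org/media/ledzep/tangerine.mp3"
        DST = "/net/bob@example.net/inbox/tangerine.mp3"

        # Reading works the same whether the bytes live on this machine,
        # on a friend's node, or in an encrypted third-party cache.
        with open(SRC, "rb") as f:
            header = f.read(10)

        # "Sending a file to another user" is just an ordinary copy
        # between two paths in the shared namespace.
        shutil.copy(SRC, DST)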

    *Existing Projects:* OpenAFS, Freenet, SSH, Kerberos, Git

.. topic:: Serializable Programs

    The state/continuation/environment of a running program or chain of
    programs should be a "first-class object": a bundle of data like any
    other that can be transmitted, copied, and stored away for later. A
    user should be able to drag an entire application running on a desktop
    computer onto their laptop when they need to travel, or from laptop to
    workstation if they need additional computing power. Distributed
    computing could be implemented by bundling up applets that are shot off
    to a cluster or higher-performance computer for processing, and the
    resulting state of the program would simply be bundled back to the
    requesting client. Such bundles wouldn't be very large: data would be
    stored on the distributed filesystem, which appears identical
    (*exactly?*) to every node on the network.

    Properly written, such a serializable system could also lead to
    performance and power-consumption savings by swapping idle programs and
    processes to disk, or by letting low-usage nodes shift their processes
    off to other nodes and power down.
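
    A minimal sketch of the "bundle" idea in plain Python (real continuation
    capture would need runtime support; here the program's state is just an
    explicitly picklable object, and the class name is invented)::

        import pickle

        class Summer:
            """A toy 'program' whose entire state is a serializable bundle."""

            def __init__(self):
                self.step = 0
                self.total = 0

            def run(self, steps):
                for _ in range(steps):
                    self.step += 1
                    self.total += self.step

        # Do part of the work on one machine ...
        prog = Summer()
        prog.run(1000)

        # ... freeze the live state into a bundle of bytes (store it,
        # ship it to a laptop, send it off to a cluster node), ...
        bundle = pickle.dumps(prog)

        # ... and resume exactly where it left off somewhere else.
        resumed = pickle.loads(bundle)
        resumed.run(1000)
        print(resumed.step, resumed.total)   # 2000 2001000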

    *Existing Projects:* Lisp, Stackless

.. topic:: Foreign Function Management

    It would be nice to see a move away from the library model for shared
    code to a more flexible/dynamic foreign function interface that would
    allow any appropriate code to announce its availability to other
    applications regardless of version, platform, coding language, etc.
    This would be a high-level feature, not intended to replace kernel-level
    operations (read/write) but to make package/library management easier
    (it doesn't matter whether an image conversion function comes from a
    video editing package or from libpng, as long as it reads a raw array
    and returns a binary stream).

    There's room for dynamic optimization here: if a program realizes its
    native string manipulation library sucks for 5 MB+ datasets, it could
    look through the available implementations and see if there's a better
    one. *And,* this too could be distributed, allowing super easy access
    to distributed computing resources; underutilized nodes could make
    their functions available to nearby nodes, or a machine with tons of
    matrix-crunching silicon (e.g. high-end video cards) could swap work
    units with a machine with a dedicated crypto chip or 64-bit+ processor.
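
    A toy sketch of the announce/lookup idea (the registry shape, the
    capability name, and the size-based selection rule are all invented;
    a real system would need type/ABI negotiation, versioning, and trust
    checks)::

        # Each provider announces a capability plus a hint about the input
        # sizes it handles well; callers only ask for the capability.
        REGISTRY = {}

        def announce(capability, provider, min_size=0):
            REGISTRY.setdefault(capability, []).append((min_size, provider))

        def lookup(capability, size):
            # Prefer the provider tuned for the largest inputs we can use.
            usable = [(m, p) for m, p in REGISTRY[capability] if m <= size]
            return max(usable, key=lambda pair: pair[0])[1]

        # Two unrelated packages happen to offer the same conversion ...
        announce("image/raw->png", lambda raw: b"small:" + bytes(raw))
        announce("image/raw->png", lambda raw: b"big:" + bytes(raw),
                 min_size=5_000_000)

        # ... and the caller neither knows nor cares which one serves it.
        convert = lookup("image/raw->png", size=8_000_000)
        print(convert([137, 80, 78, 71]))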

    *Existing Projects:* Script-Fu from the GIMP