|author||bnewbold <email@example.com>||2016-06-11 17:09:32 -0400|
|committer||bnewbold <firstname.lastname@example.org>||2016-06-11 17:09:32 -0400|
clean up networking folder
Diffstat (limited to 'ideas')
1 files changed, 108 insertions, 0 deletions
diff --git a/ideas/MOSS.page b/ideas/MOSS.page
new file mode 100644
@@ -0,0 +1,108 @@
+Many User Operating System Stuff
+:Author: Bryan Newbold
+Moss is a vague concept I have for an operating-system-like system that
+attempts to realize some of the promises of distributed universal computing
+and user management, continuation-based serializable programming, persistent
+data accessibility, file and process versioning, central foreign function
+management, and code/content distribution. It'll be easy!
+Moss would probably start as "stuff": middleware, userland applications,
+utilities, and native shell and GUI interfaces. It could also be a
+separate hosted virtual machine, a monolithic application, a kernel
+extension, or ultimately run alone over a high-performance shim host OS.
+Distribution would be self-hosting and viral: users would replicate a copy
+of the system from a friend instead of from a central server, patches
+and applications would be distributed word-of-mouth, and trust networks
+would form naturally via this distribution. Customization and feature sets
+would be passed on, which makes it likely that a user would receive a
+system already tweaked for their own needs and computing knowledge level.
+*Existing Projects:* Inferno, Xen, VMware, Java, GNU/*
+Universal, distributed file system
+The core of the system would be a universally accessible identity-based
+operating system. Some authoritative domain would probably be required, but
+public identity brokers would allow anonymous identities. "Strong
+Cryptography" is a goal, allowing a user's content to be hosted/cached
+on third-party machines in encrypted form. The real challenge, of course,
+is a flexible crypto system that can be transitioned or upgraded, if a flaw
+is discovered, without total data loss.
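One common way to make the crypto layer upgradeable is envelope encryption: bulk data is encrypted once under a small per-file data key, and only that key is wrapped under the current master key, so rotating or replacing the master key re-wraps a few bytes instead of re-encrypting the whole store. A minimal sketch (the XOR counter-mode keystream here is a toy stand-in for a real cipher, not audited crypto):

```python
import hashlib
import secrets

def keystream_xor(key: bytes, data: bytes) -> bytes:
    # Toy SHA-256 counter-mode keystream XOR; encryption and decryption
    # are the same operation. Illustration only, not a real cipher.
    out = bytearray()
    for off in range(0, len(data), 32):
        pad = hashlib.sha256(key + off.to_bytes(8, "big")).digest()
        out.extend(b ^ p for b, p in zip(data[off:off + 32], pad))
    return bytes(out)

# Bulk data is encrypted once, under a per-file data key...
master_v1 = secrets.token_bytes(32)
data_key = secrets.token_bytes(32)
ciphertext = keystream_xor(data_key, b"user file contents")
wrapped = keystream_xor(master_v1, data_key)   # data key wrapped by master

# ...so "upgrading" the master key means re-wrapping 32 bytes,
# not touching the (possibly huge, third-party-hosted) ciphertext.
master_v2 = secrets.token_bytes(32)
recovered = keystream_xor(master_v1, wrapped)      # unwrap with old master
wrapped_v2 = keystream_xor(master_v2, recovered)   # re-wrap with new master

assert keystream_xor(keystream_xor(master_v2, wrapped_v2),
                     ciphertext) == b"user file contents"
```

The same shape works when the "upgrade" is a whole new algorithm: only the key-wrapping layer changes, and the stored data survives.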
+My dream path would look something like::
+From the application end there would be no concept of "local" or "remote"
+files to a particular machine, though perhaps some feedback on access time.
+So, for instance, once tokens and authentication are handled, user utilities
+like ``mv`` or ``cat`` could be applied, instead of ``scp`` or ``rcat``.
+Versioning, write locks, etc would have to be considered.
+*Existing projects:* OpenAFS, Freenet, ssh, Kerberos, git
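A sketch of the "no local vs. remote" idea: utilities like ``cat`` and ``mv`` are written once against a store interface, and a resolver maps identity-based paths onto whichever store actually holds the bytes. The ``ident:`` prefix and ``resolve`` helper are invented here for illustration:

```python
class Store:
    """Minimal key/value store; stands in for any backend."""
    def __init__(self):
        self.files = {}
    def read(self, path):
        return self.files[path]
    def write(self, path, data):
        self.files[path] = data
    def delete(self, path):
        del self.files[path]

# Two mounts standing in for "this machine" and "somewhere on the network".
MOUNTS = {"local": Store(), "ident": Store()}

def resolve(path):
    # "ident:bryan/notes.txt" -> (network store, "bryan/notes.txt")
    prefix, _, rest = path.partition(":")
    return MOUNTS[prefix], rest

def cat(path):
    store, p = resolve(path)
    return store.read(p)

def mv(src, dst):
    # One mv for every case; it never knows which stores are remote.
    s_store, s = resolve(src)
    d_store, d = resolve(dst)
    d_store.write(d, s_store.read(s))
    s_store.delete(s)

MOUNTS["local"].write("notes.txt", b"draft")
mv("local:notes.txt", "ident:bryan/notes.txt")  # same verb, "remote" target
assert cat("ident:bryan/notes.txt") == b"draft"
```

A real version would need exactly the versioning and write-lock handling mentioned above, hidden behind the same interface.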
+The state/continuation/environment of a running program or chain of
+programs should be a "first level object": a bundle of data like any other
+that can be transmitted, copied, and stored away for later. A user should
+be able to drag an entire application running on a desktop computer
+onto their laptop when they need to travel, or from laptop to workstation
+if they need additional computing power. Distributed computing could be
+implemented by bundling up applets that are shot off to a cluster or
+higher performance computer for processing, and the result state of the
+program would simply be bundled back to the requesting client. Such bundles
+wouldn't be very large: data would be stored on the distributed filesystem,
+which appears identical (*exactly?*) to every node on the network.
+Properly written, such a serializable system could also lead to performance
+and power consumption savings by swapping idle programs and processes to
+disk, or let low-usage nodes shift their processes off to other nodes
+and power down.
+*Existing Projects:* Lisp, Stackless
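The "bundle up a running program" idea can be sketched with any computation whose entire state is plain data: run it partway, serialize it, move the bytes, rehydrate, and resume. A toy Python version using ``pickle``:

```python
import pickle

class Task:
    """A computation whose entire state is ordinary, serializable data."""
    def __init__(self, total):
        self.i, self.total, self.acc = 0, total, 0
    def step(self):
        # Do one unit of work; return True when finished.
        if self.i < self.total:
            self.acc += self.i
            self.i += 1
        return self.i >= self.total

t = Task(10)
for _ in range(4):
    t.step()                  # run partway on the "desktop"

bundle = pickle.dumps(t)      # bundle up the live state...
t2 = pickle.loads(bundle)     # ...and rehydrate it on the "laptop"
while not t2.step():          # resume exactly where it left off
    pass

assert t2.acc == sum(range(10))
```

The bundle is small because it carries only the control state; in the system described above, bulk data would stay on the distributed filesystem, which looks the same from either machine.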
+Foreign Function Management
+It would be nice to see a move away from the library model for shared
+code to a more flexible/dynamic foreign function interface that would
+allow any appropriate code to announce its availability to other
+applications regardless of version, platform, coding language, etc.
+This would be a high-level feature, not intended to replace kernel level
+operations (read/write) but to make package/library management easier
+(it doesn't matter if an image conversion function is coming from a video
+editing package or libpng as long as it reads a raw array and returns
+a binary stream).
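A sketch of what such announcement-based function management might look like: providers register under a capability description rather than a library name, and callers ask for the capability. The capability-string convention and registry shape are invented here for illustration:

```python
# Hypothetical capability registry: functions announce what they can do,
# and callers never name a specific library.
REGISTRY = {}

def provides(capability):
    def register(fn):
        REGISTRY.setdefault(capability, []).append(fn)
        return fn
    return register

@provides("image/raw -> image/png")   # could come from libpng...
def encode_a(pixels):
    return b"PNG:" + bytes(pixels)

@provides("image/raw -> image/png")   # ...or from a video editing package
def encode_b(pixels):
    return b"PNG:" + bytes(pixels)

def call(capability, *args):
    # The caller cares only that *some* provider reads a raw array
    # and returns a binary stream.
    return REGISTRY[capability][0](*args)

assert call("image/raw -> image/png", [1, 2, 3]) == b"PNG:\x01\x02\x03"
```

Version, platform, and language differences would hide behind the capability boundary the same way the two providers do here.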
+There's room for dynamic optimization here: if a program realizes its
+native string manipulation library sucks for 5 MB+ datasets, it could
+look through what's available and see if there's a better one.
+*And,* this too could be distributed, allowing super easy access to
+distributed computing resources; underutilized nodes could make their
+functions available to nearby nodes, or a machine with tons of matrix
+crunching silicon (e.g. high-end video cards) could swap work units
+with a machine with a dedicated crypto chip or 64-bit+ processor.
+*Existing Projects:* Script-Fu from GIMP
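The dynamic-optimization idea can be sketched as benchmark-driven dispatch: given several providers of the same capability, time each on a representative sample and route future work to the winner. ``best_provider`` is a made-up helper name:

```python
import time

def concat_naive(parts):
    # Repeated += concatenation; often slower on large inputs.
    s = ""
    for p in parts:
        s += p
    return s

def concat_join(parts):
    return "".join(parts)

def best_provider(providers, sample):
    # Time each provider once on a sample input; pick the fastest.
    def cost(fn):
        t0 = time.perf_counter()
        fn(sample)
        return time.perf_counter() - t0
    return min(providers, key=cost)

sample = ["x"] * 50_000
fast = best_provider([concat_naive, concat_join], sample)
assert fast(sample) == "x" * 50_000   # whichever wins, the result is the same
```

Because every provider satisfies the same capability, the swap is invisible to the caller, and the same trick extends to shipping work to a nearby node whose "provider" happens to run on better silicon.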