Diffstat (limited to 'posts')
-rw-r--r-- | posts/2016/elm-everything-broken.md | 4
-rw-r--r-- | posts/2016/juliacon.md | 2
-rw-r--r-- | posts/2016/new-server-2016.md | 2
-rw-r--r-- | posts/2019/bike-sf-la.md | 2
-rw-r--r-- | posts/2019/three_spirits_libre_software.md | 2
-rw-r--r-- | posts/2020/cascade-volcanoes.md | 56
-rw-r--r-- | posts/2022/atproto_thoughts.md | 97
-rw-r--r-- | posts/2024/atproto_progress.md | 137
-rw-r--r-- | posts/biblio-metadata-collections.md | 2
-rw-r--r-- | posts/merkle-design.md | 6
-rw-r--r-- | posts/modelthing-background.md | 79
11 files changed, 339 insertions, 50 deletions
diff --git a/posts/2016/elm-everything-broken.md b/posts/2016/elm-everything-broken.md index a9978d1..58b848f 100644 --- a/posts/2016/elm-everything-broken.md +++ b/posts/2016/elm-everything-broken.md @@ -321,7 +321,7 @@ still allowing rapid evolution of a package "ecosystem". Cargo was designed by feeling are usually that system-wide package managers (like Debian's `apt`) are underappreciated by many young-but-not-bleeding-edge projects, but acknowledge that there probably is also a need for higher tempo cross-platform project -dependency mangement for non-library projects (eg, desktop applications and web +dependency management for non-library projects (eg, desktop applications and web apps). Ironically (given the difficulty I had installing it), the Elm language's @@ -348,7 +348,7 @@ For [example](https://gist.github.com/badboy/a302dd0c9020e5759240): defaultOptions : Html.Events.Options onWithOptions : String -> Html.Events.Options -> Json.Decode.Decoder a -> (a -> Signal.Message) -> Html.Attribute -This API change information is then used to *[programatically enforce][7]* the +This API change information is then used to *[programmatically enforce][7]* the semantic versioning rules for submissions to the Elm language library archive and prevent a whole class of simple but annoying breakages due to unexpected API changes. It can't detect *every* breaking change (eg, those which are diff --git a/posts/2016/juliacon.md b/posts/2016/juliacon.md index 52751a9..c0e916a 100644 --- a/posts/2016/juliacon.md +++ b/posts/2016/juliacon.md @@ -287,7 +287,7 @@ difficult 1.0 process first. It was mentioned that 0.6 would be the last of the things should generally be backwards compatible. Separate from Stefan's talk, there was a short overview of progress on the next -iteration of the Julia package and dependency manger, called Pkg3. The goals +iteration of the Julia package and dependency manager, called Pkg3. 
The goals were described as "a mash-up of virtualenv and cargo": virtualenv is a tool for isolating per-application dependencies and toolchains in Python, and Cargo is is the Rust dependency manager and build tool (which is also used in a diff --git a/posts/2016/new-server-2016.md b/posts/2016/new-server-2016.md index 8f2245b..8eb5449 100644 --- a/posts/2016/new-server-2016.md +++ b/posts/2016/new-server-2016.md @@ -47,7 +47,7 @@ settled yet: I haven't moved email, and I'm not sure if I'll stick with [pelican]: http://blog.getpelican.com/ [linode]: https://www.linode.com -[digial ocean]: https://digitalocean.com +[digital ocean]: https://digitalocean.com [infra]: http://git.bnewbold.net/infra/ [mediagoblin]: http://mediagoblin.org diff --git a/posts/2019/bike-sf-la.md b/posts/2019/bike-sf-la.md index 6518e87..e72b9d4 100644 --- a/posts/2019/bike-sf-la.md +++ b/posts/2019/bike-sf-la.md @@ -127,7 +127,7 @@ daughter/father pair). After passing Hearst Castle, we started slowly re-entering denser civilization, passing the beach town of Cayucos and finally Moro Bay. I drove to Hearst -Castle one many years ago with friends, but neither Lucy nor I had ever been +Castle once many years ago with friends, but neither Lucy nor I had ever been south of there through LA. Moro bay was an unexpected delight: the huge rock is surreal, like a fantasy novel, and the hiker-biker campsite in town was a delight, with hot showers and a generous area to ourselves under giant diff --git a/posts/2019/three_spirits_libre_software.md b/posts/2019/three_spirits_libre_software.md index ecd12ff..e6b3b13 100644 --- a/posts/2019/three_spirits_libre_software.md +++ b/posts/2019/three_spirits_libre_software.md @@ -25,7 +25,7 @@ the goals and vision members are pursuing. I feel like I spend a lot of time re-telling my own version of "what we're really trying to achieve here" and disclaiming strawman arguments. 
Sometimes these discussions are with critics, but just as often they are with disillusioned contributors who feel like they -are loosing the fight. Having these in a written form is something coherent to +are losing the fight. Having these in a written form is something coherent to point to, and also gives me a framework to gauge progress (or lack there of) in the future. diff --git a/posts/2020/cascade-volcanoes.md b/posts/2020/cascade-volcanoes.md new file mode 100644 index 0000000..778c44d --- /dev/null +++ b/posts/2020/cascade-volcanoes.md @@ -0,0 +1,56 @@ +Title: Trip Report: Cascade Volcanoes +Author: bnewbold +Date: 2020-07-19 +Tags: trip-report, biking +Status: draft + + +- route overview + => google maps: https://goo.gl/maps/YhkzzWePNQgbbE2m6 +- lassen hike + => cinder cone + => bike ride looks good + => sulfur + => covid-19 masks + => lassen summit +- lava tubes +- internment camp +- crater lake +- sisters (mt), bend, smith rocks +- mckenzie pass +- painted hills + => commet +- john day area + => ride-across-oregon guy (!) 
+- hops, rainier over cascades, hot, seattle cool <3 + +<!-- single photo template +<center> +<a href="/photos/2019/sfla/DSC00489.JPG.html"> + <img src="/static/fig/2019/DSC00489.JPG" alt="DSC00489.JPG" title="DSC00489.JPG" width=750px"> +</a> +</center> +--> + +<!-- thumbnail template +<div> +<a href="/photos/2019/sfla/DSC00610.JPG.html"> + <img src="/static/fig/2019/DSC00610.thumb.JPG" alt="DSC00594.JPG" title="DSC00594.JPG" width="245px"> +</a> +<a href="/photos/2019/sfla/DSC00612.JPG.html"> + <img src="/static/fig/2019/DSC00612.thumb.JPG" alt="DSC00587.JPG" title="DSC00587.JPG" width="245px"> +</a> +<a href="/photos/2019/sfla/DSC00618.JPG.html"> + <img src="/static/fig/2019/DSC00618.thumb.JPG" alt="DSC00618.JPG" title="DSC00618.JPG" width="245px"> +</a> +</div> +--> + +<!-- sidebar template +<div class="sidebar"> +The <a href="https://www.adventurecycling.org/routes-and-maps/adventure-cycling-route-network/pacific-coast/">Adventure Cycling Association</a> maps we used on this trip are +great! They can be read at a glance, are well partitioned, and cover in-city +routes well. I find phones very distracting, and love being able to navigate by +map and bike odometer instead. +</div> +--> diff --git a/posts/2022/atproto_thoughts.md b/posts/2022/atproto_thoughts.md new file mode 100644 index 0000000..9696abb --- /dev/null +++ b/posts/2022/atproto_thoughts.md @@ -0,0 +1,97 @@ +Title: What is atproto.com good for? +Author: bnewbold +Date: 2022-11-23 +Tags: tech, dweb + +Bluesky released early documentation for the ["AT +Protocol"](https://atproto.com) (atproto) a few weeks ago, and I've been +noodling around with it. Technically, it strikes an appealing balance between +rigid cryptographically-signed content-addressable storage on the one hand, and +familiar web-friendly schemas and integrations on the other. But at an +ecosystem level, there are already a bunch of existing open social media +projects. Does atproto bring anything interesting to the table? 
How might it fit +in compared to other similar protocols? + +First, as quick background, atproto is a dweb social media protocol which +aspires to replace Twitter as a centralized platform. Bluesky, the organization +developing it, is a small company with history intertwingled with Jack Dorsey +and Twitter itself. The folks there also have ties to more established dweb +tech projects like IPFS, Scuttlebutt, and dat. + +What sets atproto apart from other dweb and fediverse projects is that it is +explicitly trying to support some of the “big world” features of Twitter. This +means global discovery and “leaderboard” metrics (“likes”, “followers”), and +also means “broadcast” content that gets rapidly replicated to millions +(billions?) of users. It also supports, to some degree, the ability to +redistribute and discuss pieces of content outside of their original context +(“context collapse”). + +I myself mostly dislike these properties for social media, but I do think they +have positive social value in some cases. For example, short-form official +announcements (eg, local weather warnings, flash flood alerts, public transit +disruption), or short-form journalism (eg, live-blogging breaking events). +I do not have a Twitter account, but some of the use cases that I personally +still end up going there for today include local breaking news (what is that smoke +cloud in my city, what is happening at a protest); seeing what “anybody” is +saying about a project (eg, search by project name or domain name); checking if +people or institutions are A Thing (what do they say in their public feed, who is +interacting with them); and generally what individual people or institutions +are up to. These are all "big world" use cases that can't be met by the circle +of folks a couple social hops from me. + +It does feel to me that some of these use-cases were well served by older web and +indieweb tech, like (micro)blogs and RSS.
Especially for the last case (“what +are people up to”), which depending on the person may best be found on a +homepage or blog. Maybe if social platforms were more open and had better +sitemap tech then generic search engines could provide the big world features? + +But many current dweb/fediverse projects try to specifically steer away from +“big world” aggregations, and instead focus on “small world” in-community +discussion. They do provide the technical ability to engage across communities +and with the broader public. But I suspect many want to avoid rapid +aggregation, leaderboards, and global discovery. + +My take is that atproto should explicitly double down on these use cases, +because others are not. The project should also try to support existing +(indie)web protocols like RSS and (possibly) ActivityPub. I don’t think they +should directly try to support private messaging (leave that to Signal and +Matrix, maybe with some identity/contact level interop), or forum-like +small-world discussion with community-level norms (leave that to Discourse for +web-index-able stuff, or SSB, or Mastodon). + +Speaking of ActivityPub, I see two main contrasts against atproto. The first is +that atproto specifies how user content should be canonically **stored**, while +ActivityPub specifies **event notifications** between servers. An analogy is +that ActivityPub is more like RSS (in which content may be truncated or +otherwise non-canonical in an RSS feed), while atproto is more +like a git repo (original content is transferred in canonical form; there is +some awkwardness about large blobs/media). I think the atproto way makes it +easier for an ecosystem to be interoperable in the long run, reduces the stress +and obligations of hosting content on servers (because it is easy to back up and +migrate), and empowers individual users.
The other big contrast is +full-strength account migration support in atproto, which works even without +any participation by former hosting providers. + +This last feature, building on [decentralized identifiers +(DIDs)](https://en.wikipedia.org/wiki/Decentralized_identifier), is in my view +the least mature and riskiest part of the currently proposed system. DID is a +W3C specification, but really feels like it comes from the blockchain/web3 +world. did:web does exist and should work fine, but itself is a big nothing +burger because it does not enable the interesting account migration features +that a true DID would. It should be possible to implement something like +[Certificate +Transparency](https://en.wikipedia.org/wiki/Certificate_Transparency) to do +global-trusted and rapidly resolvable DIDs without wasteful proof-of-whatever, +but that would require an effort and institution like Let’s Encrypt did for SSL +certificates. It is unclear if or when that might actually happen. As it stands +today DID has a pile of good intentions and standardization scaffolding, but in +reality is just blockchain and vaporware. + +--- + +As part of noodling around with the protocol, I wrote a simple partial +command-line tool and personal data server (PDS), +[adenosine](https://gitlab.com/bnewbold/adenosine). You can check out the +minimal web interface at the examples +[pierre-manard.robocracy.org](https://pierre-manard.robocracy.org) and +[voltaire.demo.adenosine.social](https://voltaire.demo.adenosine.social). diff --git a/posts/2024/atproto_progress.md b/posts/2024/atproto_progress.md new file mode 100644 index 0000000..6a1a0c6 --- /dev/null +++ b/posts/2024/atproto_progress.md @@ -0,0 +1,137 @@ +Title: Progress on atproto Values and Value Proposition +Author: bnewbold +Date: 2024-08-12 +Tags: tech, dweb + +It has been a wild 18 months working at Bluesky on atproto. 
Goalposts and narratives have a way of getting pulled around by crises and the priorities of the week, making it easy to lose track of the big picture. In this post, I'm dredging up some of my own early notes on goals and values for the protocol, to see how much progress has been made. + +When I started in January 2023 the company was shifting from an R&D project to building out a real-world product, while at the same time finishing design of large social-technical components of the protocol. As a tiny team we designed and implemented the repo synchronization protocol, core moderation system, and "app-view" concept all in a couple months, while also adding basic product features (search! Android!) and growing the early community to tens of thousands of users. + +Fast-forwarding a year and a half, today the network is openly federated with hundreds of PDS instances. It is home to millions of user accounts, there are thousands of independent curated feeds, and dozens of independent labeling services are in operation. + +I should be clear that the roadmap below is a personal and historical artifact. This was never a roadmap for the whole team. In particular, the team's overall philosophy is that developing and prioritizing "product" and "protocol" together will lead to better outcomes. I'm taking a more "protocol" perspective here. + + +## Open/Libre Protocol + +**Goals:** The protocol itself should have a written specification, with no restrictions on reuse. Independent parties should be able to implement it and interoperate with no concerns about intellectual property. The most popular and influential implementations and service providers should have good (if not always perfect) compliance with the specification. + +The future development of the protocol should have a clear, neutral, trustworthy governance process. There should be resiliency against "embrace/extend/extinguish" by any large providers.
+ +**Milestones and Capabilities**: + +- ✅ open written specification of existing protocol +- ✅ open source software reference implementation +- ✅ open license on protocol spec text +- ⬜ open compliance/interoperation test suite +- ⬜ independent protocol governance (eg, a standards body) + +**Summer 2024 Status**: Doing an OK job keeping production reality and written specs synchronized; gaps between real-world behavior and documentation are common with this sort of project. There are some areas of written specs to be completed or updated. Standards body work will require more independent stakeholders to get started. + + +## Credible Exit + +**Goals:** There should be no technical or social single-point-of-failure for the overall protocol and network. There should be no single organization or individual who can entirely exclude others from the ecosystem (though the ecosystem may *collectively* exclude bad actors). There should be multiple independent interoperating service providers for each infrastructure component. + +**Milestones and Capabilities**: + +- ✅ open firehose and public repositories +- ✅ open/libre PDS implementation which supports self-hosting +- ✅ open/libre Relay implementation +- ✅ open/libre AppView implementation +- ✅ open/libre client app implementation +- ✅ account migration functionality in production network +- ✅ open federation in the production network +- ⬜ `did:plc` transparency log and multiple replica/witness/operator parties +- ⬜ `did:plc` as a separate legal entity +- ⬜ multiple independent PDS providers with open registration +- ⬜ multiple independent Relay services +- ⬜ multiple independent AppView services + +**Summer 2024 Status**: Great progress on proven technical functionality and available open software. Serious independent infrastructure operators are lacking for several components, but it is still early days.
PLC socio-technical path forward has solidified compared to early 2023, and centralization risk with PLC is manageable. + + +## Own Your Identity and Data + +**Goals:** Individual account holders should have ultimate control over their network identity, and retain ownership of content they create and contribute to the network. + +**Milestones and Capabilities**: + +- ✅ repo export (CAR download) and parse/dump tools +- ✅ `did:plc` supports rotation keys +- ✅ custom domain handles (own your handle) +- ✅ accessible repo export (in-app download CAR button) +- ✅ personal private data export mechanism (eg, preferences) +- ⬜ data reuse intent mechanism (something like `robots.txt`) +- ⬜ accessible `did:plc` identity control (in-app and/or supporting tools) + +**Summer 2024 Status**: This has been a bright spot for the protocol, with most core functionality enabled early on. It would be great for more users to have recovery keys registered for their PLC identities; this will require UI/UX work and encouragement. Though the functionality is still a win even if adopted only by high-stakes or institutional accounts. + + +## Algorithmic Choice + +**Goals:** Users should have individual control over what content they see and what is recommended to them. New entrants (communities, companies, etc) should be able to provide curation and discovery services. + +**Milestones and Capabilities**: + +- ✅ transparent timeline "algorithm" (open source implementation) +- ✅ feed generator system +- ✅ lists (for content curation) +- ✅ easier developer access to existing public mod labels (eg, label firehose) +- ✅ hashtags +- ✅ interaction feedback API for feed generators +- ⬜ cheaper firehose subscription at scale (filters/variations of existing firehose) + +**Summer 2024 Status**: The protocol has been fairly successful in this area. The feed tooling ecosystem in particular has made feed curation relatively accessible, with tens of thousands of feeds created.
+ + +## Composable Multi-Party Moderation + +There should not be a single organization or party who has unique control of moderation policy and enforcement across the entire global network. It should be possible for new entrants to participate in moderation of any subset of the existing network. + +This area is central to the entire project! + +**Milestones and Capabilities**: + +- ✅ core labeling system and moderation SDK +- ✅ individual interaction controls (thread gates, blocks, mutes) +- ✅ account lists for moderation (mod lists) +- ✅ moderation service protocol support (labeling and reports) +- ✅ Ozone mod service software open source and self-hostable +- ⬜ inter-provider infrastructure takedowns: delegation and notifications +- ⬜ scenes, communities, or similar “structure” to bsky app network, as scopes for moderation + +**Summer 2024 Status**: The independent labeler service system launched later than other components. Independent moderation services have seen successes and failures, and it will probably take more time to see where the ecosystem lands. + + +## Foundation for New Apps + +atproto should be a reasonable choice for small teams to build new applications, even if they don't particularly care about decentralized protocols. That is, social graph network effects and features provided by the protocol ("problems solved", "value created") should outweigh added complexity, friction, or inefficiencies ("problems introduced", "costs"). 
+ +**Milestones and Capabilities**: + +- ✅ protocol architecture designed to support multiple Lexicons and apps +- ✅ allow non-bsky records in repos (PDS+Relay) +- ✅ enable PDS proxying to arbitrary services (eg, new AppView) +- ⬜ better auth mechanism (OAuth) +- ⬜ docs, resources, examples for creating new Lexicons and end-to-end applications +- ⬜ generic Lexicon validation (resolution process, etc) +- ⬜ SDK and infra shown to work well (first-class) with non-bsky Lexicons + +**Summer 2024 Status**: This is the area we have invested the least visible time and effort in to date, though this was always going to be a later-stage goal. A lot of early work went into ensuring this would be possible, but only recently has it even been possible to write arbitrary records or proxy to independent services. There are a growing number of early-adopter apps, the developers of which have had to reverse-engineer many aspects of the protocol. + + +## How Did It Go? + +The huge waves of press, user growth, and infrastructure demands in 2023 were challenging, but more than we could have hoped for. An intensely engaged community moved in with strong opinions and high expectations around product features and community dynamics. It has been an amazing opportunity to ship high-concept network features to a large and more-or-less receptive audience. We've had a never-sleeping, ever-mischievous developer scene watching every git commit and repeatedly front-running our product launches. + +A large contingent of the network does not give a shit about protocols, adversarial interop, enshittification, protocol bridging, or moderation across geo-political and cultural borders. I'm so proud that those folks have had reason to stick around. I think that set of concerns is important and will be positive differentiators for atproto and Bluesky in the long run.
But realistically, we need to create a fun no-compromises environment where people want to invest time and energy, or none of it will matter very much. + +In retrospect, it feels like the big "launch" milestones relevant to the protocol were: + +- [custom domain handles](https://bsky.social/about/blog/4-28-2023-domain-handle-tutorial) in April 2023 +- [feed generators](https://bsky.social/about/blog/7-27-2023-custom-feeds) in July 2023 +- [open federation](https://bsky.social/about/blog/02-22-2024-open-social-web) in February 2024 +- [independent labelers and Ozone](https://bsky.social/about/blog/03-12-2024-stackable-moderation) in March 2024 + +Handles and feed generators came fairly easy. There was a federation sandbox in May 2023, and we were close to opening up much sooner than we did, but waited to ensure the servers and moderation systems were robust to trolls and spammers. Finally getting to federation and launching Ozone were long projects, but came out better for the effort, and left me feeling like we had crested the hilltop with most of the original big ideas in place. We are very close to landing OAuth, another long project, which along with other developer polish could end up being a symbolic milestone for building independent apps. diff --git a/posts/biblio-metadata-collections.md b/posts/biblio-metadata-collections.md index d7f8713..91bafe8 100644 --- a/posts/biblio-metadata-collections.md +++ b/posts/biblio-metadata-collections.md @@ -12,7 +12,7 @@ I've recently been lucky enough to start working on a new big project at the [Internet Archive][]: collecting, indexing, and expanding access to research publications and datasets in the open world. 
This is perhaps *the* original goal of networked information technology, and thanks to a decade of hard -work by the Open Access movement it feels like intertia +work by the Open Access movement it feels like inertia [is building][nature-elsevier] towards this one small piece of "universal access to all knowledge". diff --git a/posts/merkle-design.md b/posts/merkle-design.md index b388dab..707e194 100644 --- a/posts/merkle-design.md +++ b/posts/merkle-design.md @@ -15,7 +15,7 @@ My interest in these systems is as infrastructure for the commons of cultural and intellectual works: making it cheap and easy to publish content that becomes easily accessible and woven in to the web of reference and derivative works. From my work at the Internet Archive collecting Open Access -publications and datasets, I have a particular interest in dereferencable links +publications and datasets, I have a particular interest in dereferenceable links (and/or citations) that will work in the future, [wp-merkle]: https://en.wikipedia.org/wiki/Merkle_tree @@ -27,7 +27,7 @@ system: the wire. If every distinct file can be identified by only a single, reproducible name, then discovery, indexing, and de-duplicaiton is made easier. If the same file can end up with different names, then that file might be -transfered or stored separately by default; this creates pressure for the +transferred or stored separately by default; this creates pressure for the application layer to support the concept of "many identifiers for the same file", and requires additional coordination at scale. @@ -94,7 +94,7 @@ folks love to think about re-inventing "everything" on top of such a system. I think this is because git supplies specific semantic features people love, while being deeply entangled with files and file systems. Computer engingeering is All About Files, and git is both made out of files (look in .git; it's -simple files and directories all the way down!) and accomodating files. 
+simple files and directories all the way down!) and accommodating files. Consider: diff --git a/posts/modelthing-background.md b/posts/modelthing-background.md index 9234f70..51bd2db 100644 --- a/posts/modelthing-background.md +++ b/posts/modelthing-background.md @@ -4,56 +4,55 @@ Date: 2020-06-28 Tags: modelthing Status: draft -This post describes the potential I see for collaborative infrastructure to -agument group research and understanding of mathematical models. This type of -model, consisting of symbolic equations than can be manupulated and computed by -both humans and machines, have historically been surprisingly effective at -describing the natural world. A prototype exploring some of these ideas is -running at [modelthing.org](https://modelthing.org). - -After describing why this work is interesting and important to me personally, I -will describe a vision of what augmentation systems might look like, describe -some existing tools, then finally propose some specific tools to build and -research questions to answer. - -Outline - -* personal backstory - => technologist essay - => my previous work -* what would be better? -* existing ecosystem - => latex, mathml - => modelica - => SBML -* proposed system and research questions - => modelthing.org -* reference list +This post describes the potential for collaborative infrastructure to augment +human research and understanding using mathematical models. These models, +consisting of symbolic equations which are semantic and machine-readable, have +historically been "unreasonably effective" at describing the natural world. A +prototype exploring some of these ideas is running at +[modelthing.org](https://modelthing.org). + +After describing why I am personally interested in this work, I will describe a +vision of what augmentation systems might look like, describe some existing +tools, then finally propose some specific tools to build and research questions +to answer. 
## Personal Backstory -*Feel free to skip this section* +*Feel free to skip this section...* Much of my university (undergraduate) time studying physics was spent exploring computational packages and computer algebra systems to automate math. These included general purpose computer algebra or numerical computation systems like -Mathematica, MATLAB, Numerical Recipies in C, SciPy, and Sage, as well as +Mathematica, MATLAB, Numerical Recipes in C, SciPy, and Sage, as well as real-time data acquisition or simulation systems like LabView, ROOT, Geant4, and EPICS. I frequently used an online system called Hyperphysics to refresh my memory of basic physics and make quick calculations of things like Rayleigh scattering, and often wished I could contribute to and extend that website to -more areas of math and physics. In some cases these computational resources +more areas of math and physics. In some cases these computational resources made it possible to skip over learning the underlying methods and math. A symptom of this was submitting problem set solutions typeset on a computer (with LaTeX), then failing to solve the same problems with pen and paper in exams. +<center> +<a href="http://hyperphysics.phy-astr.gsu.edu/hbase/geoopt/refr.html"> + <img src="/static/fig/hyperphysics_index_refraction.png" alt="hyperphysics screenshot" title="hyperphysics screenshot" height="500px"> +</a> +<div class="content_caption"> +Example record in Hyperphysics, which has been ported from Hypercard to the web +</div> +</center> + <div class="sidebar"> <img src="/static/fig/sicm_cover.jpg" width="150px" alt="SICM book cover"><br> +This isn't to say that computers as a pedagogical tool can replace +human mentorship and interaction; the SICM course was also one of the most +instructor-intensive and peer-interactive of any I took. And of course this +learning format will not be best for everybody.
</div> A particularly influential experience late in my education was taking a course -on classical mechanics using the Scheme programing language, taught by the +on classical mechanics using the Scheme programming language, taught by the authors of "Structure and Interpretation of Classical Mechanics" (SICM). The pedagogy of this course really struck a chord with me. Instead of learning how to operate a complex or even proprietary software black box, students learned @@ -63,13 +62,6 @@ confusion or misunderstanding of the physics than computer science. I came to believe while teaching another human is the *best* way to demonstrate deep knowledge of a subject, teaching to a *computer* can be a pretty good start. -<div class="sidebar"> -This isn't to say that computers as a pedagogical tool can replace -human mentorship and interaction; the SICM course was also one of the most -instructor-intensive and peer-interactive of any I took. And of course this -learning format will not be best for everybody. -</div> - Some years later, I found myself at a junction in my career and looking for a larger project to dig in to. I think of myself as a narrative-motivated individual, and was struggling to make a connection between my specific skills @@ -125,10 +117,15 @@ Some best practices: acceptable (and often desirable) for software tools. 
* **Scale up and down** -examples of applying core goal: --> "does veganism make sense" --> COVID-19 modeling --> understand equilibrium finances of large companies/institutions, for the people inside those institutions ("business model") +Examples of applying core goal: + +* "earth systems" and ecosystems +* robotic control systems +* "does veganism make sense" +* COVID-19 modeling +* systems biology +* understand equilibrium finances of large companies/institutions, for the + people inside those institutions (aka, "business model") ## Existing Ecosystem @@ -151,6 +148,8 @@ Proposed system to build: * tooling/systems to combine and build large compound models from components * public wiki-like catalog to collect and edit models +Research questions: + Will mathematics continue to be "unreasonably effective" in the natural sciences as we try to understand larger and more complex systems?