December 11, 2018
A few days ago, I shared my blog post, Getting to Xanadu, on Hacker News. I must say, I really enjoy the stories and the comments there. It’s an audience that’s a cut above the typical crowd. Exemplary of the kind of people you find hanging out at HN, a gentleman named Stephan took the time to write some very deep and well thought-out comments on our forthcoming product, Mimix. Here are five important things I learned from that dialog. The unedited email conversation and some final deep thoughts are below.
Note: Stephan isn’t a user yet. We’re pre-release. But he’s certainly the kind of user we’d like to have, so I think of his letter that way.
1. Community Is Everything
Mimix is my baby and the first time I’ve open-sourced a project. I’m not used to doing development in public and it’s taken some getting used to. I was hesitant to bare my soul somewhere like HN because I know the caliber of people who hang out there. But you know what? It’s been very valuable to do so. Especially in open source, a movement based around benefiting the end user, developing a community early on is everything.
2. Users Are Smarter Than You Think
Of course I expected HN readers to be smart, but I was delighted to find that lots of people are interested and well-informed on some of the deeper aspects of our subject matter. Too often, I think, corporations treat users like dummies. There’s a lot of great information out there from people who are interested in your product. It pays to pay attention to it.
3. Hard Ideas Need Easy Explanations
As much as people might get your product, it takes more steps than you think for the message to really be received. If your product is really revolutionary and does something better, it will seem obvious to you. It’s not to your users. I need to work on a lot of my own explanations and make things more clear. Do you?
4. Time Is Of The Essence
Another word for this is “courtesy.” Personally, I hate it when people aren’t responsive. When it comes to software, it’s insulting and it makes you feel like a product has been abandoned. I live by an open inbox and I reply to the people who write me. The result? Better quality stuff to read.
5. Consider Having a Wiki
Putting together this post, which has a lot of valuable information, has been something of a nightmare because none of it is easily indexed or cross-referenced. Since I don’t have Mimix today to write this with, I’m thinking a wiki would be a good idea. At least this way the pieces will get posted in sections that make sense. What do you think?
The Full Email Thread
Stephan’s original message…
Getting to Xanadu: today, we can easily identify a few reasons why Xanadu, apart from any of Ted’s “faults”, has had difficulty becoming real. One major aspect, in my opinion, is that with the advent of Word, DTP and the web, there’s little to no interest in text-related stuff in 2018, even less so money in that area, while at the same time ignoring hypes like AI or blockchain, augmented reality with incredible opportunities lies there untouched (how ironic that there is the 50th anniversary of Doug’s augmentation demo and AR isn’t a topic at all), so one would be stupid to invest in text stuff and ignore the opportunities for big $$$ (supposing one would be able to pull it off). That was very different in Ted’s time. Without the GUI and networking around yet, if people weren’t crunching numbers for a business, text on the computer was a great new idea, and the later 80s-style (dedicated) word processor hype demonstrates that easily. What people had in those days is, again, lost in our days, and only few people know what it is or know that they could want it, and even if so, there are no offers, because it makes no sense economically/investment-wise. Sure, at least there’s the minimalist/distraction-free movement now, I have to spend the time to look through all those isolated, productized, proprietary things but fear that they’re not really usable for an open hypertext infrastructure.
Caption under the Xanadu mockup: Well, but neither Ted nor most other mortal beings had any access to Doug’s screens. Also, Ted claims that he wasn’t aware and didn’t know about Doug up to some point, don’t know if that mockup was earlier or later. Anyway, I think Ivan Sutherland was earlier than Doug with Sketchpad, so the fact that such graphics had been demonstrated by somebody some years earlier doesn’t help any of the other inventors with their own work, if it wasn’t available and not generally known. Remember, Doug got some inspiration from WW2 radar screen renderings.
The 17 Rules of Xanadu, by Ted Nelson
- Every Xanadu server is uniquely and securely identified.
- Every Xanadu server can be operated independently or in a network.
- Every user is uniquely and securely identified.
- Every user can search, retrieve, create and store documents.
- Every document can consist of any number of parts each of which may be of any data type.
- Every document can contain links of any type including virtual copies (“transclusions”) to any other document in the system accessible to its owner.
- Links are visible and can be followed from all endpoints.
- Permission to link to a document is explicitly granted by the act of publication.
- Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (“transclusions”) of all or part of the document.
- Every document is uniquely and securely identified.
- Every document can have secure access controls.
- Every document can be rapidly searched, stored and retrieved without user knowledge of where it is physically stored.
- Every document is automatically moved to physical storage appropriate to its frequency of access from any given location.
- Every document is automatically stored redundantly to maintain availability even in case of a disaster.
- Every Xanadu service provider can charge their users at any rate they choose for the storage, retrieval and publishing of documents.
- Every transaction is secure and auditable only by the parties to that transaction.
- The Xanadu client–server communication protocol is an openly published standard. Third-party software development and integration is encouraged.
The 17 rules, I haven’t looked into them previously, I suspect that they’re in part influenced by the attempt to operate Xanadu as a commercial business. For example rule 1, why does one need that? Only for transcopyright I assume, and there are serious issues with that of course, I didn’t find resolved anywhere yet. Now, IPv6 addresses, MAC addresses and domain name resolving can all be changed, so the unique identification is only a probability, not relying on certificate cryptography and signing, HTTPS fortunately provides that for some servers, if there’s also some pinning going on, but can be done today. I don’t know how blockchains or Bitcoin would help with any of that, probably just added for some buzzword bingo. In general, the network is some sort of Turing test. You connect your computer/terminal/screen via a socket/cable in the wall to the outside world, you submit messages and get responses back, but how can you know that the network and outside world actually exists? Some anti-virus approaches fool a virus for analysis purposes by simulating an entire intranet full of people who send mails to each other and do things, so the behavior of the virus can be observed, while none of the people, services and network nodes actually exist, it’s all simulated on the network interface.
Rule 4, not so sure if that really exists. What about creating documents? How would an ordinary user today create a new document? On a server, yes, he probably doesn’t have his own, so one would become entrapped in Google Docs or Xanadu, that’s a very poor version, worse than creating a paper document. Little bit better with pastebin. Great, for 2018. Searching? Not really. Google searches, and Google finds, and it only shows at all and in the order it feels like, and not what actually has been searched and could be found. The options offered are very, very limited. And we can’t improve on it, because most of the web is unreadable data trash that needs a lot of code to make at least some sense out of it.
Rule 7: Erm…how?
Rule 8 is stated as a matter of fact. No publisher or content provider can prevent people from creating “links” after they’ve published.
Rule 10, one could claim that this exists, because that’s what the URL (+ canonical by extension) is supposed to be/do. Sure, most people misuse it, so we face some issues that Xanadu or whatever system might be awesome if operated with some discipline, but a lot of people are just sloppy, and we didn’t manage to enforce some of the rules, which is why a new, better system will need to be different from the web, and a lot of people who are used to the breaking, sloppy paradigm of the web won’t like a more rigid system (but those who do like, benefit a great deal from it — I guess a global digital library system isn’t for everyone, that’s a big difference in thinking that Xanadu could have become the web. Maybe, to some extent, but some people would have liked to do other things with it than Ted anticipated/anticipates).
The issue with rule 13 is that for transcopyright and transclusion reasons, I think Ted assumes this to be all within Xanadu servers/system. Otherwise, there’s CDNs and what not.
Rule 17, the free software movement didn’t exist at the time (on the other hand, all/most/some software was free per default). Online repository hosting like SourceForge/GitHub/GitLab is fairly new, and Ted observed the “Open Source” community and of course didn’t like their lack of direction and design, so what can he do? There’s also a difference between the protocol and his actual software implementation. The protocol being published and open (don’t know if it’s actually published as a spec or in Literary Machines or somewhere else) allows others to build Xanadu-compatibles (besides the trademark and that Ted would have hated it for business reasons), the protocols are probably more out there to encourage adoption/integration with existing third-party software, not to build compliant server alternatives. Also, again, Udanax Green is out there.
Rule 18, don’t forget that Ted is also big on hypermedia. Doug/Ted interplay and differences is an interesting topic for study in itself.
Rule 23, agree, but consider the implications.
For the conclusion, yes, things can be cobbled together, but part of the beauty of NLS and Xanadu is that those systems are integrated in terms of that all the parts/components are designed to play nicely with each other. I don’t have answers how to do this, but a few ideas how it could work.
Questions of transclusion, problems with copyright despite cheap storage, nonsense of PDF, not to discuss all of that here.
Agree that the business and copyright aspects are a big problem for Xanadu. Micropayments is fine, but enforcing them or copyright or transcopyright is in conflict with the nature of digital and immaterial goods. It would be a nice world if we could all agree and make it work, but reality is never that simple.
Building the Missing Pieces: it’s not only to build what’s missing, they need to be built “right”, also the things that are existing might not be “right” enough, and then a user needs to be found for it (not to call it a “market” at all, not to think about a “salable product” at all either).
Canonical Metadata, Ted is also promoting pluralism, so the one true version might not refer to a globally canonical one, but the one and only true version you’re linking to (to not wake up one day and find your link pointing to a different context, meaning, text). Correctness is somewhat difficult, see Encyclopedia Britannica vs. Wikipedia. Some clarity can come from the statements that somebody published at a particular point in time. Legal demands and abuse create issues for that as well.
I of course disagree with RegEx and snapshots.
Streamsharing, then the question is how to merge several conflicting individual variants, if that’s a thing in Mimix/streamsharing.
I don’t think that word processing technology from the late 70s or 80s is available any more, I believe it’s lost and abandoned.
So are you saying Mimix is expensive and limited to big business owners at the moment? 😉 Also, streamsharing sounds like one capability in Engelbart’s jargon, or maybe whole systems could be built with it, but yeah…
And here’s my reply…
Thank you for the well thought-out reply and for your willingness to take the time to look at what I wrote and answer it honestly. I want to give you the same consideration, which is why my reply is so late in coming. I’ve moved my home and business from Key West to Texas recently, which has consumed more of my time than I ever thought possible.
Because you were so detailed in your reply, I want to give you the same courtesy and reply to each of your comments individually.
You are absolutely correct that market attention has moved away from text processing applications since word processing became ubiquitous. In my estimation, this is a grievous error. In practice, much of the information we use in daily life is textual. Everything from what’s presented on the evening news to laws that we adopt is, essentially, textual. At first, it was thought that simple word processing applications would be the end-all, be-all in the text domain and take care of everything we need. But in 2018, that’s proved to not be the case. In this era of fake news and rampant manipulation and abuse of public information, it’s more important than ever to be sure that one’s writing is entirely factual and can be backed up with reliable sources.
In reality, it was always this way for serious journalists and academics. One of the primary goals of Mimix is to bring reliable, verifiable information to all levels of writers — from high school students to book authors. The second goal is to do so in a way that preserves privacy and choice and doesn’t lock users into a platform that wants to monetize or control their content. As the founder of The Mimix Company, I feel these ideas are revolutionary and the time has come for software that embraces and enables them.
I’ve heard both that Ted saw NLS and that he didn’t. I’m pretty sure he claimed at one point that he had seen it. We also know that Ted thinks that he invented everything and tends to be very disparaging about any other internet or UI pioneers, so we have to take his retellings with a grain of salt. I mention it in the caption not as a slight to Nelson but rather to give credit where credit is due to Doug.
You’re absolutely right that there is no such thing as network security — or any computer security, really. It was the first thing IBM taught us as mainframe programmers: “There is no computer security, only the appearance of computer security.” For my purposes, what we want is authentic documents and sometimes knowing where they came from is one way to achieve that. But you’re right, Ted’s purpose in rule #1 was probably to support his publishing and copyright business ideas. You’ll find no buzzwords in any of my writing. I mention the blockchain because it does represent an authenticated source for digital data and one that can be accessed and shared repeatedly while still remaining verifiable. I added the word Bitcoin to the sentence to connect the idea of the blockchain to its most familiar implementation. There is no token economy or cryptocurrency aspect to Mimix.
Regarding rule #4, I’m not going to recreate Xanadu or use Ted’s document or navigation structures in any way. I think those were actually awful. What I meant by creating, storing, and searching is that we all do this every day with regular research and writing tools. We don’t need “Xanalogical documents” to get to Xanadu. There are many corporate and academic data silos that could be opened up with a tool like Mimix without expecting the authors to get involved or to rewrite them.
Rule #7 (These are Ted’s rules for what *he* wanted — not my rules.) is something he proposed but didn’t deliver. As I mention later in the text, this idea is ludicrous on the surface. The Declaration of Independence should not be two-way linked with every school child’s paper that’s written about it. Having said that, it would be tremendously helpful to have two-way links between your *own* documents such that you could open a research paper or book and immediately see what you had written about it. That capability is a key feature of Mimix.
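For illustration only, here’s one way a personal two-way link index could work, so that either endpoint of a link can list the other. Every name here (LinkIndex, the document filenames) is invented for this sketch and is not a Mimix API:

```python
from collections import defaultdict

class LinkIndex:
    """Records every link in both directions, so a document can list
    both what it points to and what points to it."""
    def __init__(self):
        self.outgoing = defaultdict(set)  # doc -> docs it links to
        self.incoming = defaultdict(set)  # doc -> docs that link to it

    def link(self, source, target):
        self.outgoing[source].add(target)
        self.incoming[target].add(source)

    def links_from(self, doc):
        return sorted(self.outgoing[doc])

    def links_to(self, doc):
        return sorted(self.incoming[doc])

index = LinkIndex()
index.link("my-notes.txt", "research-paper.pdf")
index.link("my-essay.txt", "research-paper.pdf")

# Opening the paper immediately shows everything I wrote about it:
print(index.links_to("research-paper.pdf"))  # ['my-essay.txt', 'my-notes.txt']
```

Because the index only covers your own documents, it sidesteps the Declaration-of-Independence problem: nobody else’s links ever land in it.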
On rule #8, indeed. It is up to each writer to decide how much he will steal or borrow from others and how much credit he’ll give them in return. This is not something we’re going to take up with Mimix. However, we do see a strong use case for Mimix inside organizations where their own proprietary data needs to be accessible to a wide but restricted audience. Mimix makes that data available to all of the people inside the organization who should have it, so that when they write and publish facts about their business, they’re drawing from the official, canonical facts.
Rule #10 definitely does not exist and URLs aren’t going to cut it. I’m talking about your individual, everyday writing. Who names the scanned email you just received? Who picks a name for the notes you just wrote? Who comes up with different titles for the 27 versions of your novel as you progress through writing it? No one. At best, today’s software gives files stupid names like Untitled Document 3 or Backup. In other words, the computer is of no help in organizing your own writing and your own documents — even though it’s omnipresent during that writing. This is just absurd. Microsoft Word starts out with a blank sheet of paper and an Untitled Document metaphor because it’s trying to emulate a typewriter. We can do much better.
Rule #13 is really about housekeeping. Ted was worried that everyone trying to access the same copy of a Xanalogical document would crash the server. In Mimix, you have your own complete copies of all the documents you reference. They’re all on your own machine or on a network device you control. So instead of worrying about a server crashing we are looking at this from a housekeeping standpoint of making sure that, if the canonical copies of your documents change for some reason, you can get the new versions easily. A corporation’s employee handbook is a good use case. You can download or lookup a copy when you’re first hired, but nothing tells you when you refer to it that the copy you have is outdated and a new one has replaced it. Mimix can address this problem.
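One simple way the employee-handbook scenario could be handled is by content hashing: fingerprint the local copy and compare it against the canonical source. This is a sketch under my own assumptions, not necessarily the mechanism Mimix uses:

```python
import hashlib

def fingerprint(text: str) -> str:
    """Content hash identifying a particular version of a document."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

def is_outdated(local_text: str, canonical_text: str) -> bool:
    """True when the local copy no longer matches the canonical source."""
    return fingerprint(local_text) != fingerprint(canonical_text)

handbook_v1 = "Employees accrue 10 vacation days per year."
handbook_v2 = "Employees accrue 15 vacation days per year."

print(is_outdated(handbook_v1, handbook_v2))  # True
print(is_outdated(handbook_v2, handbook_v2))  # False
```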
With regards to free software and rule #17, the entire computer industry started with free software. At the beginning, all software was free and provided by the computer manufacturers for two reasons: First, it sold hardware. And second, the company’s staff were often the best people to write such software as knowledge of this new domain was naturally limited. IBM only started charging for software later in the game. With the advent of personal computing, the cycle repeated and free software was first. Some of it came from manufacturers and much of it was posted in magazines, on bulletin boards, and shared at floppy swap meets. So I’d argue that we’ve had free software all along. One thing surely isn’t in doubt and that’s that free and open source software is the future of the entire industry.
On number #18, I think Ted was just shortsighted. He’d worked with paper documents that contained multiple and unique views for data. His own books Computer Lib/Dream Machines are extreme examples of text and graphics formatted together in unusual ways. And despite literally writing the book on this method, he never mentioned that Xanadu would have any of those capabilities. His favorite demo of Xanadu is parallel reading of two passages of text with interconnecting lines. That’s it. It’s not a very advanced view or a very nice way to read and write and we won’t be adopting that UI for Mimix, lol!
You’re so right about #19. The profusion of textual hypermedia has made things worse for formatting, not better. SGML was invented at IBM and is the precursor to both HTML and XML. IBM had better control over document formatting 30 years ago than we have with Google Docs today. We’ve gone backwards. It will be very hard work to fix this but it’s something I believe in strongly. The foundation of Mimix is a domain-specific language called MSL, Mimix Stream Language. In MSL, data is represented atomically in a write-only storage system. Formatting and edits are kept separate from the existing text. The task of actually creating a visual layout, then, becomes one of executing MSL on the viewer’s device. Mimix will initially ship with a rendering engine and an HTML-based user interface. Since the product is open source and the MSL language is simple and will be fully documented, we expect many people to create alternative interfaces and to continue to improve them.
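MSL itself hasn’t shipped, so the following is only a generic illustration of the append-only idea described above: original text is never overwritten, edits and formatting arrive as separate records appended later, and the viewer reconstructs the display by replaying the stream. The record kinds and field names are hypothetical, not actual MSL:

```python
stream = []  # write-only: records are appended, never modified in place

def append(kind, payload):
    """Add a record to the stream and return its position as an id."""
    stream.append({"kind": kind, "payload": payload})
    return len(stream) - 1

text_id = append("text", "Xanadu was proposed in 1960.")
append("edit", {"target": text_id, "replace": "1960", "with": "1960 by Ted Nelson"})
append("format", {"target": text_id, "style": "italic"})

def render(stream):
    """Replay the stream: apply edits and formatting to the base text
    without ever having mutated the original records."""
    texts = {}
    for i, rec in enumerate(stream):
        if rec["kind"] == "text":
            texts[i] = rec["payload"]
        elif rec["kind"] == "edit":
            e = rec["payload"]
            texts[e["target"]] = texts[e["target"]].replace(e["replace"], e["with"])
        elif rec["kind"] == "format":
            f = rec["payload"]
            texts[f["target"]] = "<i>" + texts[f["target"]] + "</i>"
    return list(texts.values())

print(render(stream))  # ['<i>Xanadu was proposed in 1960 by Ted Nelson.</i>']
```

The point of the design is that the history is the document: every earlier state remains recoverable, because rendering is just replaying fewer records.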
On privacy and rule #23, it’s sadly ironic that this entire industry rests on the accomplishments of a cryptographer and data protection was the original job #1, but today few people really have data security. It’s up to software developers to build systems that benefit the user and not just the platform provider. With Mimix, it’s easy to share your data with anyone else that you choose, but impossible for anyone but you to get to it otherwise. Even The Mimix Company will not be able to access any of the documents anyone views or creates. This is by design as we are in the open source software business and not the spying or advertising business.
Cobbling together applications today is de rigueur. The industry prefers the term “code reuse” but it’s all about moving software higher up the food chain and abstracting away details, even for programmers. Every piece of software we use today is built from some aggregate of other people’s frameworks, and Mimix will be, too. It’s better for us to focus on the secret sauce that only we can deliver than to spend time recreating things that already work. This is not to say Mimix is a DIY toolkit. Quite the opposite. The necessity to DIY everything in the open source world is one of its greatest failings. To the user, Mimix offers a fully integrated environment with a single multi-platform installable. The application is fully supported and documented by us. But, under the hood, there are many standardized pieces as well as custom code.
There was never a market for Xanadu as conceived by Nelson and this has been proven by history. Mimix, however, is not a product in search of a market. It’s a solution to a problem that I, personally, have, and so do millions of other people. Whether they write creatively or professionally, most people still rely on written communications. Some of them are content creators and even more are content consumers. Mimix is a superior tool for all of them. Mimix is also entirely free to use and to remix without restriction, something Nelson would never have agreed to for Xanadu. (Where’s the “lib” in his Computer Lib?) So we have no intention of selling the product; we’re giving it away. Because it meets a widespread need and has zero cost barriers to entry, we think Mimix will be rapidly adopted by many different kinds of users.
You are right that pluralism and document versioning are imperative. But instead of hiding your source materials behind your writing, Mimix makes it clear which version of reference materials you used. The writer is free to choose the sources of canon that he or she likes, but those choices are visible to the readers, too. The readers can also apply different canonical sources to the original author’s text and immediately see how the “facts” of the document are changed. My favorite example is Who shot JFK? Which sources would you choose as canon?
What part of my RegEx comment do you disagree with and why, of course? I’m truly lost here. It’s a horrible parsing language and I hate it. It does, however, work. I mention RegEx in contrast to Nelson’s tumbler system which was entirely unworkable. I’d much rather write RegEx to find some part of a document stream than to try and wrest it out of Ted’s tumblers.
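To make the contrast concrete, here’s the kind of search I mean: addressing a passage by a pattern over the text itself, rather than by a fixed positional address like Nelson’s tumblers. The document text here is just an example for illustration:

```python
import re

# Find a passage by pattern. Unlike a positional address, the pattern
# still finds the passage even after surrounding text is edited.
document = (
    "We hold these truths to be self-evident, "
    "that all men are created equal."
)

match = re.search(r"self-evident.*?equal", document)
print(match.group())  # 'self-evident, that all men are created equal'
```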
I’m glad you asked about versioning canon in streamsharing. It’s not only a thing, it’s foundational. A reader *must* be able to see the sources the author chose in parallel with the author’s own writing and with this new reader’s own writing, as well. Anyone doing serious research will undoubtedly want to have multiple sets of canonical documents available at once. One of the awesome things about the Mimix UI is that it will make these differences easy to see and adjust as you work on your own writing. No other type of product has ever been released with this kind of capability. It’s why our slogan is Think Better.
I would say that word processing technology from the 70’s and 80’s is really *all* we have. The last great improvement in word processing was spell checking and that’s from the 80’s. Microsoft Word hasn’t changed *what* it can do, only how it does it. Today’s word processing, blogging, and research tools don’t actually add very much smarts to your writing. They’re kind of dumb. They don’t know what you’re writing about or where you got your facts. They don’t help the next person, the reader, in any way — even though nearly all writing is meant to be read by others. I would say that the word processing industry hasn’t advanced at all since it first came online. And that’s sad.
If you have a campus of 10,000 students, Mimix makes terrific sense as a way for teachers and students to collaborate. If you have a company with 30,000 desktop PCs, there are definitely some documents you’d like everyone to have *and to use correctly* in their own work. So these enterprise users, the earliest users of word processors, will also be the biggest early adopters of Mimix, we think.
Yes, streamsharing is definitely inspired by Engelbart’s demo, although what he showed was more like live collaboration on a shared document. Streamsharing is different in that each reader has his or her *own* copy of the original document, along with the reader’s notes, original writings, and other reference materials. With Doug’s demo, as with Google Docs, everyone is editing the same file. With Mimix, you’re editing your own file.
Thank you again, Stephan, for your well considered comments. I look forward to further dialog with you. When we’re ready for a closed beta, I’d love to have you be part of it. I’d also like to publish this thread in my blog so it can spark further participation from the community.
And Stephan’s further reply to me…
You are absolutely correct that market attention has moved away from text processing […] much of the information we use in daily life is textual […] fake news and rampant manipulation and abuse of public information, it’s more important than ever to be sure that one’s writing is entirely factual and can be backed up with reliable sources.
Text has some very beneficial features. I don’t think that a world entirely relying on audio recordings and commands (probably more like utterances/noises and less spoken language that tries to be compatible with text) is impossible, but it’s not here yet. There are many great, new approaches to address the solving of complex, urgent problems, but without good tools for text, which should be much easier to build than solving complex problems, I’m a little pessimistic that we’ll be too successful with that, if we can’t or don’t want to address the simple things as well and cripple ourselves in that respect.
I don’t think that fake news is a category of particular usefulness. As if there’s true news, as newspaper publishers want to convince us. Journalism, especially good, quality journalism, is in a crisis, but that too has a lot to do with what and how people want to read, and the tech of course. Good journalists have to decide what to report on and what not, what narratives and orders to construct, who can and is going to check the sources anyway, and they’re biased just as well. The reader generally lacks sufficient media competency, that’s not taught at school, wonder who’s responsible for that. The social processes encountered around it, if people fall for it or are operating with media on that intellectual level, the problem isn’t that fake news exist and that some can’t detect it, it lies elsewhere I suspect. Don’t have really an answer/solution for it, would be some work to look into it.
> > […] One of the primary goals of Mimix is to bring reliable, verifiable information to all levels of writers […]
That’s great, we can never have enough or too much of that! Not going into a lengthy meta-conversation here, I would assume you know already about the many questions and difficulties associated with such a goal.
> > I’ve heard both that Ted saw NLS and that he didn’t. […] so we have to take his retellings with a grain of salt.
Sure, I don’t disagree here in regard to historic accuracy, but towards if it is relevant. Ted certainly got some early ideas about things appearing on screens from the movie-making of his parents, Doug from the radar screens. Oscilloscopes and TVs were around for some time. It surely is genius of both Doug and Ted to arrive at the idea that a computer could generate these images, but I wonder if it matters that much what was first by a few years or so, as the general idea has been with some a little longer. One major difference is that Doug managed to actually build it technically and projectwise/organizational/funding (but so did the radar equipment manufacturers, just with analogue electronics I assume) and others like Ted didn’t have other options than doing mockups.
I’m all for crediting Doug, one could say that he produced a proof-of-concept that made it a lot easier for others to accept the general idea, as the 1968 demo famously inspired most of modern approaches to computing. Ted certainly wasn’t able to do that with his mockups and if he were, one has to ask how the differences to Doug would have produced a rather different outcome.
> > […] I mention the blockchain because it does represent an authenticated source for digital data and one that can be accessed and shared repeatedly while still remaining verifiable. I added the word Bitcoin to the sentence to connect the idea of the blockchain to its most familiar implementation. […]
Makes sense, just want to find out what you’re proposing, building or interested in, in terms of technical substance.
> > Regarding rule #4, […] is that we all do this everyday with regular research and writing tools. […] There are many corporate and academic data silos that could be opened up with a tool like Mimix without expecting the authors to get involved or to rewrite them.
Makes sense too, it’s some kind of “rescue” operation to make the body of material we already have actually useful. I’m on that with digitalization and proofreading attempts + semantic annotation for rescuing stuff that’s pre-digital, the born-digital material needs similar rescuing (or could be called “augmentation”) as well. I’m just more recently in the camp that worries about new stuff we write digitally today, is it going to be more material that will need rescuing later in lack of proper writing tools and methods to organize publishing (for example, avoid/prevent silos to begin with), is the pile just getting bigger, or can we do something that Mimix isn’t needed for new stuff, only for our legacy collections? Might be contrary to your business/project, I understand, but in my mind, both are needed and could/should work together beautifully to bring about a better digital future for all of us.
You must already know (about) Frode Hegland, he’s into this sort of thing. He’s more concerned about “documents” (academic ones), I look more at text tools that are agnostic to their context of use and can be adjusted/incorporated, but it’s more along the lines of universal cultural techniques or general, common capabilities a knowledge worker or basically everyone could use and benefit from, no matter what the task at hand is and no matter if no specialized, specifically tailored application exists for it, but from generic building blocks you could get close to it even without a dedicated, specifically designed solution, that’s the plan and difference at least.
> > Rule #7 (These are Ted’s rule for what he wanted — not my rules.) […] The Declaration of Independence should not be two-way linked with every school child’s paper that’s written about it. […]
Sorry, was already worrying that my “Erm…how?” would not be sufficient. Sounds like in your mind, we can’t get to Xanadu for this reason (potentially among other reasons). I don’t think it’s impossible, if links are published as independent resources that don’t require the cooperation of the original author or server/provider of the base text, so that they can be found/crawled to compile a list of the incoming links. In addition to that, clearly something organizational needs to be in place as well, maybe what the “federation” movement is proposing, see Ward Cunningham’s Federated Wiki for example. People could organize the links, and you could filter if you want to only see those approved by the author of the piece, or those recommended to you by friends/colleagues, what other readers voted up, or any metric or combination of those. Sure, it’s much more work, but it could also provide invaluable benefits.
If you were proposing that we should get to Xanadu, my “how” simply asks about how you would go about it technically, the “erm…” supposed to indicate that some serious consideration is needed to come up with a concept that could work. It’s not something we can easily demand/build (organizational, methodological), that’s for sure.
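A minimal sketch of what such independent link resources might look like (my own illustration; the format and field names are hypothetical, not Federated Wiki’s or Xanadu’s): each link is its own small record naming both endpoints, so a crawler can compile the incoming links for any text without the original server’s cooperation, and a reader can filter by whatever metric they trust:

```python
# Hypothetical standalone link records, published anywhere a crawler can find them.
links = [
    {"from": "essay/child-paper-1", "to": "docs/declaration", "by": "student"},
    {"from": "essay/child-paper-2", "to": "docs/declaration", "by": "teacher"},
    {"from": "blog/unrelated", "to": "docs/constitution", "by": "blogger"},
]

def incoming(target, records, approved_by=None):
    """Compile the list of incoming links for a document, optionally filtered."""
    hits = [r for r in records if r["to"] == target]
    if approved_by is not None:
        hits = [r for r in hits if r["by"] in approved_by]
    return [r["from"] for r in hits]

# Every child's paper can point at the Declaration without the
# Declaration's host storing (or even knowing about) the backlinks.
assert incoming("docs/declaration", links) == [
    "essay/child-paper-1",
    "essay/child-paper-2",
]
assert incoming("docs/declaration", links, approved_by={"teacher"}) == [
    "essay/child-paper-2",
]
```

The filtering step is where the organizational layer (author approval, friend recommendations, votes) would plug in.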
> > Rule #10 […] I’m talking about your individual, everyday writing. […]
Oh, yeah, sure, I see. Doesn’t need to be globally unique until the act of publication, and I too have some kind of infrastructure in mind that would handle/organize the many resources, connected with a bootstrapping of semantics, similar to the OSI reference model (just for applications and the system to “magically” know what’s the right thing to do).
> > Rule #13 […] In Mimix, you have your own complete copies of all the documents you reference. […] but nothing tells you when you refer to it that the copy you have is outdated and a new one has replaced it.
Totally agree; I’m big on local copies as well, for reasons like offline use and avoiding 404s. I also want to be able to legally publish them again: if the official source vanishes, I need permission to redistribute it, whether to others in my circles who want to look at what I was working on, or when I publish my own piece (what is it worth if its sources have vanished?), and to offer it to the entire world again as a posterity feature to prevent loss.
For the indication of updates, I found some earlier work on that, for academic PDFs. I could look it up again if it’s of any relevance/interest.
> > […] So I’d argue that we’ve had free software all along. One thing surely isn’t in doubt and that’s that free and open source software is the future of the entire industry.
It’s indeed quite difficult and questionable whether copyright should apply to software at all; only with a lot of work and persuasion were parts of the industry (and publishers as well, with their somewhat different mechanics) able to establish and maintain the notion.
> > On number #18, […] And despite literally writing the book on this method, he never mentioned that Xanadu would have any of those capabilities. His favorite demo of Xanadu is parallel reading of two passages of text with interconnecting lines. That’s it. It’s not a very advanced view or a very nice way to read and write and we won’t be adopting that UI for Mimix, lol!
Very true. That’s probably because of his artistic background and ambitions, in contrast to a technical engineer (which is what he and others lament, the techie perspective on things): he tries to create pieces of art, which can then be passively consumed as experiences, while I care more about things that can do things. What especially annoys me is the hypertext pioneers (including the modern ones) writing and publishing books in the traditional way.
As for the parallel-view demo(s), you know, I have some background there (diffs in software development, comparison of the development of texts, philology), and when I looked into his demo, it only works this beautifully because it’s carefully constructed and curated. Not that messier comparisons invalidate his demands; I wouldn’t dismiss any of it, and I’m quite glad that he made them and continues to promote them, because I don’t have to come up with such stuff myself and have something out there I can point to. I’m not alone, and I don’t have to do the advertising for the vision, which would be horrible if I had to. Regardless of the alleged primitiveness, or the issues with making it real, I think there are important insights to derive from what Ted does.
That also goes for hypermedia: he made some early contributions to raytracing (as van Dam did for computer graphics), filmed, does his audio recording all the time, and surely thought, and still thinks, about how the computer can help with that. There’s just a limit to how much one person can do at the same time, and with the necessary depth to make it substantial; without the technical ability or interest, he’s somewhat limited in what his demos can become compared to those of others. But his intuitions and designs are always worth a look, if only because they offer different perspectives and challenges to a status quo we would otherwise be all too soon and willing to be satisfied with.
> > […] MSL, Mimix Stream Language. […] Since the product is open source and the MSL language is simple […]
Do you already have something out? Did I just miss it? Not that there needs to be; “open source” can also come to the user/me when buying the product, and of course it could be too soon, not ready yet, or something.
> > […] not the spying or advertising business.
The bad part might be that law enforcement can make demands. If the target is general, libre-licensed technology in widespread use, they can’t do much about it, but companies are registered and operate in a legal jurisdiction, and that’s how law enforcement can demand that user privacy/confidentiality be compromised. Also, if users use the system for peer-to-peer encrypted sharing of data, there could be demands to delete or prevent certain instances of it, or liability for “facilitating” it, mainly because politicians don’t care or don’t know how the tech works; and there can be social or societal demands, in whatever constellation, about what must be enforced, what is acceptable, what harms the image, etc.
> > […] The necessity to DIY everything in the open source world is one of it’s greatest failings. […]
What can you do if those who have solved it already don’t want you to use their solution, or try to restrict certain people or groups? The options are either not to care as long as you’re not affected and leave the other poor guys to their fate, or to solve it once and for all, for everybody.
I meant “integrated” in terms of a coherent design/interface, or components optimized and adjusted to work dedicatedly with each other, in contrast to the friction and unnecessary inefficiencies of non-integrated systems that were glued together and work only sort of, barely, or just well enough. Not that it must be one particular approach in my mind, but I guess we prefer things that were specially, dedicatedly made to help with the task; on the other hand, standardization/generalization is a nice effect as well.
> > (Where’s the “lib” in his Computer Lib?) So we have no intention of selling the product;
Heh, that’s a historical, nostalgic, now-conserved notion of liberation: liberation from the big companies who controlled access to room-sized computers. Ted’s modernized translation is that it’s now Facebook and the web that restrict what people can do.
> > What part of my RegEx comment do you disagree with and why, of course? […]
Not of strong importance, but I generally don’t like the use of cryptic symbols with special meanings (in programming languages too; overloading special characters is just a bad idea when there are names and keywords for constructs anyway). It’s great if you know it well, and then it’s much shorter, but usually we use words to express, or at least hint at, what a thing might do. Who knows, maybe the people in the caves a few thousand years ago were basically doing some RegEx on the wall 😉
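Stephan’s complaint about cryptic symbols can be made concrete. As a small illustration (the pattern and text are my own, not anything from our exchange), Python’s `re` module offers a `VERBOSE` flag that lets the same pattern be written out with whitespace and word-level comments, trading terseness for readability:

```python
import re

# Terse form: matches a date like "2018-12-11".
terse = re.compile(r"\d{4}-\d{2}-\d{2}")

# Verbose form: the same pattern, but each part is spelled out.
verbose = re.compile(
    r"""
    \d{4}   # four-digit year
    -
    \d{2}   # two-digit month
    -
    \d{2}   # two-digit day
    """,
    re.VERBOSE,
)

text = "Posted on 2018-12-11 by the author."
assert terse.search(text).group() == "2018-12-11"
assert verbose.search(text).group() == "2018-12-11"
```

Both compile to the same matcher; the verbose one is simply closer to “using words to hint what the thing might do.”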
> > […] One of the awesome things about the Mimix UI is that it will make these differences easy to see and adjust as you work on your own writing. […] It’s why our slogan is Think Better.
Do you have a plan, or already an implementation, for how to make that visible (render/visualize it), and how a user would work with it interaction-wise? Apologies if I haven’t read the other articles on the blog yet; it might be explained there somewhere. Will look into it.
Anybody can think different 😉
> > I would say that word processing technology from the 70’s and 80’s is really all we have. The last great improvement in word processing was spell checking and that’s from the 80’s. Microsoft Word hasn’t changed what it can do, only how it does it. Today’s word processing, blogging, and research tools don’t actually add very much smarts to your writing. They’re kind of dumb. They don’t know what you’re writing about or where you got your facts. They don’t help the next person, the reader, in any way — even though nearly all writing is meant to be read by others. I would say that the word processing industry hasn’t advanced at all since it first came online. And that’s sad.
Yes and no. In some respects, yes: Word, the paper simulation, and all of that are of course stuck on what was pioneered as the “modern word processor,” the category of software we know today. Personally, I’m not referring to smarter features either, whether the tool knowing what a piece is about (towards AI/reasoning/targeting) or the text- and language-based advances of that era (dictionaries, recommendations, what they did in the absence of graphical interfaces), but to innovations in how we operate on and process text (“process” in the old meaning of the term, analogous to how they “processed” data). For example, there are no good XML/annotation editors that would help to structure text. We have big, powerful, and expensive XML tools, but they are in no way flexible enough to easily design special-purpose interfaces that would allow markup/annotation for very specific tasks; it’s difficult to visualize the annotations that are already there, one can easily mess them up, and so on. Or take the old word processors that had cursor markings allowing operations to be enacted on a span of text: today I have to select and drag such spans with the mouse, always at risk of accidentally losing my selection, with difficulty correcting it if I don’t get it perfect the first time, in an interface that confuses and mixes many different concerns such as typesetting and layout, so things break if I move, split, or change any of the underlying text by such methods. And then there are potential innovations around those things that would be entirely new.
Some of the dedicated word processing machines (even analog ones) and early hypertext systems had some of this, so the option, or at least the general notion, existed. But today all of that has been lost to the mainstream paradigm, and now we have a lot of people creating new material, or messing with existing material, using improper tools that are a pain to use and create a lot of unnecessary extra work. Not that earlier times were perfect, but I hope you get the idea.
> > […] Streamsharing is different in that each reader has his or her own copy of the original document, […]
Great, great, great, I’m in that camp as well!
> > Thank you again, Stephan, for your well considered comments. […] I’d also like to publish this thread in my blog so it can spark further participation from the community.
Thank you for replying! I generally tend to look into things; as long as somebody cares, I’ll engage in whatever can reasonably be done. If you intend to publish this conversation (if I understand correctly), I would like to curate it so it becomes more convenient, maybe as an example of, or experiment in, the things we’re talking about (more hypertexty), maybe to develop tools that help with such activities. But maybe Mimix is already there, and you can apply Mimix to the text of the conversation itself. And there’s the question of licensing, of course, which I would like to be libre-free (so people can create derivations and translations, print it, sell it, etc.). Another question is what that means for opinion pieces (preventing derivations to prevent misrepresentation, or other concerns one might have); by default I usually pick Creative Commons BY-SA 4.0, despite its problems for such kinds of works. Do you have an aversion to the legal stuff and not want to bother, or is it an opportunity to practice and try a few things? For example, I could libre-license my part, permitting you to do anything you want with it as long as the result remains libre-free.
My first takeaway from this is that we’re lucky to have such passionate and well-informed people taking an interest in our product at this early stage. In open source, community is everything. Second, and I’ve seen this in several people’s comments, it seems that I’ve left the impression that Mimix is actually recreating Xanadu or using its models. Let me correct that emphatically. Mimix is inspired by computing pioneers, including Ted Nelson, but we are not using any of his methods, either for the internals or for the user interface. What we are taking from Nelson is the inspiration and the commitment that the world needs better tools for research and writing. We like to think that Mimix offers many of the most desirable qualities of systems like Vannevar Bush’s Memex, Doug Engelbart’s NLS, and Ted Nelson’s Xanadu, but we do it in a modern way, with up-to-date technology and tools that fit the way people read and write today.
I’ve also noticed while trying to piece this post together how horrible the whole process is. The formatting of this “final” document is just awful and would require a lot of work to get into something better looking. It required a lot of work to get to this stage! Stephan laments this, too, when he mentions curating the content and adding hypertext links.
I do want to address a couple of other areas he mentioned while I’m here in a public forum:
Text contains information that audio does not, most importantly a visual component. While most people “read aloud” to themselves and translate text into speech when they read or write, there is the additional benefit that a body of text takes on an organization and a visual layout. How often do we remember that something we read or saw was “about this far” into a story or “at the bottom of the right page?” All the time. This visual and chronological referencing ability of text gives it lasting value, even in the Alexa age, for audio information cannot be so easily indexed. Stephan is right (along with Vannevar Bush) that we need better tools for text if we are to solve complex problems using it.
Fake news is what we used to call bad journalism. Specifically, it’s bad short-form journalism, because fake news makes it into long form less often. Nonetheless, even in book-length works, fake news, poor information sourcing, and a lack of transparency impoverish the quality of the written word we consume today.
While there are certainly some “smarts” to the internal operation of Mimix, this particular aspect of veracity doesn’t require any rocket science or AI to achieve. An author simply chooses his or her sources, which are revealed to and usable by the reader. The example I like is The Louvre. If you were writing about paintings in their collection, their data could be assumed to be correct. Even if it isn’t or an outside source disagrees, at the least both the reader and writer have some clarity about where the facts came from. One of the smart things Mimix does is tie together the references from the Louvre when you, the author, choose to refer to them in your writing. Then we share those same references with your readers. Both the research/writing and reading/research sessions happen in the same environment, and this is what makes them so valuable.
Any researcher worth his or her salt will have to consult multiple and sometimes conflicting sources. In writing about the nuclear industry, I’ve found it very useful to compare the industry’s explanation of the facts (of an accident, for example) with those of outside journalists or individual writers who take the opposite view. Often they report the same facts with different numbers! How valuable it would be in my own writing to see where I got each (or all) of my numbers, to be able to change those references while I’m writing, and to have the system keep track. Using the same system, the reader can choose his or her own references for the same facts, which would affect that person’s reading of the document. This is a new kind of intellectual interactivity and “collaboration” that takes place individually while creating your own materials.
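That workflow can be sketched as a small data structure. To be clear, this is purely my own illustration (the field names and numbers are hypothetical, not MSL or actual Mimix internals): a claim carries every source that reports it, and the writer, or later the reader, simply selects which reference is active:

```python
# A hypothetical record for one fact with conflicting sources.
# Field names and figures are illustrative only, not MSL.
fact = {
    "claim": "radiation released (TBq)",
    "sources": {
        "industry_report": 520,
        "independent_journalist": 900,
    },
    "active": "industry_report",
}

def cited_value(fact):
    """Return the number from whichever source is currently selected."""
    return fact["sources"][fact["active"]]

# The writer starts from the industry's figure...
assert cited_value(fact) == 520

# ...and the reader can switch references; the system keeps track,
# and the same sentence now reads against the other source.
fact["active"] = "independent_journalist"
assert cited_value(fact) == 900
```

Because the competing figures travel with the claim, swapping references is a selection, not a rewrite.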
With regard to the development of graphics technology, neither Ted nor Doug invented those things, but they did envision using them in novel ways. Ted wrote about his fascination with digital graphics in his books. Doug, as you say, had access to SRI’s military graphics tools. Both men were already working in the digital domain and well past analog computers, but their digital systems had slow processors, small memories, and limited storage. Hats off to Doug for trying to turn these into a UI, one which inspired the Xerox Alto (where he later worked) and later became the Mac and Windows graphical user interfaces known today. That stuff is directly traceable to Doug. I give more credit to Ted, though, for thinking about how the information itself would need to be organized and accessed. He’s into documents and their related meanings. Together, both men (and both inspired by Vannevar Bush) gave me the impetus for what Mimix is becoming today.
Stephan mentions the ability to “rescue” old documents from their paper silos and proprietary systems, and this is absolutely a goal of Mimix. We can’t expect all the world to change their writing formats, especially of existing data, and The Mimix Company will offer services around this need. We do, however, hope to inspire writers of new content and new projects to consider starting with Mimix so that data doesn’t need rescuing later.
Regarding links… this has become a topic of confusion because of Ted’s use of the terminology. Mimix does not have links. Mimix has only atoms, views, and streams, which are sequences of atoms and views. Think of a machine that, instead of compiling what you do, simply records it, à la Bush’s Memex. There are a million tools for compiling a stream of text into something else, anything from the neighborhood map of errands that Engelbart shows in his demo (I love that he has to stop by the library on the way home) to a complete biography of everyone mentioned in a book. Rather than linking to text, we simply include it in the stream. Some of the text we read or write may be considered canonical, and this is connected both ways with the text that refers to it. We don’t call these links because they are interchangeable with other canon, all of which is recorded in the stream!
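To make the recording idea concrete, here is a minimal sketch, entirely my own and not the real Mimix implementation: a stream is just an append-only sequence, atoms are recorded as they occur, and a view is itself recorded in the stream as a sequence of atom positions, replayed by inclusion rather than followed as a link:

```python
# A toy append-only stream, in the spirit of a machine that records
# rather than compiles. Hypothetical structure, not actual Mimix/MSL.
stream = []

def record(kind, content):
    """Append an atom or view to the stream and return its position."""
    stream.append({"kind": kind, "content": content})
    return len(stream) - 1

quote = record("atom", "Four score and seven years ago...")
note = record("atom", "Opening of the Gettysburg Address.")

# A view is also recorded in the stream: a sequence of atom positions.
view = record("view", [quote, note])

def render(pos):
    """Replay a view by including the referenced atoms directly."""
    return [stream[i]["content"] for i in stream[pos]["content"]]

assert render(view) == [
    "Four score and seven years ago...",
    "Opening of the Gettysburg Address.",
]
```

Nothing here points outward; the “canonical” text is simply present in the stream, which is why swapping one canon for another is a matter of recording a different view.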
On patents… Yes, I’ve crossed over. After writing two myself, seeing Google and other large companies use my work (at least some of them cite me, ha!), and not earning $1 from my most important patent, I’m skipping that bullshit with Mimix. Stephan points out that software publishers use “different mechanics” to make patents work. Is that ever true! The different mechanics being lawsuits and patent sharing agreements. Together, these interlocking ideas ensure that only the largest companies with the biggest legal war chests can play the patent game.
We are not yet ready to show our developments in MSL or in our viewer/editor code, but I promise there will be a Github when we are. I’m excited to get feedback on the MSL language because it’s the foundation of everything that we’re doing.
Stephan brings up an important point about privacy, and that’s fundamental to our design goals. As he points out, free and open source software can be run by anyone on his or her own hardware, so The Mimix Company isn’t involved in who uses the software or what they do with it. Even more important, the Mimix software will have baked-in encryption which even we cannot open. The only people with access to what you read or write with Mimix are you and anyone you allow. While this places additional custodial responsibility on customers (we can’t help you if you lose your keys), the benefit of our being unable to help others who might want your data outweighs the extra effort needed to protect your keys.
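As a hedged illustration of the principle (this is a textbook one-time pad built with Python’s standard library, my own toy example, and not Mimix’s actual encryption scheme): when the key lives only with the user, the ciphertext alone is useless to everyone else, vendor included:

```python
import secrets

def xor(data: bytes, key: bytes) -> bytes:
    """One-time-pad XOR; the key must be random and as long as the message."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"my private research notes"
key = secrets.token_bytes(len(message))  # stays with the user only

ciphertext = xor(message, key)

# Decryption is the same XOR; without the key there is no way back,
# which is exactly the "we can't help you if you lose your keys" trade-off.
assert xor(ciphertext, key) == message
```

Real systems would use an authenticated cipher rather than a raw pad, but the custody model is the same: whoever holds the key holds the data.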
Mimix provides an integrated environment in two ways: First, our software will fully comply with all our documented standards for the MSL language (obviously). It will read and write valid MSL code and not use undocumented or proprietary file formats. This makes it easy to use our tools for any work you might do with MSL in any project, even using a different editor or viewer. You can still use the tools we provide without breaking anything. Second, the Mimix editing and viewing environment is fully self-contained and runs identically on Linux, Mac, and Windows. All aspects of reading, writing, organizing, backing up, etc. take place within our custom interface, which is specifically designed for correlating large and complex sets of data.
The Mimix UI will draw on much of my previous work in user interface design, semantic data, and domain-specific languages. It will be the first application of what I call Nine Eyepoints Design, a technique which builds interfaces around the spatial locations we use in our heads (above, below, etc.) when organizing everyday stuff. I’m looking forward to talking more about Nine Eyepoints Design in a future blog post.
As always, my inbox is open to you, too, dear reader. Write me at email@example.com and let’s continue the conversation!
November 22, 2018
Thanksgiving is almost as famous for the dinner conversations as for the dinner itself. This year, while discussing geeky stuff with my cousin, he asked me about the latest Intel Core i9 processors and how they fit into our hardware plan. I told him that I like to buy the fastest processor I can get at the time because it lasts longer. In my experience, the very best equipment costs about half again as much as the next tier down but performs well far longer. I said, “Most people are on a three year upgrade cycle but with this approach you can get to a…”
“Five year cycle,” he said, as we finished the sentence together. Xavier was a VP of Operations for a major worldwide airline. He knows a lot about procurement cycles and getting the most out of expensive hardware. And PC hardware is expensive, there’s no doubt, which is why many companies avoid spending what they should on IT.
When I was a mainframe programmer at IBM’s Santa Teresa Laboratory, we had a zero-day upgrade cycle. You could get anything you needed immediately. IBM operated a PC Store on campus just for employees. With nothing more than a manager’s signature on a tiny scrap of paper, you could stroll into the PC Store and get whatever you wanted. There was no drawn-out requisition process; I didn’t even have to specify a model or a price. Your ticket would say something like “PC Desktop” with a signature, and you’d just pick out what you needed. Needless to say, this was a geek’s dream come true. No explaining your equipment requirements to people in suits. No arguments about how much it cost to have good tools.
Now you may say, “Well, IBM has a lot of money and they make computers and they could afford to do that.” Of course, those things are true. But there’s another reason for IBM’s generosity when it came to computer hardware and software. It’s because the company figured out that such an approach would actually save them a huge amount of money.
You see, IBM is very studious. They don’t do anything without having a PhD chart it out first. Years ago, they asked some smart guys to see if the computers and software tools available to programmers made a difference in their productivity. They used a simple metric: the responsiveness of the computer vs. the number of lines of code minus the lines of errors that the developer produced. Using their own worldwide network that predated the internet, it was easy to measure the computer’s responsiveness — down to the millisecond. Likewise, it was easy to correlate every programmer’s output with how many lines of ‘good’ code and how many lines of errors he or she wrote.
If you’ve ever written code, their findings won’t surprise you. Every millisecond of delay in the developer’s computer was costing the company money in reduced output and increased errors. With what IBM was paying programmers then (and what they cost now), it was simply cheaper to make the computers as fast as humanly possible in order to allow the developers to be as productive as possible. When programmers leave the flow state, as Mihaly Csikszentmihalyi calls it, there is a time penalty to recover. Any delay in the hardware or software could result in losing the programmer’s attention and taking him or her out of flow. Hence their willingness to let us have any tech tools that we needed for our jobs.
Taken from a 30,000 ft. view, this concept applies to everyone in your organization — not just programmers. In an earlier post, I alluded to a brief stint I did at the Sprint store. Let me tell you, Sprint has shockingly bad IT. Shockingly. Bad. It’s clear that no one at Sprint has studied how much time their employees spend fighting with the company’s outdated hardware and software. It’s time they’re not spending on getting new customers — or making the current ones happy, which is even more important. Anyone who watches Undercover Boss can tell you the same thing. Old, slow computers and outdated software waste your employees’ time but, worst of all, they hurt your product and your customer service quality.
In addition to making the programmers more productive, having better computers made working there a lot more fun! I’ve always thought that one of the best perks of being in the computer business is that you get to have kick-ass hardware. It’s quite possible that the enjoyment of having performant hardware and software contributes just as much to programmer productivity as response time does. People should have the tools they need, and the tools they want, in order to do a great job. At IBM, this required a $40 million mainframe, which they also let me access from home. I spent many nights volunteering to learn the company’s hardware and software because it was simply awesome, versus being frustrated all the time with crappy tools and trying to avoid them. Today, I use a $2,000 laptop for the same reason.
IBM had another brilliant way to spread tools around to those who needed them: an early internal version of eBay where anyone could post hardware they didn’t need and any other employee could take it from them. All that was needed to complete a transaction was the signature of the manager in the listing department and the signature of the manager in the receiving department. The surplus hardware was then shipped via the company’s internal shipping mechanisms to the loading dock nearest the new “buyer.”
What a brilliant way to reuse equipment within the company! At IBM, we called this spending Blue Money, money that had already been spent and now should be maximized and used again as many times as possible since it wasn’t costing the company anything with outside vendors. Perhaps there is some “Blue Money” in your organization that could be better utilized somewhere else.
At The Mimix Company, we’re working hard to invent the next generation of reading, writing, and research tools for computer workers of every kind. We believe that better tools give better results. We invite you to check out the whitepaper and join our efforts!
August 26, 2018
Ted Nelson’s Xanadu is one of the most important and most maligned ideas in computer history. Why has it held such sway after more than 50 years? Why isn’t there any working software for download, or a GitHub community of devs? Is Xanadu so difficult to achieve that no one can pull it off, or are there other factors? These are questions I’ve attempted to answer for myself.
Truthfulness relies on an unchanging record of the past.
Gary Wolf’s 1995 investigation for Wired into what he called The Curse of Xanadu is the de facto online history, not only for its scathing criticisms of Nelson (which he later disputed in an open letter) but for the rich detail and fascinating revelations about the people and companies behind the project. Where else could you find out about Autodesk’s involvement and the $5M they spent on trying to achieve Nelson’s dream?
Certainly personal factors and generally being viewed as a crazy person are problems shared by all inventors and geniuses. If it were just Ted’s fault, others would have come forward and made Xanadu without him. A deeper analysis shows that there are technical and market reasons that keep Xanadu from being real. They fall roughly into four buckets:
- Things that already exist
- Things we don’t need
- Things Xanadu proposed but didn’t deliver
- Things Xanadu is missing
Where It All Started
Nelson’s books Computer Lib and Dream Machines, published as a single volume and bound together, are the geek’s version of Be Here Now by Ram Dass. The books are amazing and indescribable, remarkable for their breadth and freshness, completely unorganized, and impossible to navigate. These are the first-hand accounts of a brilliant traveler to a new place, witnessed and narrated for us in stream-of-consciousness style. Ram Dass was also a former academic turned on by the new, drug-induced culture of the sixties, and like him, Nelson conveyed enough “key driver” ideas to have inspired generations since. By 1980, in Literary Machines, he had formalized the main goals of Xanadu into a list of 17 rules. Let’s see which of these buckets each rule falls in.
17 Rules of Xanadu
1. Every Xanadu server is uniquely and securely identified.
We could say that every device today can be uniquely identified by an IPv6 address, a MAC address, or a domain name. “Securely identified” is an interesting phrase, but blockchains like Bitcoin meet the requirement. This Xanadu rule goes in the “exists” category.
2. Every Xanadu server can be operated independently or in a network.
This is true of nearly all computers, so it exists.
3. Every user is uniquely and securely identified.
This is the same as rule #1. With peer networking, there is no difference between servers and users. Already exists.
4. Every user can search, retrieve, create and store documents.
This one definitely exists.
5. Every document can consist of any number of parts each of which may be of any data type.
HTML covers this requirement in an ugly way. I would not vote for HTML as the way to organize composite documents going forward, but it does exist.
6. Every document can contain links of any type including virtual copies (“transclusions”) to any other document in the system accessible to its owner.
This definitely does not exist yet and it’s the thing that seems to drive Ted crazy. We’ll get to why it doesn’t exist in a minute. It goes in the “proposed” category.
7. Links are visible and can be followed from all endpoints.
This is another aspect of Xanadu that’s only proposed and doesn’t exist. Really, it’s a feature of rule #6 in that once a document is transcluded in another, you can see and follow links both ways.
8. Permission to link to a document is explicitly granted by the act of publication.
This is the first example of a Xanadu feature we don’t need. A tool for readers and writers should not get involved in the concerns of platform or content providers.
9. Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (“transclusions”) of all or part of the document.
Again, we are messing where we shouldn’t be messing. This “feature” is only here to enable rule #15.
10. Every document is uniquely and securely identified.
This is a terrific idea and sorely needed. Enormous problems are caused by making humans take on the naming and management of documents. Is a blockchain answer the way to address this proposed idea?
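One buildable sketch of this proposed idea is content addressing: derive each document’s identifier from its own bytes, so no human ever has to name or manage it, and any alteration produces a new identifier. This is a minimal illustration in Python, not Xanadu’s actual mechanism; the Declaration snippet is a stand-in:

```python
import hashlib

def document_id(content: bytes) -> str:
    """Derive a stable, globally unique ID from the document's bytes.

    The same bytes always produce the same ID on any machine, and any
    change to the content changes the ID.
    """
    return hashlib.sha256(content).hexdigest()

declaration = b"When in the Course of human events..."
doc_id = document_id(declaration)

# Identical content, identical ID; altered content, different ID.
assert doc_id == document_id(declaration)
assert doc_id != document_id(declaration + b" [edited]")
```

A blockchain could then anchor such IDs in time, but the identification itself needs nothing more exotic than a hash.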
11. Every document can have secure access controls.
It can be argued that we’ve claimed to have this forever but the current systems are all pretty bad. Today’s best solution to secure access is public key encryption like we see in blockchains, but losing a key has catastrophic results and will require better custodial and backup solutions in the future. I’m putting this in the “exists” bucket.
12. Every document can be rapidly searched, stored and retrieved without user knowledge of where it is physically stored.
This absolutely exists in many forms, the most obvious of which is Google. Of course, Ted is talking about a more personalized system of searching and saving documents that interest you, but the tech exists.
13. Every document is automatically moved to physical storage appropriate to its frequency of access from any given location.
This isn’t even a user feature, it’s housekeeping. Not needed. If sharding and redundant access to remote documents is required, solutions like IPFS are coming online to provide that.
14. Every document is automatically stored redundantly to maintain availability even in case of a disaster.
This is more housekeeping but definitely useful. Systems already exist to do this. Bitcoin is a great example of redundant storage of common data which is needed by many people.
15. Every Xanadu service provider can charge their users at any rate they choose for the storage, retrieval and publishing of documents.
Could this be the straw that broke the camel’s back? The requirement to turn Xanadu into a copyright management business certainly isn’t a user feature nor necessary technologically.
16. Every transaction is secure and auditable only by the parties to that transaction.
This is a terrific and important idea which is only now coming to fruition, especially with privacy-focused cryptocurrencies like Zcash. Although recent, it exists.
17. The Xanadu client–server communication protocol is an openly published standard. Third-party software development and integration is encouraged.
And strangely, here at the end, is a rule that was violated even before work began on the code. Perhaps Nelson didn’t believe enough in his publishing business model to allow anyone to see the software as it was being developed. This aspect of Xanadu was only proposed and not delivered.
The Missing Rules
I mentioned earlier that, in addition to what was proposed, Xanadu is missing some features that would help it attain commercial success today. Let’s examine them.
18. The system provides multiple views of the same data and makes it easy to switch between them.
This idea is from Douglas Engelbart and his NLS system. Xanadu was never shown with multiple views, such as slideshows, timelines, or outlines.
19. Text is stored without formatting.
This idea is discussed at length in Nelson’s papers and even mentioned in Dream Machines, but it’s not on the Xanadu rule list. It should be. Today’s systems mix text, formatting, and data structure, which makes it difficult to extract any of these later.
20. Metadata can be canonical.
This idea, which I expound upon in the Mimix whitepaper, addresses Nelson’s desire to have certain data or documents accepted as absolutes, such as the signed copy of the Declaration of Independence and its previous versions.
21. Data is atomic.
Another idea intrinsic to Xanadu, though not revealed until later, was that it would hold all data in unalterable “atoms.” What appeared to be documents on screen, including your own writing, were merely atoms strung together. Ted described a complex system for altering data later and changing its address, which never took off.
22. Streams can be shared, disassembled, and re-used.
Although Ted’s papers refer to a set of Xanadu documents as a “stream,” he doesn’t develop the idea of how streams would be shared or utilized by others. These abilities must be built into the system and go beyond today’s document or app sharing.
23. Your data is kept private.
End-to-end encryption is necessary today for many reasons. Only you should have access to the materials you read and write inside Xanadu or its successor.
Sorting It Out
If we rearrange the rules into the four buckets: Exists, Unnecessary, Missing, and Proposed, we get something like this:
A quick glance at this diagram shows that much of what’s needed to create Xanadu already exists or can be cobbled together from existing pieces by using APIs. An interesting side effect of diagramming things this way is that it makes clear what was proposed but not delivered. The first two (#6 and #7) are the things that Ted talks about today in his YouTube videos.
Delivering on the Promise
If half of the problem has already been solved (or doesn’t need to be), we are left asking the question, “How difficult are the remaining pieces?” Judging by what’s left of the original 17 rules, I would say, “Achievable.” The last rule, open source, we can dispatch by simply starting from that point. Mimix or any worthy successor to Xanadu must be open source.
The Holy Grail of Xanadu was transclusion, the idea of including part of one document inside another and being able to refer back-and-forth between them. We’ve actually had transclusion for some time. Microsoft delivered it in their OLE object linking and embedding technology and HP went so far as to ship an entire version of Windows, called HP NewWave, based around transclusion of atomic objects. Today you can create a logo in Adobe Illustrator, transclude it in a Photoshop document, then double-click it from Photoshop to edit the original in Illustrator. But what Ted is referring to is transclusion while reading others’ works and writing your own. This, we do not have.
It turns out, that kind of transclusion is actually easy. It requires two things which were missing from Nelson’s proposal. The first is massive storage. It simply wasn’t possible to keep thousands of books or papers on the personal computers of the day, or even on your personal account on a university or corporate computer. Instead, Ted thought we should link to copies of the same data. That made sense in the days when schools and institutions had all the computing power and storage, but that’s not the case today. You can buy an external terabyte drive on Amazon for $50. Cloud storage is cheap, too.
Nelson liked to use the example of the Declaration of Independence, and so do I. While recognizing that documents would need to be copied and virtualized, he held to the idea that links would be to the original document or its alias. He also suggested that all links would be bidirectional, meaning that any elementary school child who quoted the Declaration of Independence could expect to have his or her paper linked back from the original document. That’s simply preposterous. Imagine the huge numbers of scientific papers which all refer to facts in the periodic table. Should visiting that table produce links to all of them? Balderdash!
If we think of transcluding a document from our own local storage rather than from some remote, shared location, we can greatly simplify the problem and produce a system that actually works for the user. Instead of linking to a remote copy of the Declaration controlled by someone else (or a copy of a copy), why not just keep a copy in your own local storage? If it comes from a trustworthy source, you can rely on it going forward. Taking the Declaration from the Library of Congress is a 100% reliable source. In Mimix, we call this canonical metadata. It’s not necessary for you to refer back to the original Declaration, just pull up your own (canonical) copy which is known to be correct. As I’m writing now, I’m referring to my own saved versions of all these documents I’ve mentioned.
With multiple terabytes of personal storage, even the most prolific author or researcher could have his or her own copy of every document he or she ever examined, along with all the user’s annotations. The periodic table (or a book about the history of France) isn’t going to change after you download it, and if it does, that’s easy for software to find and fix. You would still have your old copy, of course.
In working on my nuclear book, I’ve amassed a collection of over 250,000 pages of PDFs along with my highlighting and notes inside each. There’s no software I’ve found that’s really up to the job, but the terrific and free Docear (rhymes with “dog-ear,” as one would do to a book) is a good place to start. Its interface is tedious, though, and not well integrated with a writing environment. It also doesn’t make any connections between my data and my writing, nor offer any alternative views. But it does let me keep my notes together with the original sources they came from, and find either when I need to.
Keeping a copy of every document you read in your own personal storage and accessing it with the same tool you use for your own writing opens the gateway to a true Xanadu-style system. Now, if we draw links between our writing and the documents we read, it’s easy for the system to retrieve the referenced files and show them on the screen at the same time, even with the graphical links that Nelson preferred. After all, we have both documents on our own disk! In this environment, it makes sense to show links from every document as well as to it. In other words, every paper that you read or write about the Declaration of Independence should be linked from your view of that original document, but not from everyone else’s in the entire world.
The storage issues notwithstanding, I think Nelson skipped over the idea of local copies because he was trying to build a business around publishing and copyright management. The idea of every user having his or her own copy of a document would be anathema to such a system. All three unnecessary features of Xanadu hinged on the publishing business model. In a last-ditch effort to commercialize Xanadu after Autodesk pulled out, the owners approached Kinko’s (now known as FedEx Office) with a copyright and document management system. Even for them, it was unclear who would get paid and how. The entire idea of your writing and research tool being bound up with a publisher and his royalties is simply awful. This, at the end of the day, is probably why Xanadu failed.
Building the Missing Pieces
If the missing pieces I’ve described are truly missing in that no deployed system exists which offers them, then we have something to aim for. If we can build a program that incorporates all of the existing aspects, skips the unnecessary ones, delivers on the unfulfilled promises, and fills in the holes in the original idea — then we can have a salable product! LOL.
We’ve already seen a brute-force way to include the existing technologies. A variety of frameworks, application programming interfaces, and services can cover everything in green on our chart. It would be preferable to build a new system which incorporates open source code for these features rather than relying on outside providers to continue to offer them. The biggest unfulfilled promise of Xanadu was transclusion and we’ve already seen how that can be done by simply reading and writing with a tool that keeps copies of what you’re doing. All that’s left are the orange items on our chart.
We work with multiple views of data every day, but the process is messy and mostly manual. We often need to move data from something that looks like a list to something that looks like an email, a chart, or a slideshow. Today’s tools offer a million ways to present data but it’s still surprisingly difficult to move the same information around into different views.
Douglas Engelbart’s revolutionary NLS system had this idea built-in from the start, in 1968. Tragically, NLS was almost impossible to use. If anything, it offered the antithesis of a user interface and died commercially after a series of acquisitions and mergers. Along the way, we lost the idea of a consolidated system for multiple views.
Mimix proposes to re-frame the current document paradigm into a system of data called atoms and ancillary data about your data called metadata. If you are writing about Gaudí, the factual sources you consult would be metadata. So would any formatting you applied. Together, these are Mimix views.
I’ve mentioned formatting as an example of metadata because it needs to be separated from the writing itself. Nelson has complained that today’s WordStar- and SGML-inspired systems are messing up the real data we’re after, and he’s absolutely right. Mimix does not allow the commingling of data and formatting; that separation is formalized in the design. Formatting is just another kind of Mimix metadata which can be applied or removed, both of which are recorded in the stream.
Nelson put a great emphasis on the idea of referencing the “one true version” of a document (or its copy or alias), but he didn’t address how that truthfulness would be determined. This is a real problem which hasn’t been addressed in any software that I know of. Taking a simple example, a college student might write a paper for music class without being exposed to such divergent concepts as diatonic set theory or serial music. “Google it,” we might reply today. But this is an artifact of the library card catalog, itself an artifact of the printing press. Instead of telling children “Let’s go to the library,” we tell them to go to Google. This is something of an improvement but we’re not home yet.
Much of the data that children would find in a library could be called canonical in computer terms, or simply “correct.” Everyone who inquires as to the distance to the Sun should receive the same answer. In this world of fake news and unreliable sources, it is more important than ever to declare and classify data according to its authenticity and sources. A more complex example is, “Who shot JFK?” What sources would you choose as canon? These are decisions best left to each individual author, but his or her choices should be clear not only to the writer but also to readers. In a Mimix stream, the author’s canonical sources are available to inspect and alter.
Truthfulness relies on an unchanging record of the past. Although facts and opinions change, our past interactions with them were based on what we knew at the time, just as our future actions will be based on future values. The values some data held yesterday were the basis of all of yesterday’s decisions and should not be erased. When changes are needed, we can simply record a new atom of data with the new values.
Nelson referred to this idea in his paper on the Xanalogical Structure of Documents, but the proposed method of accessing and “updating” these atoms was unnecessarily complex. Using the mass storage now available to us, we can simply snapshot every piece of data as it comes in and also snapshot every change to it. By replaying our snapshots, we could see where the data started and how it had changed.
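A toy version of that snapshot-and-replay idea, assuming nothing about Xanadu’s or Mimix’s real internals, looks like an append-only log where a “change” is just a newer atom for the same key:

```python
from dataclasses import dataclass
from typing import Any

@dataclass(frozen=True)
class Atom:
    """An immutable snapshot: a value plus the moment it was recorded."""
    when: int
    key: str
    value: Any

class Stream:
    """An append-only log of atoms. Nothing is ever altered in place."""
    def __init__(self):
        self.atoms = []

    def record(self, when: int, key: str, value):
        self.atoms.append(Atom(when, key, value))

    def as_of(self, when: int) -> dict:
        """Replay the log up to a point in time to see the data as it was."""
        state = {}
        for atom in self.atoms:
            if atom.when <= when:
                state[atom.key] = atom.value
        return state

s = Stream()
s.record(1, "distance_to_sun_km", 149_600_000)
s.record(2, "note", "draft one")
s.record(3, "note", "draft two")

assert s.as_of(2)["note"] == "draft one"   # yesterday's value survives
assert s.as_of(3)["note"] == "draft two"   # replay gives today's view
```

No tumbler-style address arithmetic is needed; the history itself is the address space.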
Whenever there are long streams of documents and you need to only access a small part, some kind of addressing system is necessary. Two programmers working for Nelson came up with a system called tumblers based on the transfinite numbers they had studied in college. The method relied on literal rows, columns, and character counts to identify chunks of text, making it fragile and unreliable after text was changed.
Since the 1950s, we’ve had regular expressions. I’ll be the first to admit that I hate them, but they do work. Rather than trying to specify the literal character positions of the data we want, regular expressions let us find the text we need syntactically. To grab the Preamble from the Constitution, I only need to write:
/we the people(.*)america\./gis
Once I’ve referred to that piece of the original text, I could format it any way I like, display it on a slide, or quote it in the Schoolhouse Rock video that still rings in my mind. Selecting just the first letter and applying special formatting like a drop-cap is easy with regular expressions, too.
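Here is roughly what that looks like in practice, using Python’s re module rather than a hypothetical Xanadu addressing layer. The Constitution text is abbreviated, and the bracketed “drop-cap” markup is purely illustrative:

```python
import re

constitution = (
    "We the People of the United States, in Order to form a more perfect "
    "Union, ... do ordain and establish this Constitution for the United "
    "States of America."
)

# Grab the Preamble syntactically, not by character position, so the
# reference survives edits elsewhere in the text.
match = re.search(r"we the people(.*)america\.", constitution,
                  re.IGNORECASE | re.DOTALL)
preamble = match.group(0)

# Formatting is applied afterward, as metadata: here, a drop-cap on the
# first letter, marked with brackets for illustration.
drop_cap = "[" + preamble[0] + "]" + preamble[1:]
assert drop_cap.startswith("[W]e the People")
```

The DOTALL flag lets the pattern span line breaks, so the same reference works whether the source text is wrapped or not.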
The Constitution isn’t changing anytime soon, but how does the concept of atomicity apply to your own writing which might change frequently? Let’s take two very different use cases. A screenwriter needs to create characters and then draw from many resources to develop a narrative around that character. There are sure to be numerous references, notes, illustrations, etc. and these will grow and evolve over time. But keeping them atomic lets the author go back and see the original data from which the notes were derived. If some material had been deleted, it would be impossible to refer to it going forward but easy to access it going backward.
A doctor needs to keep copious notes on his or her patients as well as a variety of canonical documents such as lab tests or radiology studies. Every prescription written must be maintained. There is some list of what drugs the patient is on right now versus all the drugs he or she has ever been prescribed. Today, this is mostly in paper files. In Mimix, the patient is an atom and all of these ancillary data are applied to that atom over time. The metadata itself is atoms, too. In other words, everything known about Vioxx at the time it was prescribed to patients is quite different from what is known now.
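A sketch of that record, with invented field names, shows how “what the patient is on right now” and “everything ever prescribed” both fall out of the same append-only history; a discontinuation is a new atom, never a deletion:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Prescription:
    """Metadata applied to a patient atom; never edited, only superseded."""
    when: int
    drug: str
    active: bool   # False records a discontinuation, not a deletion

history = [
    Prescription(1, "Vioxx", True),
    Prescription(2, "Lisinopril", True),
    Prescription(3, "Vioxx", False),   # withdrawn; the old record survives
]

def current_drugs(history):
    """Replay the metadata to get what the patient is on right now."""
    state = {}
    for rx in history:
        state[rx.drug] = rx.active
    return sorted(drug for drug, active in state.items() if active)

assert current_drugs(history) == ["Lisinopril"]
# Everything ever prescribed is still answerable from the same record.
assert sorted({rx.drug for rx in history}) == ["Lisinopril", "Vioxx"]
```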
Sharing today is second nature and we have all kinds of systems for sharing screens, apps, and the content they produce. Unfortunately, none of these systems makes it easy to do anything useful with the data after it’s shared. These are siloed applications, in computer terms, with data only moving up and down in the same system but not outside it. A corporate webinar or conference keynote makes a great example. Some useful content might be displayed by the presenters, but it’s trapped in the video recording and can’t be extracted. At best, the audience gets a list of links they can click later. Most of the valuable information in the content stream is lost unless the viewer keeps separate notes.
Mimix introduces the idea of streamsharing, giving others a functional copy of your own research from which they can extract data, metadata, and annotations or add their own. More properly, Mimix recreates this idea from the original work of Vannevar Bush who in 1945 proposed a stream recording system called Memex based on microfilm. A Memex user who received a reel of film would be able to extract any frames, rearrange them, print them, delete some, and compile the whole thing onto a new spool. The medium carrying the information was the information itself, giving the recipient the same powers as the author.
My Mimix stream recording idea differs in several ways from app or document sharing. First of all, once shared, the stream belongs to the recipient who retains all its contents. Secondly, the stream is live and can not only be played linearly like a video but also accessed for its atoms and metadata. The new user can extract or discard any part of the stream he wishes, a concept not possible with today’s software. Because Mimix data is not bound up in documents or presentation formats, the system can show the stream recipient a summary of everything inside it and allow that person to choose what he or she wants to extract.
In addition to selectively importing the stream, a recipient is free to rewind and make changes to a “past” view of the stream’s contents. The audience member or potential new inventor could discard some part of the stream’s metadata halfway through and substitute new data of his own, changing the views thereafter. Scientific researchers could use this feature to look through old papers with the lens of new information, for example.
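A minimal sketch of that rewind-and-substitute idea (the stream layout and keys here are invented for illustration) is a replay function that can swap one piece of metadata mid-stream while leaving the shared original untouched:

```python
def replay(atoms, substitute=None):
    """Rebuild a view from a shared stream, optionally swapping one
    piece of metadata mid-stream: the recipient's new lens."""
    substitute = substitute or {}
    state = {}
    for when, key, value in atoms:
        state[key] = substitute.get((when, key), value)
    return state

shared = [
    (1, "source", "1998 survey"),
    (2, "conclusion", "inconclusive"),
]

# A later researcher rewinds, swaps in newer data at step 1, and
# replays to see how the downstream views would change.
revised = replay(shared, substitute={(1, "source"): "2018 survey"})
assert revised["source"] == "2018 survey"
assert replay(shared)["source"] == "1998 survey"   # the original is untouched
```

Because the recipient owns the whole stream, this kind of what-if replay costs nothing and destroys nothing.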
A Mimix stream naturally incorporates all the documents referenced in its metadata, so a Shakespeare commentary stream would include the full text of all the plays mentioned as well as any other reference sources the author consulted. The massive storage of today’s systems makes it possible and desirable for each reader and writer to have a full digital library which he or she can annotate and reference going forward.
I leave for last the topic that came first in the history of computing and that is encryption. Your favorite news source is full of stories about why we need encryption, from corporate data breaches to government hacks and rampant surveillance. The earliest computers didn’t worry much about encryption for several reasons. Many were released before strong encryption techniques were popularized. Home computers didn’t seem to need encryption (back then) and they didn’t have the processors or storage to provide it anyway. Corporate and university computers were managed by the institutions that owned them and users didn’t worry about how to protect the data.
Things have changed drastically since then. The prevailing idea around security is, “Be afraid. Be very afraid.” Any reasonable successor to Xanadu (or any non-trivial software, really) should provide end-to-end encryption as a built-in and non-optional feature. In a system like this, the Mimix software would only store “garbage data,” of no use to anyone without your private key to decrypt it. When you share your streams with others, a different set of garbage data is sent to them which only they can decrypt. When your data is backed up in the cloud or elsewhere, only the garbage data is recorded, requiring your key to decrypt it later.
As mentioned earlier, this kind of public/private key encryption system comes with a risk. If you can’t decrypt your data, neither can anyone else. This reinforces the need for key backup systems and custodial oversight, such as making sure a trusted friend or associate has your key.
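To make “garbage data” concrete, here is a deliberately toy illustration of symmetric encryption built from a SHA-256 keystream. This is not Mimix’s design and is not safe for real use; a production system would use a vetted cipher such as AES-GCM. It only shows why the stored bytes are useless without the key:

```python
import hashlib
import secrets

def keystream(key: bytes, length: int) -> bytes:
    """Toy counter-mode keystream derived from SHA-256. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def xor(data: bytes, key: bytes) -> bytes:
    """XOR data against the keystream; applying it twice round-trips."""
    return bytes(a ^ b for a, b in zip(data, keystream(key, len(data))))

key = secrets.token_bytes(32)          # the user's private key
note = b"Only you should read this."
stored = xor(note, key)                # what the server actually holds

assert stored != note                  # "garbage data" without the key
assert xor(stored, key) == note        # the key recovers the original
```

Lose the key and the round trip on the last line is gone forever, which is exactly the custodial risk described above.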
Can We Get to Xanadu?
I think we can. If we incorporate the technologies that already exist and build the missing pieces, I see no technical barriers to its completion. As for the market, it’s huge. Like word processors and spreadsheets which were once expensive and limited to big business owners, a tool like Mimix which is a hit with early adopters (education, scientific, medical, creative) can evolve into a version for everyone.
Key West, Florida
August 26, 2018
August 12, 2018
I’m pretty sure I was born a geek. I’ve always been interested in the latest technology, forever holding out hope that tomorrow’s inventions would make the world better than it is today. At the very least, maybe technology could one day make using technology itself easier. Alas, that has not been the case.
Not long ago, I did a brief stint as a part-time employee at my local Sprint store. That will need to be its own blog post. Perhaps if I’m feeling ambitious I could aim for something akin to Nickel and Dimed by fellow Key West author Barbara Ehrenreich. Anyway, I digress. While working at the Sprint store, I got to see on a daily basis how Apple’s most advanced and consumer-friendly software (in the form of iOS) was interpreted by real people in the real world. It’s surely different from what Apple imagines.
It’s crazy to think that Apple is the world’s first trillion-dollar company. It was started in Steve Jobs’s garage, and the original Apple I was designed and hand-built by his buddy Steve Wozniak. The other Steve left the company in the early ’80s. Though never giving it as his reason for leaving, Wozniak did say that after having a child he realized that technology wasn’t going to be the thing that saved mankind.
In those early days of personal computing, regular people could not only program their computers, it was expected of them! It was assumed that the computer owner would be in complete control of his machine. Of course, this would entail a learning curve, but the rewards were huge. Any college student could write whatever software he wished, save it on a disk, and sell it. An enterprising young man named Bill Gates got his start doing just that. It was as easy as putting a disk in the mail.
One of the most important programs for early computers was HyperCard by Bill Atkinson. It was revolutionary for two reasons. First, it let normal people design software with a modern graphical user interface based on stacks of cards. Second, it shipped free with every Macintosh at the insistence of its inventor, who gave the product to Apple only if they agreed to that condition. HyperCard was dreamed up during an LSD trip and was clearly designed around the idea of a “user-centered computer.” Apple had to make it go away, and they did. HyperCard would have hurt their applications software business too much, or so the company thought. The company never really had an applications software business to worry about, and in screwing up HyperCard they missed out on the chance to define the web browser, which is what it would have become.
The people like Steve Jobs who made those decisions put an end to the ecosystem that made them rich. They took away the structure that they used to gain their power, transferring all of the creative and financial output of the old systems to themselves. Today, any app that you can think of will require the approval of one or more of the four big tech players: Apple, Google, Facebook, and Amazon. You will have to write your software using tools and languages they approve of. You may not discover or extend any device or platform features. In fact, quite the opposite. You’ll be constrained by your tech “partner” in the set of tools and features you can use. In this world, not only do you not own your software (or device), neither does the end user. Apple or Google will decide what code you can write and how it might interact with your device.
On one hand, this sounds terrific. In a nerdy fever dream, we could be done with “bad” software and security problems. We’d never have to worry about updates or compatibility. Our software stuff would just work because somebody big was taking care of it. Well, guess what? All of this totalitarian control over your software and devices didn’t deliver on any of those promises. It actually made them worse. We spend more time today on updates, security, compatibility, and learning to use our tools than people did when they were running WordStar on a CP/M machine in 1982.
And not only are your software and devices controlled by the big guys, your content is, too. Even content you create. How is it possible that Google or Amazon should decide what files you can own and where you put them? Yet every day, millions of people lose access to their own work because of these platforms and systems. They made the mistake of writing something under the “wrong” account. Or they saved their contacts in a different “app.” It’s as though buying a Joni Mitchell 8-track gave her record company the right to open your front door and rifle through your music collection, taking what they please. And you better not forget your Joni Mitchell password or we might not ever let you hear her again!
I’m just old school enough to still subscribe to the idea that you own your stuff. I also cling to the fantasy that every company should try to make something better than the next guy. In the old days, we sold software because it was good. It did something people wanted and were willing to pay for. The first spreadsheet, Dan Bricklin’s VisiCalc, was so great that people bought computers just to get their hands on it. It was the world’s first “killer app.” Are there any killer apps today? No. No one gives a shit about animoji. Today, you are the product, and software and devices are forced on you piecemeal in order to sell you to advertisers, politicians, and governments.
The foxes are running the hen house. Some will say it was always this way. The new book Surveillance Valley by Yasha Levine posits that the entire internet was nothing more than a psyops and surveillance tool from the start. While doing research for the Mimix whitepaper, I read several of the manuals and papers for Douglas Engelbart’s NLS, an important early precursor to almost all of today’s modern computing concepts. Every document acknowledges its Army or Defense Department funding, dutifully quoting the contract numbers. NLS was also an early use of ARPANET, the DoD project which became the internet.
Before that, IBM got its start in helping the government compute ballistics trajectories and using their new punch card machines to count Nazi concentration camp prisoners. Much later, while I was there, the company defended its support of the South African government with its intentionally racist apartheid policies that resulted in incalculable human suffering. The whole history of computing, really, is traceable to military and government atrocities. Maybe they just needed it more than “The Rest of Us.”
These and countless other examples aside, I’m still a believer in digital democracy. Software is like castles in the sky. Really, we can build anything in software — including things its inventors didn’t want. This is at the core of hacking, maximizing what can be done with software and hardware. Pioneers will always find a way to push the tools and tech in a new direction in order to work around the established order. I do think that Jobs and Wozniak had that in mind, at least at the start. Many other people have done serious and important work in making computers positive tools for society. A personal favorite of mine is Seymour Papert, an educator and inventor of the LOGO programming language. He believed that learning to program could help children to think better, giving society real hope for the future. Papert’s work and writing provided a computer epiphany for me in my own explorations with LOGO. It was he who made me realize that software was “castles in the sky” without limits. Papert was South African.
Microsoft, a company which grew up in an open and competitive environment with thousands of small players, came to dominate the business computer landscape and leave only a handful. But you can only drink so much of your own medicine. Today, no one seriously thinks of Microsoft as a long-term factor in the computer industry. They’ve stopped innovating, and now Microsoft aligns itself with Linux, with open source software, and even with gaming platforms like Unity that it didn’t invent. If it hadn’t, the company would be half its size already. People outside the Microsoft philosophy created other ways of doing things, and those ways might win.
Can software return to some of its former values of user-centered control and openness? We’ve seen the world’s biggest companies exploit open source software to make billions without having to pay anyone for it, yet it’s the paid products of those companies that most people buy rather than the open source tools themselves. It’s an “embrace and extinguish” approach. But if techno monstrosities can use open source software to create upheaval, so can anyone else.
One area of tech where companies have embraced their retro values is in electronic music. My favorite synth company, Roland, has a full line of vintage synths based on modern technology, complete with their knobs and buttons and noticeably lacking screens or “software.” You can even go back to plug-out synths with real cables if you’d like and still use your modern DAW. Roland has also begun partnering with other companies on circuit design and re-creation. You see, unlike Apple, Roland can’t lock you into an ecosystem. Moving to another synth is as easy as putting your hands on the keys. Music instruments have to earn the player’s business because they have to be loved.
Does anyone love software anymore? I love retrocomputing, the use of old systems for entertainment. Of course, nostalgia has a lot to do with it. But I play with emulators for many machines I never used in real life. They’re so blissfully free of updates, internet connections, and other modern computer baggage that you can actually do something entertaining with them. You can program them. And, of course, you can add all that modern stuff to them if you want to. In 2018, you can program a Wang 700 calculator by loading software from a web address.
You know what? In Lisp, you still can, though you’d write ‘(HELLO WORLD.) in that language. I think that if we’re going to go back to our retrocomputing values we need to start there, with a self-contained machine that’s immediately responsive to simple commands. It should be insulated from the outside world (not dependent on it), yet able to draw resources from there when the user chooses.
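That kind of immediate responsiveness to simple commands is easy to sketch. Here is a minimal toy evaluator in Python (standing in for the Lisp-style core such a machine would need; a real system would also handle symbols, quoting, and definitions):

```python
import operator

# A tiny evaluator for parenthesized prefix commands like "(+ 1 2)".
# Purely illustrative: the point is that simple input produces an
# immediate answer, with no network, updates, or platform in the way.
OPS = {"+": operator.add, "-": operator.sub, "*": operator.mul}

def tokenize(src: str):
    """Split source text into parens and atoms."""
    return src.replace("(", " ( ").replace(")", " ) ").split()

def parse(tokens):
    """Build a nested list from the token stream."""
    tok = tokens.pop(0)
    if tok == "(":
        expr = []
        while tokens[0] != ")":
            expr.append(parse(tokens))
        tokens.pop(0)  # discard the closing ")"
        return expr
    return int(tok) if tok.lstrip("-").isdigit() else tok

def evaluate(expr):
    """Reduce a parsed expression to a value."""
    if isinstance(expr, list):
        op, *args = expr
        fn = OPS[op]
        result = evaluate(args[0])
        for a in args[1:]:
            result = fn(result, evaluate(a))
        return result
    return expr

print(evaluate(parse(tokenize("(+ 1 (* 2 3))"))))  # prints 7
```

Wrap `evaluate` in a read loop and you have the skeleton of the self-contained, immediately responsive machine described above.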
One thing early computing pioneers didn’t address was encryption, an interesting oversight since cryptography, in the form of Alan Turing’s work and inventions, is literally the foundation of all modern computing. Could it be they didn’t want the populace to have privacy? In any case, a modern computing system focused on the user would need to have end-to-end encryption. In other words, only “garbage data” would be stored on the system itself, requiring a key to unscramble it into something that could be read or changed.
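The “garbage data” idea can be shown in a few lines. This toy cipher (a hashed-keystream XOR, purely illustrative and emphatically not real cryptography) demonstrates that what sits on disk is unreadable without the key:

```python
import hashlib

def keystream(key: bytes, length: int) -> bytes:
    """Derive a deterministic keystream from the key by chaining SHA-256.
    A toy construction for illustration only, NOT a secure cipher."""
    out, block = b"", key
    while len(out) < length:
        block = hashlib.sha256(block).digest()
        out += block
    return out[:length]

def toy_encrypt(key: bytes, data: bytes) -> bytes:
    """XOR data with the keystream; applying it twice restores the original."""
    ks = keystream(key, len(data))
    return bytes(b ^ k for b, k in zip(data, ks))

note = b"My private notes"
stored = toy_encrypt(b"user-passphrase", note)  # what the system keeps on disk
# Without the passphrase, `stored` is just "garbage data"; with it,
# the same operation unscrambles the note.
recovered = toy_encrypt(b"user-passphrase", stored)
```

A real system would use an audited cipher such as AES or ChaCha20 via a vetted library, but the shape is the same: ciphertext at rest, a key to make it legible.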
The use of encryption to protect data from unwanted spying or changes is part of the blockchain technology that powers Bitcoin. Bitcoin also relies on a network of peers to provide a big set of backups, a quality that could be useful to anyone saving their own work. Today’s systems don’t have enough speed or storage to shard everyone’s files and replicate them in a distributed way. But the demands of software always exceed the capacity of hardware, and those systems will be developed.
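Sharding and replicating a file across peers can be sketched simply. This toy placement scheme (peer names and shard size invented for illustration; a real system would also handle peer churn, integrity checks, and recovery) shows the core idea:

```python
import hashlib

def shard(data: bytes, shard_size: int = 4) -> list:
    """Split data into fixed-size shards; the last one may be shorter."""
    return [data[i:i + shard_size] for i in range(0, len(data), shard_size)]

def place(shards, peers, replicas=2):
    """Assign each shard to `replicas` distinct peers, chosen by hashing
    the shard's contents so placement is deterministic."""
    placement = {}
    for i, s in enumerate(shards):
        h = int.from_bytes(hashlib.sha256(s).digest()[:4], "big")
        placement[i] = [peers[(h + r) % len(peers)] for r in range(replicas)]
    return placement

peers = ["alice", "bob", "carol", "dave"]   # hypothetical peer nodes
shards = shard(b"a document worth keeping")
where = place(shards, peers)
# Each shard now lives on two peers, so losing any single peer loses nothing,
# and no single peer holds the whole document.
```

Combine this with the encryption sketch above and each peer stores only unreadable fragments, which is roughly the property the paragraph describes.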
I can’t bring up “future computing” without mentioning Magic Leap, which this week released the Creator Edition of its augmented reality headset. The system projects 3D digital objects and characters into your eyes in a way that makes them appear to be in real space. Sort of. Magic Leap founder Rony Abovitz is convinced that his system is how we’ll want to interact with computers in the future. And Samsung thinks you want to talk to your washing machine. I’m not so sure.
It seems to me that people might prefer that a future computer act more like one from the past while retaining some of today’s important inventions. The internet has to be there, along with encryption. We need modern devices and user interfaces. But underneath that we need privacy, reliability, simplicity, and control. We need comprehension and systems tailored to our needs, not those of platform providers.
Thanks in advance,
August 11, 2018
I love designing logos for other people, but I usually hate doing it for myself. In the case of the Mimix logo, there were three immediate inspirations. I wanted the logo to have a sixties or historic feel because the ideas behind Mimix came from that era, and so did I.
One was the Apple /// logo with three forward slashes. When I was growing up in Silicon Valley, the neighbor across the street had an early Apple ][ while the other neighbor next door had one of the first IBM PCs (and a gorgeous card punch machine the size of an executive desk!). I like the idea of “turning computing around” from what it has become today, so let’s turn Apple’s model name into three backslashes.
Secondly, I wanted the Mimix symbol to look like any other math symbol that might appear in an equation. This is a nod to Alonzo Church and his lambda calculus, which in the 1930s offered a new and, to my mind, more advanced way than Turing’s to represent computation. It was to become the basis of symbolic computation and the Lisp language that underlies Mimix. It has been said that any sufficiently powerful computer language is nothing more than a subset of Common Lisp, and the more you study that language, the more it appears to be true. The reason is that Church solved the basic problem of how to represent symbolic functions. Everything after that is just an abstraction over his work.
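Church’s insight, that single-argument functions alone can represent numbers and arithmetic, can be sketched in any language with first-class functions. A minimal illustration of Church numerals (in Python rather than Lisp, purely for brevity):

```python
# Church numerals: the number n is encoded as "apply f to x, n times".
# Everything below is built from nothing but one-argument functions,
# which is exactly the economy of means Church's lambda calculus offers.
zero = lambda f: lambda x: x
succ = lambda n: lambda f: lambda x: f(n(f)(x))          # n + 1
add  = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))  # m + n

def to_int(n):
    """Convert a Church numeral back to a Python int for display."""
    return n(lambda x: x + 1)(0)

two = succ(succ(zero))
three = succ(two)
print(to_int(add(two)(three)))  # prints 5
```

Lisp inherits this directly: functions are values, and computation is the composition of functions, which is why a symbol for Mimix that sits naturally inside an equation felt right.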
Finally, for a color palette I turned to the PDP-11, one of the early machines to run Lisp. It was certainly gutsy and sexy as hell in its orange and purple colorways straight out of a PSA interior. PDPs caused serious computer lust in the hallways of the universities that had them. While that DEC model was a bit before my time, I did get to program one of their VAX machines while hanging out as a latchkey kid at my Mom’s office. The VAX’s printed manuals in their orange vinyl covers definitely had their design roots in the PDP color family.
The final logo has a drop shadow, but not a soft, modern one. The shadow itself is almost mathematical, like something that would be produced by an old computer. Everything about Mimix is something that was envisioned for an old computer. Sadly, that computer has not yet come to be.
Thank you in advance!