    Mimix Memos

December 11, 2018

5 Things I Learned from One Feedback

A few days ago, I shared my blog post, Getting to Xanadu, on Hacker News. I must say, I really enjoy the stories and the comments there. It’s an audience that’s a cut above the typical crowd. Exemplary of the kind of people you find hanging out at HN, a gentleman named Stephan took the time to write some very deep and well thought-out comments on our forthcoming product, Mimix. Here are five important things I learned from that dialog. The unedited email conversation and some final, deep thoughts are below.

Note: Stephan isn’t a user yet. We’re pre-release. But he’s certainly the kind of user we’d like to have, so I think of his letter that way.

Ted Nelson is the inventor of hypertext, an undisputed computing pioneer, and a polarizing personality. His most controversial invention, a piece of software he called Xanadu, proposed a new way of reading and writing.

1. Community Is Everything

Mimix is my baby and the first project I’ve open sourced. I’m not used to doing development in public and it’s taken some getting used to. I was hesitant to bare my soul somewhere like HN because I know the quality of the people who hang out there. But you know what? It’s been very valuable to do so. Especially in open source, a movement based around benefiting the end user, developing a community early on is everything.

2. Users Are Smarter Than You Think

Of course I expected HN readers to be smart, but I was delighted to find that lots of people are interested and well-informed on some of the deeper aspects of our subject matter. Too often, I think, corporations treat users like dummies. There’s a lot of great information out there from people who are interested in your product. It pays to pay attention to it.

3. Hard Ideas Need Easy Explanations

As much as people might get your product, it takes more steps than you think for the message to really be received. If your product is really revolutionary and does something better, it will seem obvious to you. It’s not to your users. I need to work on a lot of my own explanations and make things more clear. Do you?

4. Time Is Of The Essence

Another word for this is “courtesy.” Personally, I hate it when people aren’t responsive. When it comes to software, it’s insulting and it makes you feel like a product has been abandoned. I live by an open inbox and I reply to the people who write me. The result? Better quality stuff to read.

5. Consider Having a Wiki

Putting together this post, which has a lot of valuable information, has been something of a nightmare because none of it is easily indexed or cross-referenced. Since I don’t have Mimix today to write this with, I’m thinking a wiki would be a good idea. At least this way the pieces will get posted in sections that make sense. What do you think?

Although it seems anachronistic now, the Encyclopædia Britannica was easier to reference than this blog post. My Texas Granny, Pearl Bethune, sold Britannica and went on to author two nonfiction books of her own. The second, Forward to the Past, required copious research… and metal printing plates!

The Full Email Thread

Stephan’s original message…

Getting to Xanadu: today, we can easily identify a few reasons why Xanadu, apart from any of Ted’s “faults”, has difficulty becoming real. One major aspect, in my opinion, is that with the advent of Word, DTP and the web, there’s little to no interest in text-related stuff in 2018, even less so money in that area, while at the same time ignoring hypes like AI or blockchain, augmented reality with incredible opportunities lies there untouched (how ironic that there is the 50th anniversary of Doug’s augmentation demo and AR isn’t a topic at all), so one would be stupid to invest in text stuff and ignore the opportunities for big $$$ (supposing one would be able to pull it off). That was very different in Ted’s time. Without the GUI and networking around yet, if people weren’t crunching numbers for a business, text on the computer was a great new idea, and the later 80s-style (dedicated) word processor hype demonstrates that easily. What people had in those days is, again, lost in our days, and only few people know what it is or know that they could want it, and even if so, there are no offers, because it makes no sense economically/investment-wise. Sure, at least there’s the minimalist/distraction-free movement now, I have to spend the time to look through all those isolated, productized, proprietary things but fear that they’re not really usable for an open hypertext infrastructure.

Caption under the Xanadu mockup: Well, but neither Ted nor most other mortal beings had any access to Doug’s screens. Also, Ted claims that he wasn’t aware and didn’t know about Doug up to some point, don’t know if that mockup was earlier or later. Anyway, I think Ivan Sutherland was earlier than Doug with Sketchpad, so the fact that such graphics had been demonstrated by somebody some years earlier doesn’t help any of the other inventors with their own work, if it wasn’t available and not generally known. Remember, Doug got some inspiration from WW2 radar screen renderings.

The 17 Rules of Xanadu, by Ted Nelson

  1. Every Xanadu server is uniquely and securely identified.
  2. Every Xanadu server can be operated independently or in a network.
  3. Every user is uniquely and securely identified.
  4. Every user can search, retrieve, create and store documents.
  5. Every document can consist of any number of parts each of which may be of any data type.
  6. Every document can contain links of any type including virtual copies (“transclusions”) to any other document in the system accessible to its owner.
  7. Links are visible and can be followed from all endpoints.
  8. Permission to link to a document is explicitly granted by the act of publication.
  9. Every document can contain a royalty mechanism at any desired degree of granularity to ensure payment on any portion accessed, including virtual copies (“transclusions”) of all or part of the document.
  10. Every document is uniquely and securely identified.
  11. Every document can have secure access controls.
  12. Every document can be rapidly searched, stored and retrieved without user knowledge of where it is physically stored.
  13. Every document is automatically moved to physical storage appropriate to its frequency of access from any given location.
  14. Every document is automatically stored redundantly to maintain availability even in case of a disaster.
  15. Every Xanadu service provider can charge their users at any rate they choose for the storage, retrieval and publishing of documents.
  16. Every transaction is secure and auditable only by the parties to that transaction.
  17. The Xanadu client–server communication protocol is an openly published standard. Third-party software development and integration is encouraged.

Stephan continues…

The 17 rules, I haven’t looked into them previously, I suspect that they’re in part influenced by the attempt to operate Xanadu as a commercial business. For example rule 1, why does one need that? Only for transcopyright I assume, and there are serious issues with that of course, I didn’t find resolved anywhere yet. Now, IPv6 addresses, MAC addresses and domain name resolving can all be changed, so the unique identification is only a probability, not relying on certificate cryptography and signing, HTTPS fortunately provides that for some servers, if there’s also some pinning going on, but can be done today. I don’t know how blockchains or Bitcoin would help with any of that, probably just added for some buzzword bingo. In general, the network is some sort of Turing test. You connect your computer/terminal/screen via a socket/cable in the wall to the outside world, you submit messages and get responses back, but how can you know that the network and outside world actually exists? Some anti-virus approaches fool a virus for analysis purposes by simulating an entire intranet full of people who send mails to each other and do things, so the behavior of the virus can be observed, while none of the people, services and network nodes actually exist, it’s all simulated on the network interface.

Rule 4, not so sure if that really exists. What about creating documents? How would an ordinary user today create a new document? On a server, yes, he probably doesn’t have his own, so one would become entrapped in Google Docs or Xanadu, that’s a very poor version, worse than creating a paper document. Little bit better with pastebin. Great, for 2018. Searching? Not really. Google searches, and Google finds, and it only shows at all and in the order it feels like, and not what actually has been searched and could be found. The options offered are very, very limited. And we can’t improve on it, because most of the web is unreadable data trash that needs a lot of code to make at least some sense out of it.

Rule 7: Erm…how?

Rule 8 is stated as a matter of fact. No publisher or content provider can prevent people from creating “links” after they’ve published.

Rule 10, one could claim that this exists, because that’s what the URL (+ canonical by extension) is supposed to be/do. Sure, most people misuse it, so we face some issues that Xanadu or whatever system might be awesome if operated with some discipline, but a lot of people are just sloppy, and we didn’t manage to enforce some of the rules, which is why a new, better system will need to be different from the web, and a lot of people who are used to the breaking, sloppy paradigm of the web won’t like a more rigid system (but those who do like it benefit a great deal from it — I guess a global digital library system isn’t for everyone, that’s a big difference in thinking that Xanadu could have become the web. Maybe, to some extent, but some people would have liked to do other things with it than Ted anticipated/anticipates).

The issue with rule 13 is that for transcopyright and transclusion reasons, I think Ted assumes this to be all within Xanadu servers/system. Otherwise, there’s CDNs and what not.

Rule 17, the free software movement didn’t exist at the time (on the other hand, all/most/some software was free per default). Online repository hosting like SourceForge/GitHub/GitLab is fairly new, and Ted observed the “Open Source” community and of course didn’t like their lack of direction and design, so what can he do? There’s also a difference between the protocol and his actual software implementation. The protocol being published and open (don’t know if it’s actually published as a spec or in Literary Machines or somewhere else) allows others to build Xanadu-compatibles (besides the trademark and that Ted would have hated it for business reasons); the protocols are probably more out there to encourage adoption/integration with existing third-party software, not to build compliant server alternatives. Also, again, Udanax Green is out there.

Rule 18, don’t forget that Ted is also big on hypermedia. Doug/Ted interplay and differences is an interesting topic for study in itself.

Rule 19, I of course generally agree, but have some questions on how to make that happen practically, in terms of user interaction. Serious work is needed here, there are reasons why Larry Tesler’s modeless interface is the standard today. It all works fine if it is just text and some Engelbartian text-style/-based ViewSpecs, but what we do today with rendering/visualization/layout is a little bit different and more difficult than that. We generally lack the tools, and even Robert Cailliau agrees in the interview that the web of today is seriously lacking those things in comparison to what they had at CERN on NeXTSTEP.

Rule 23, agree, but consider the implications.

For the conclusion, yes, things can be cobbled together, but part of the beauty of NLS and Xanadu is that those systems are integrated in terms of that all the parts/components are designed to play nicely with each other. I don’t have answers how to do this, but a few ideas how it could work.

Questions of transclusion, problems with copyright despite cheap storage, nonsense of PDF, not to discuss all of that here.

Agree that the business and copyright aspects are a big problem for Xanadu. Micropayments is fine, but enforcing them or copyright or transcopyright is in conflict with the nature of digital and immaterial goods. It would be a nice world if we could all agree and make it work, but reality is never that simple.

Building the Missing Pieces: it’s not only to build what’s missing, they need to be built “right”; also the things that are existing might not be “right” enough, and then a user needs to be found for it (not to call it a “market” at all, not to think about a “salable product” at all either).

Canonical Metadata, Ted is also promoting pluralism, so the one true version might not refer to a globally canonical one, but the one and only true version you’re linking to (to not wake up one day and find your link pointing to a different context, meaning, text). Correctness is somewhat difficult, see Encyclopedia Britannica vs. Wikipedia. Some clarity can come from the statements that somebody published at a particular point in time. Legal demands and abuse create issues for that as well.

I of course disagree with RegEx and snapshots.

Streamsharing, then the question is how to merge several conflicting individual variants, if that’s a thing in Mimix/streamsharing.

I don’t think that word processing technology from the late 70s or 80s is available any more, I believe it’s lost and abandoned.

So are you saying Mimix is expensive and limited to big business owners at the moment? 😉 Also, streamsharing sounds like one capability in Engelbart’s jargon, or maybe whole systems could be built with it, but yeah…

Douglas Engelbart, best known as the inventor of the mouse, actually created much of what we know as today’s graphical computer systems. His Mother of All Demos recently celebrated its 50th birthday.

And here’s my reply…

Stephan,

Thank you for the well thought-out reply and for your willingness to take the time to look at what I wrote and answer it honestly. I want to give you the same consideration which is why my reply is so late in coming. I’ve moved my home and business from Key West to Texas recently which has consumed more of my time than I ever thought possible.

Because you were so detailed in your reply, I want to give you the same courtesy and reply to each of your comments individually.

You are absolutely correct that market attention has moved away from text processing applications since word processing became ubiquitous. In my estimation, this is a grievous error. In practice, much of the information we use in daily life is textual. Everything from what’s presented on the evening news to laws that we adopt is, essentially, textual. At first, it was thought that simple word processing applications would be the end-all, be-all in the text domain and take care of everything we need. But in 2018, that’s proved to not be the case. In this era of fake news and rampant manipulation and abuse of public information, it’s more important than ever to be sure that one’s writing is entirely factual and can be backed up with reliable sources.

In reality, it was always this way for serious journalists and academics. One of the primary goals of Mimix is to bring reliable, verifiable information to all levels of writers — from high school students to book authors. The second goal is to do so in a way that preserves privacy and choice and doesn’t lock users into a platform that wants to monetize or control their content. As the founder of The Mimix Company, I feel these ideas are revolutionary and the time has come for software that embraces and enables them.

I’ve heard both that Ted saw NLS and that he didn’t. I’m pretty sure he claimed at one point that he had seen it. We also know that Ted thinks that he invented everything and tends to be very disparaging about any other internet or UI pioneers, so we have to take his retellings with a grain of salt. I mention it in the caption not as a slight to Nelson but rather to give credit where credit is due to Doug.

You’re absolutely right that there is no such thing as network security — or any computer security, really. It was the first thing IBM taught us as mainframe programmers: “There is no computer security, only the appearance of computer security.” For my purposes, what we want is authentic documents and sometimes knowing where they came from is one way to achieve that. But you’re right, Ted’s purpose in rule #1 was probably to support his publishing and copyright business ideas. You’ll find no buzzwords in any of my writing. I mention the blockchain because it does represent an authenticated source for digital data and one that can be accessed and shared repeatedly while still remaining verifiable. I added the word Bitcoin to the sentence to connect the idea of the blockchain to its most familiar implementation. There is no token economy or cryptocurrency aspect to Mimix.

Regarding rule #4, I’m not going to recreate Xanadu or use Ted’s document or navigation structures in any way. I think those were actually awful. What I meant by creating, storing, and searching is that we all do this every day with regular research and writing tools. We don’t need “Xanalogical documents” to get to Xanadu. There are many corporate and academic data silos that could be opened up with a tool like Mimix without expecting the authors to get involved or to rewrite them.

Rule #7 (These are Ted’s rules for what *he* wanted — not my rules.) is something he proposed but didn’t deliver. As I mention later in the text, this idea is ludicrous on the surface. The Declaration of Independence should not be two-way linked with every school child’s paper that’s written about it. Having said that, it would be very helpful to have two-way links between your *own* documents such that you could open a research paper or book and immediately see what you had written about it. That would be tremendously helpful and it’s a key feature of Mimix.
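
To make that concrete, here’s a rough sketch of the kind of index that makes two-way links cheap when everything lives on your own machine. It’s Python and purely illustrative; the names are mine for this example, not actual Mimix code.

    # A rough sketch of a two-way link index over your *own* documents.
    # All identifiers here are hypothetical, not the real Mimix internals.
    from collections import defaultdict

    class BacklinkIndex:
        def __init__(self):
            self.outgoing = defaultdict(set)  # doc -> documents it references
            self.incoming = defaultdict(set)  # doc -> documents that reference it

        def add_reference(self, from_doc, to_doc):
            self.outgoing[from_doc].add(to_doc)
            self.incoming[to_doc].add(from_doc)

        def written_about(self, doc):
            """Open a source and immediately see your own notes on it."""
            return sorted(self.incoming[doc])

    index = BacklinkIndex()
    index.add_reference("notes/declaration-essay", "sources/declaration-of-independence")
    print(index.written_about("sources/declaration-of-independence"))
    # -> ['notes/declaration-essay']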

On rule #8, indeed. It is up to each writer to decide how much he will steal or borrow from others and how much credit he’ll give them in return. This is not something we’re going to take up with Mimix. However, we do see a strong use case for Mimix inside organizations where their own proprietary data needs to be accessible to a wide but restricted audience. Mimix makes that data available to all of the people inside the organization who should have it, so that when they write and publish facts about their business, they’re drawing from the official, canonical facts.

Rule #10 definitely does not exist and URLs aren’t going to cut it. I’m talking about your individual, everyday writing. Who names the scanned email you just received? Who picks a name for the notes you just wrote? Who comes up with different titles for the 27 versions of your novel as you progress through writing it? No one. At best, today’s software gives files stupid names like Untitled Document 3 or Backup. In other words, the computer is of no help in organizing your own writing and your own documents — even though it’s omnipresent during that writing. This is just absurd. Microsoft Word starts out with a blank sheet of paper and Untitled Document metaphor because it’s trying to emulate a typewriter. We can do much better.
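
One way the computer could help, sketched very roughly: derive a stable identifier from the content itself plus a timestamp, instead of asking you to invent a filename. This is Python and purely hypothetical, not how Mimix will actually name things.

    # A minimal sketch: identify a document by a timestamp plus a hash of its
    # own content, so nothing has to be called "Untitled Document 3".
    # Purely illustrative; not Mimix's real identification scheme.
    import hashlib
    from datetime import datetime, timezone

    def auto_identify(text: str) -> str:
        digest = hashlib.sha256(text.encode("utf-8")).hexdigest()[:12]
        stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%M%SZ")
        return f"{stamp}-{digest}"

    print(auto_identify("Chapter one, draft 27 of the novel..."))
    # e.g. 20181211T153000Z-55f2a6c90b1d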

Rule #13 is really about housekeeping. Ted was worried that everyone trying to access the same copy of a Xanalogical document would crash the server. In Mimix, you have your own complete copies of all the documents you reference. They’re all on your own machine or on a network device you control. So instead of worrying about a server crashing we are looking at this from a housekeeping standpoint of making sure that, if the canonical copies of your documents change for some reason, you can get the new versions easily. A corporation’s employee handbook is a good use case. You can download or look up a copy when you’re first hired, but nothing tells you when you refer to it that the copy you have is outdated and a new one has replaced it. Mimix can address this problem.
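
The mechanics of that check don’t have to be fancy. Here’s a toy sketch in Python: compare a fingerprint of your local copy against the canonical one and flag the difference. The helper names are mine; this isn’t the actual Mimix plumbing.

    # A toy freshness check: keep your own copy, but notice when the canonical
    # version has moved on. Hypothetical helper, not actual Mimix code.
    import hashlib

    def fingerprint(content: bytes) -> str:
        return hashlib.sha256(content).hexdigest()

    def check_freshness(local_copy: bytes, canonical_copy: bytes) -> str:
        if fingerprint(local_copy) == fingerprint(canonical_copy):
            return "up to date"
        return "outdated: a newer canonical version exists"

    handbook_you_saved = b"Employee handbook, revision 2017..."
    handbook_published = b"Employee handbook, revision 2018..."
    print(check_freshness(handbook_you_saved, handbook_published))
    # -> outdated: a newer canonical version exists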

With regards to free software and rule #17, the entire computer industry started with free software. At the beginning, all software was free and provided by the computer manufacturers for two reasons: First, it sold hardware. And second, the company’s staff were often the best people to write such software as knowledge of this new domain was naturally limited. IBM only started charging for software later in the game. With the advent of personal computing, the cycle repeated and free software was first. Some of it came from manufacturers and much of it was posted in magazines, on bulletin boards, and shared at floppy swap meets. So I’d argue that we’ve had free software all along. One thing surely isn’t in doubt and that’s that free and open source software is the future of the entire industry.

On number #18, I think Ted was just shortsighted. He’d worked with paper documents that contained multiple and unique views for data. His own books Computer Lib/Dream Machines are extreme examples of text and graphics formatted together in unusual ways. And despite literally writing the book on this method, he never mentioned that Xanadu would have any of those capabilities. His favorite demo of Xanadu is parallel reading of two passages of text with interconnecting lines. That’s it. It’s not a very advanced view or a very nice way to read and write and we won’t be adopting that UI for Mimix, lol!

You’re so right about #19. The profusion of textual hypermedia has made things worse for formatting, not better. SGML was invented at IBM and is the precursor to both HTML and XML. IBM had better control over document formatting 30 years ago than we have with Google Docs today. We’ve gone backwards. It will be very hard work to fix this but it’s something I believe in strongly. The foundation of Mimix is a domain-specific language called MSL, Mimix Stream Language. In MSL, data is represented atomically in a write-only storage system. Formatting and edits are kept separate from the existing text. The task of actually creating a visual layout, then, becomes one of executing MSL on the viewer’s device. Mimix will initially ship with a rendering engine and an HTML-based user interface. Since the product is open source and the MSL language is simple and will be fully documented, we expect many people to create alternative interfaces and to continue to improve them.
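
If it helps to picture the write-only idea, here’s a toy version in Python: every atom, every edit, and every formatting choice is appended as a new record, nothing is overwritten, and “rendering” is just a replay of the stream on the viewer’s device. The record shapes are invented for this sketch and are not the MSL spec.

    # A toy model of a write-only store: records are only ever appended, and the
    # layout is produced by replaying them. Invented record shapes, not real MSL.
    stream = []

    def append(kind, **fields):
        stream.append({"kind": kind, **fields})

    append("atom", id="a1", text="Call me Ishmael.")
    append("atom", id="a2", text="Some years ago...")
    append("format", target="a1", style="italic")  # formatting lives apart from the text
    append("edit", target="a2", text="Some years ago -- never mind how long precisely...")

    def render(records):
        text, style = {}, {}
        for record in records:  # replay, in order
            if record["kind"] == "atom":
                text[record["id"]] = record["text"]
            elif record["kind"] == "edit":
                text[record["target"]] = record["text"]  # latest edit wins; the original stays in the log
            elif record["kind"] == "format":
                style[record["target"]] = record["style"]
        return [(text[i], style.get(i, "plain")) for i in text]

    print(render(stream))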

On privacy and rule #23, it’s sadly ironic that this entire industry rests on the accomplishments of a cryptographer and data protection was the original job #1, but today few people really have data security. It’s up to software developers to build systems that benefit the user and not just the platform provider. With Mimix, it’s easy to share your data with anyone else that you choose, but impossible for anyone but you to get to it otherwise. Even The Mimix Company will not be able to access any of the documents anyone views or creates. This is by design as we are in the open source software business and not the spying or advertising business.

Cobbling together applications today is de rigueur. The industry prefers the term “code reuse” but it’s all about moving software higher up the food chain and abstracting away details, even for programmers. Every piece of software we use today is built from some aggregate of other people’s frameworks, and Mimix will be, too. It’s better for us to focus on the secret sauce that only we can deliver than to spend time recreating things that already work. This is not to say Mimix is a DIY toolkit. Quite the opposite. The necessity to DIY everything in the open source world is one of its greatest failings. To the user, Mimix offers a fully integrated environment with a single multi-platform installable. The application is fully supported and documented by us. But, under the hood, there are many standardized pieces as well as custom code.

There was never a market for Xanadu as conceived by Nelson and this has been proven by history. Mimix, however, is not a product in search of a market. It’s a solution to a problem that I personally have, and so do millions of other people. Whether they write creatively or professionally, most people still rely on written communications. Some of them are content creators and even more are content consumers. Mimix is a superior tool for all of them. Mimix is also entirely free to use and to remix without restriction, something Nelson would never have agreed to for Xanadu. (Where’s the “lib” in his Computer Lib?) So we have no intention of selling the product; we’re giving it away. Because it meets a widespread need and has zero cost barriers to entry, we think Mimix will be rapidly adopted by many different kinds of users.

You are right that pluralism and document versioning are imperative. But instead of hiding your source materials behind your writing, Mimix makes it clear which version of reference materials you used. The writer is free to choose the sources of canon that he or she likes, but those choices are visible to the readers, too. The readers can also apply different canonical sources to the original author’s text and immediately see how the “facts” of the document are changed. My favorite example is Who shot JFK? Which sources would you choose as canon?

What part of my RegEx comment do you disagree with, and why? I’m truly lost here. It’s a horrible parsing language and I hate it. It does, however, work. I mention RegEx in contrast to Nelson’s tumbler system which was entirely unworkable. I’d much rather write RegEx to find some part of a document stream than to try and wrest it out of Ted’s tumblers.

I’m glad you asked about versioning canon in streamsharing. It’s not only a thing, it’s foundational. A reader *must* be able to see the sources the author chose in parallel with the author’s own writing and with this new reader’s own writing, as well. Anyone doing serious research will undoubtedly want to have multiple sets of canonical documents available at once. One of the awesome things about the Mimix UI is that it will make these differences easy to see and adjust as you work on your own writing. No other type of product has ever been released with this kind of capability. It’s why our slogan is Think Better.

I would say that word processing technology from the 70’s and 80’s is really *all* we have. The last great improvement in word processing was spell checking and that’s from the 80’s. Microsoft Word hasn’t changed *what* it can do, only how it does it. Today’s word processing, blogging, and research tools don’t actually add very much smarts to your writing. They’re kind of dumb. They don’t know what you’re writing about or where you got your facts. They don’t help the next person, the reader, in any way — even though nearly all writing is meant to be read by others. I would say that the word processing industry hasn’t advanced at all since it first came online. And that’s sad.

If you have a campus of 10,000 students, Mimix makes terrific sense as a way for teachers and students to collaborate. If you have a company with 30,000 desktop PCs, there are definitely some documents you’d like everyone to have *and to use correctly* in their own work. So these enterprise users, the earliest users of word processors, will also be the biggest early adopters of Mimix, we think.

Yes, streamsharing is definitely inspired by Engelbart’s demo, although what he showed was more like live collaboration on a shared document. Streamsharing is different in that each reader has his or her *own* copy of the original document, along with the reader’s notes, original writings, and other reference materials. With Doug’s demo, as with Google Docs, everyone is editing the same file. With Mimix, you’re editing your own file.

Thank you again, Stephan, for your well considered comments. I look forward to further dialog with you. When we’re ready for a closed beta, I’d love to have you be part of it. I’d also like to publish this thread in my blog so it can spark further participation from the community.

askSam was an early research and note-taking tool I used while working at Borland in the 90’s. It was one of the first programs to offer a fully searchable “free text” database, allowing you to retrieve information without formatting it into fields.

And Stephan’s further reply to me…

You are absolutely correct that market attention has moved away from text processing […] much of the information we use in daily life is textual […] fake news and rampant manipulation and abuse of public information, it’s more important than ever to be sure that one’s writing is entirely factual and can be backed up with reliable sources.

Text has some very beneficial features. I don’t think that a world entirely relying on audio recordings and commands (probably more like utterances/noises and less spoken language that tries to be compatible with text) is impossible, but it’s not here yet. There are many great, new approaches to address the solving of complex, urgent problems, but without good tools for text, which should be much easier to build than solving complex problems, I’m a little pessimistic that we’ll be too successful with that, if we can’t or don’t want to address the simple things as well and cripple ourselves in that respect.

I don’t think that fake news is a category of particular usefulness. As if there’s true news, as newspaper publishers want to convince us. Journalism, especially good, quality journalism, is in a crisis, but that too has a lot to do with what and how people want to read, and the tech of course. Good journalists have to decide what to report on and what not, what narratives and orders to construct, who can and is going to check the sources anyway, and they’re biased just as well. The reader generally lacks sufficient media competency, that’s not taught at school, wonder who’s responsible for that. The social processes encountered around it, if people fall for it or are operating with media on that intellectual level, the problem isn’t that fake news exist and that some can’t detect it, it lies elsewhere I suspect. Don’t have really an answer/solution for it, would be some work to look into it.

> > […] One of the primary goals of Mimix is to bring reliable, verifiable information to all levels of writers […]

That’s great, we can never have enough or too much of that! Not going into a lengthy meta-conversation here, I would assume you know already about the many questions and difficulties associated with such a goal.

> > I’ve heard both that Ted saw NLS and that he didn’t. […] so we have to take his retellings with a grain of salt.

Sure, I don’t disagree here in regard to historic accuracy, but towards if it is relevant. Ted certainly got some early ideas about things appearing on screens from the movie-making of his parents, Doug from the radar screens. Oscilloscopes and TVs were around for some time. It surely is genius of both Doug and Ted to arrive at the idea that a computer could generate these images, but I wonder if it matters that much what was first by a few years or so, as the general idea has been with some a little longer. One major difference is that Doug managed to actually build it technically and projectwise/organizational/funding (but so did the radar equipment manufacturers, just with analogue electronics I assume) and others like Ted didn’t have other options than doing mockups.

I’m all for crediting Doug, one could say that he produced a proof-of-concept that made it a lot easier for others to accept the general idea, as the 1968 demo famously inspired most of modern approaches to computing. Ted certainly wasn’t able to do that with his mockups and if he were, one has to ask how the differences to Doug would have produced a rather different outcome.

> > […] I mention the blockchain because it does represent an authenticated source for digital data and one that can be accessed and shared repeatedly while still remaining verifiable. I added the word Bitcoin to the sentence to connect the idea of the blockchain to its most familiar implementation. […]

Makes sense, just want to find out what you’re proposing, building or interested in, in terms of technical substance.

> > Regarding rule #4, […] is that we all do this everyday with regular research and writing tools. […] There are many corporate and academic data silos that could be opened up with a tool like Mimix without expecting the authors to get involved or to rewrite them.

Makes sense too, it’s some kind of “rescue” operation to make the body of material we already have actually useful. I’m on that with digitalization and proofreading attempts + semantic annotation for rescuing stuff that’s pre-digital, the born-digital material needs similar rescuing (or could be called “augmentation”) as well. I’m just more recently in the camp that worries about new stuff we write digitally today, is it going to be more material that will need rescuing later for lack of proper writing tools and methods to organize publishing (for example, avoid/prevent silos to begin with), is the pile just getting bigger, or can we do something so that Mimix isn’t needed for new stuff, only for our legacy collections? Might be contrary to your business/project, I understand, but in my mind, both are needed and could/should work together beautifully to bring about a better digital future for all of us.

You must already know (about) Frode Hegland, he’s into this sort of thing. He’s more concerned about “documents” (academic ones), I look more at text tools that are agnostic to their context of use and can be adjusted/incorporated, but it’s more along the lines of universal cultural techniques or general, common capabilities a knowledge worker or basically everyone could use and benefit from, no matter what the task at hand is and no matter if no specialized, specifically tailored application exists for it, but from generic building blocks you could get close to it even without a dedicated, specifically designed solution, that’s the plan and difference at least.

> > Rule #7 (These are Ted’s rule for what he wanted — not my rules.) […] The Declaration of Independence should not be two-way linked with every school child’s paper that’s written about it. […]

Sorry, was already worrying that my “Erm…how?” would not be sufficient. Sounds like in your mind, we can’t get to Xanadu for this reason (potentially among other reasons). I don’t think it’s impossible, if links are published as independent resources that don’t require the cooperation of the original author or server/provider of the base text, so that they can be found/crawled to compile a list of the incoming links. In addition to that, clearly something organizational needs to be in place as well, maybe what the “federation” movement is proposing, see Ward Cunningham’s Federated Wiki for example. People could organize the links, and you could filter if you want to only see those approved by the author of the piece, or those recommended to you by friends/colleagues, what other readers voted up, or any metric or combination of those. Sure, it’s much more work, but it could also provide invaluable benefits.

If you were proposing that we should get to Xanadu, my “how” simply asks about how you would go about it technically, the “erm…” was supposed to indicate that some serious consideration is needed to come up with a concept that could work. It’s not something we can easily demand/build (organizational, methodological), that’s for sure.

> > Rule #10 […] I’m talking about your individual, everyday writing. […]

Oh, yeah, sure, I see. Doesn’t need to be globally unique until the act of publication, and I too have some kind of infrastructure in mind that would handle/organize the many resources, connected with a bootstrapping of semantics, similar to the OSI reference model (just for applications and the system to “magically” know what’s the right thing to do).

> > Rule #13 […] In Mimix, you have your own complete copies of all the documents you reference. […] but nothing tells you when you refer to it that the copy you have is outdated and a new one has replaced it.

Totally agree, I’m big on local copies as well for reasons as offline use, avoiding 404, etc. I also would want to be able to legally publish them again, so if the official source vanishes, I need to have permission to distribute it again, to others in my circles if they want to look at what I was working on or publish my thing (and then what is it worth if the sources have vanished), and to offer that to the entire world again as a posterity feature to prevent loss.

For the indication of updates, I found some earlier work on that, for academic PDFs. I could look it up again if it’s of any relevance/interest.

> > […] So I’d argue that we’ve had free software all along. One thing surely isn’t in doubt and that’s that free and open source software is the future of the entire industry.

It’s indeed quite difficult and questionable if copyright should apply to software; only with a lot of work and persuasion were some parts of the industry (and publishers as well, for their somewhat different mechanics) able to establish and maintain the notion.

> > On number #18, […] And despite literally writing the book on this method, he never mentioned that Xanadu would have any of those capabilities. His favorite demo of Xanadu is parallel reading of two passages of text with interconnecting lines. That’s it. It’s not a very advanced view or a very nice way to read and write and we won’t be adopting that UI for Mimix, lol!

Very true, that’s probably because of his artistic background and ambitions in contrast to a technical engineer (which is what he and others lament, the techie perspective on things), trying to create pieces of art, which can then be passively consumed as experience, but I care more about things that can do things. Especially the hypertext pioneers (including the modern ones) writing and publishing books in the traditional way totally annoys me.

The demo(s) for parallel view, you know, I have some background on that (diffs in software development, comparison of the development of texts, philology) and when looking into his demo, it only works this beautifully because it’s carefully constructed/curated. Not that more messy comparisons invalidate his demands, I wouldn’t dismiss any of it and am quite glad that he made and continues to promote them, for the reason that I don’t have to come up with such stuff myself and have something out there I can point to, not being alone or having to do the advertising for the vision, which would be horrible if I had to do it. Regardless of the alleged primitiveness or issues with making it real, I think there are important insights to derive from what Ted does.

That also goes for hypermedia, he made some early contributions to raytracing (but so did van Dam for computer graphics), filmed, does his audio recording all the time and surely thought/thinks about how the computer can help with that. There’s just a limit to how much one can do at the same time and with the necessary depth to make it substantial, and without the technical ability/interest he’s somewhat limited in terms of what his demos can become in comparison to those of others, but his intuitions and designs are always worth a look, and be it precisely because they offer different perspectives and challenges to the status quo we otherwise would all too soon and willingly be satisfied with.

> > […] MSL, Mimix Stream Language. […] Since the product is open source and the MSL language is simple […]

Do you already have something out, or did I just miss it? Not that there needs to be, “open source” can also come to the user/me when buying the product, and of course it could be too soon, not ready yet or something.

> > […] not the spying or advertising business.

The bad part might be that law enforcement might make demands, and if there’s no target in general libre-licensed technology and widespread use, they can’t do much about it, but companies of course are registered and operating in a legal jurisdiction, and that’s how law enforcement can demand the compromising of user privacy/confidentiality. Also, if users use the system for peer-to-peer encrypted sharing of data, there could be demands to delete/prevent certain instances of it or liability for “facilitating” it, mainly because politicians don’t care or know how the tech works, and there can be social/societal demands in whatever constellation of what needs to be enforced or is acceptable or harms image, etc.

> > […] The necessity to DIY everything in the open source world is one of its greatest failings. […]

What can you do if those who have solved it already don’t want you to use it, or try to restrict some of the people or groups, so the option is to either not care as long as you’re not affected and leave the other poor guys to their fate, or solve it once and for all for everybody.

I meant “integrated” in terms of a coherent design/interface, or components optimized/adjusted to work with each other dedicatedly, in contrast to the friction/unnecessary inefficiencies of non-integrated systems that were glued together and only sort-of, barely or good enough work. Not that it must be a particular approach in my mind, but I guess we prefer things that were specially, dedicatedly made to help with the task, on the other hand standardization/generalization is a nice effect as well.

> > (Where’s the “lib” in his Computer Lib?) So we have no intention of selling the product;

Heh, that’s a historical, nostalgic, now conserved notion of liberation, liberation from the big companies who control access to the computers of the size of a room, and the modernized translation by Ted is that it is now Facebook and the web that restrict what people can do.

> > What part of my RegEx comment do you disagree with, and why? […]

Not of strong importance, but I in general don’t like the use of cryptic symbols with special meaning (also in programming languages, overloading special characters is just a bad idea when there are names and keywords for constructs anyway). It’s great if you know it well and then it’s much shorter, but usually we use words to express/hint what the thing might do. Who knows if the people in the caves a few thousand years ago were basically doing some RegEx on the wall 😉

> > […] One of the awesome things about the Mimix UI is that it will make these differences easy to see and adjust as you work on your own writing. […] It’s why our slogan is Think Better.

Do you have a plan or implemented already how to make that visible (render/visualize it), and how a user would work with it interaction-wise? Apologies if I didn’t read the other articles on the blog yet, it might be explained there somewhere. Will look into it.

Anybody can think different 😉

> > I would say that word processing technology from the 70’s and 80’s is really all we have. The last great improvement in word processing was spell checking and that’s from the 80’s. Microsoft Word hasn’t changed what it can do, only how it does it. Today’s word processing, blogging, and research tools don’t actually add very much smarts to your writing. They’re kind of dumb. They don’t know what you’re writing about or where you got your facts. They don’t help the next person, the reader, in any way — even though nearly all writing is meant to be read by others. I would say that the word processing industry hasn’t advanced at all since it first came online. And that’s sad.

Yes and no. In some aspects yes, Word and the paper simulation and all of that is of course stuck on what was pioneered as the “modern wordprocessor” as we know that category of software today. I’m, personally, not referring to smarter features either, of the tool to know what the piece is about (towards AI/reasoning/targeting) or text/language-based advances of that era (dictionaries, recommendations, what they were doing for lack of graphical interfaces), but in innovations on how to operate/process (“process” in the old meaning of the term, analogous to how they “processed” data) text. For example, there are no good XML/annotation editors that would help to structure text. We have big and powerful and expensive XML tools, but they are in no way flexible enough to easily design special-purpose interfaces that would allow markup/annotation for very specific tasks, it’s difficult to visualize the annotations that are there already, one can easily mess them up and all of that. Or old word processors had cursor markings that would allow operations to be enacted on a span of text, currently I have to select/drag such spans with the mouse, always being at risk of accidentally losing my selection, with difficulty to correct it if I don’t get it perfect the first time, in an interface that confuses/mixes many different matters of concern such as typesetting/layout, so things break if I move or split or change any of the underlying text by such methods. And then there are potential innovations around those things that would be entirely new.
Some of the dedicated word processor machines (potentially analogue) and early hypertext systems had some of it, so there was at least the option or general notion, but today, all of that was lost to the mainstream paradigm, and now we have a lot of people creating new material or messing with existing material with improper tools, that are a pain to use and create a lot of unnecessary extra work. Not that the earlier times were perfect, but I hope you get the idea.

> > […] Streamsharing is different in that each reader has his or her own copy of the original document, […]

Great, great, great, I’m in that camp as well!

> > Thank you again, Stephan, for your well considered comments. […] I’d also like to publish this thread in my blog so it can spark further participation from the community.

Thank you for replying! I generally tend to look into things, as long as somebody cares, I also engage in whatever can be reasonably done. If you intend to publish this conversation (if I understand correctly), I would like to curate it so it becomes more convenient, maybe an example/experimentation about the things we’re talking about (more hypertexty), maybe to develop tools that help with such activities, but maybe Mimix is already there and you can use the text/conversation to apply Mimix to it. And there’s the question of licensing of course, which I would like to be libre-free (for people to create their derivations, translations, print it, sell it, etc.), where another question would be what that means for opinion pieces (prevent derivations to prevent misrepresentation or some of these concerns one might have), per default I usually would pick Creative Commons BY-SA 4.0 despite its problems for such kinds of works. Do you have an aversion against the legal stuff and don’t want to bother, or is it an opportunity to practice and try a few things, for example, by libre-licensing me permitting you to do anything you want with my part, as long as the result remains libre-free?

Where it all started. Vannevar Bush published an article titled As We May Think in 1945. As 2019 approaches, we still can’t do what he proposed. The Mimix Company takes its name and inspiration from Bush’s hypothetical machine, called the Memex.

Some Thoughts…

My takeaway from this is that we’re lucky to have such passionate and well-informed people taking an interest in our product at this early stage. In open source, community is everything. Second, and I’ve seen this from several people’s comments, it seems that I’ve left the impression that Mimix is actually recreating Xanadu or using any of its models. Let me correct that emphatically. Mimix is inspired by computing pioneers, including Ted Nelson, but we are not using any of his methods, either for internals or for the user interface. What we are taking from Nelson is the inspiration and commitment that the world needs better tools for research and writing. We like to think that Mimix offers many of the most desirable qualities of systems like Vannevar Bush’s Memex, Doug Engelbart’s NLS, and Ted Nelson’s Xanadu, but we do it in a modern way with up-to-date technology and tools that fit the way people read and write today.

I’ve also noticed while trying to piece this post together how horrible the whole process is. The formatting of this “final” document is just awful and would require a lot of work to get into something better looking. It required a lot of work to get to this stage! Stephan laments this, too, when he mentions curating the content and adding hypertext links.

I do want to address a couple of other areas he mentioned while I’m here in a public forum:

Text contains information that audio does not, most importantly a visual component. While most people “read aloud” to themselves and translate text into speech when they read or write, there is the additional benefit that a body of text takes on an organization and a visual layout. How often do we remember that something we read or saw was “about this far” into a story or “at the bottom of the right page?” All the time. This visual and chronological referencing ability of text gives it lasting value, even in the Alexa age, for audio information cannot be so easily indexed. Stephan is right (along with Vannevar Bush) that we need better tools for text if we are to solve complex problems using it.

Fake news is what we used to call bad journalism. Specifically, it’s bad short-form journalism because fake news makes it into long form less often. Nonetheless, even with book-length works, fake news or poor information sourcing and lack of transparency impoverishes the quality of the written word we consume today.

While there are certainly some “smarts” to the internal operation of Mimix, this particular aspect of veracity doesn’t require any rocket science or AI to achieve. An author simply chooses his or her sources, which are revealed to and usable by the reader. The example I like is The Louvre. If you were writing about paintings in their collection, their data could be assumed to be correct. Even if it isn’t or an outside source disagrees, at the least both the reader and writer have some clarity about where the facts came from. One of the smart things Mimix does is tie together the references from the Louvre when you, the author, choose to refer to them in your writing. Then we share those same references with your readers. Both the research/writing and reading/research sessions happen in the same environment, and this is what makes them so valuable.

Any researcher worth his or her salt will have to consult multiple and sometimes conflicting sources. In writing about the nuclear industry, I’ve found it very useful to compare the industry’s explanation of the facts (of an accident, for example) with those of outside journalists or individual writers in contra. Often they report the same facts with different numbers! How valuable it would be to me, in my own writing, to see where I got each (or all) of my numbers and to be able to change those references while I’m writing and have the system keep track. Using the same system, the reader can choose his or her own references for the same facts, which would affect that person’s reading of the document. This is a new kind of intellectual interactivity and “collaboration” that takes place individually while creating your own materials.
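
A tiny sketch of what that swap looks like in practice: the same fact reference resolved against two different source sets, so writer and reader each see exactly where a number came from. It’s Python, and the sources and figures below are placeholders, not real data.

    # Swapping canon: one fact key, two source sets. The figures are
    # placeholders for illustration, not actual published numbers.
    canon_industry = {"release_estimate_tbq": 370}
    canon_independent = {"release_estimate_tbq": 900}

    def resolve(fact_key, canon):
        return canon.get(fact_key, "no source chosen")

    sentence = "Estimated release: {} TBq"
    print("Writer's canon:", sentence.format(resolve("release_estimate_tbq", canon_industry)))
    print("Reader's canon:", sentence.format(resolve("release_estimate_tbq", canon_independent)))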

With regards to the development of graphics technology, neither Ted nor Doug invented those things but they did envision using them in novel ways. Ted wrote about his fascination with digital graphics in his books. Doug, as you say, had access to SRI’s military graphics tools. Both men were already working in the digital domain and well past analog computers, but their digital systems had slow processors, small memories, and limited storage. Hats off to Doug for trying to turn these into a UI which inspired the Xerox Alto (where many of his team later went) and later became the Mac and Windows graphical user interfaces known today. That stuff is directly traceable to Doug. I give more credit to Ted, though, for thinking about how the information itself would need to be organized and accessed. He’s into documents and their related meanings. Together, both men (and both inspired by Vannevar Bush) gave me the impetus for what Mimix is becoming today.

Stephan mentions the ability to “rescue” old documents from their paper silos and proprietary systems, and this is absolutely a goal of Mimix. We can’t expect all the world to change their writing formats, especially of existing data, and The Mimix Company will offer services around this need. We do, however, hope to inspire writers of new content and new projects to consider starting with Mimix so that data doesn’t need rescuing later.

Regarding links… this has become a topic of confusion because of Ted’s use of the terminology. Mimix does not have links. Mimix has only atoms, views, and streams — which are sequences of atoms and views. Think of a machine that, instead of compiling what you do, simply records it, à la Bush’s Memex. There are a million tools for compiling a stream of text into something else — anything from the neighborhood map of errands that Engelbart shows in his demo (I love that he has to stop by the library on the way home.) to a complete biography of everyone mentioned in a book. Rather than linking to text, we simply include it in the stream. Some of the text we read or write may be considered canonical, and this is connected both ways with the text that refers to it. We don’t call these links because they are interchangeable with other canon, all of which is recorded in the stream!
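
For readers who want something more concrete than words, here’s a toy model in Python of atoms, views, and streams, with canon recorded in the stream rather than pointed at. The field names are my guesses for illustration, not the actual MSL data model.

    # Atoms carry text (and, optionally, which canon they draw from), views group
    # atoms, and a stream is just the recorded sequence of both.
    from dataclasses import dataclass, field
    from typing import List, Union

    @dataclass
    class Atom:
        text: str
        canon: str = ""  # which source set this text is drawn from, if any

    @dataclass
    class View:
        title: str
        atoms: List[Atom] = field(default_factory=list)

    Stream = List[Union[Atom, View]]

    stream: Stream = [
        Atom("Consider a future device for individual use...", canon="Bush 1945"),
        Atom("My own notes on the Memex."),
        View("Parallel reading", atoms=[Atom("source passage"), Atom("my commentary")]),
    ]

    # Swapping canon is a substitution over the recorded stream, not a broken link.
    for item in stream:
        if isinstance(item, Atom) and item.canon == "Bush 1945":
            item.canon = "Bush 1945 (reprint)"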

On patents… Yes, I’ve crossed over. After writing two myself, seeing Google and other large companies use my work (at least some of them cite me, ha!), and not earning $1 from my most important patent, I’m skipping that bullshit with Mimix. Stephan points out that software publishers use “different mechanics” to make patents work. Is that ever true! The different mechanics being lawsuits and patent sharing agreements. Together, these interlocking ideas ensure that only the largest companies with the biggest legal war chests can play the patent game.

We are not yet ready to show our developments in MSL or in our viewer/editor code, but I promise there will be a GitHub when we are. I’m excited to get feedback on the MSL language because it’s the foundation of everything that we’re doing.

Stephan brings up an important point about privacy and that’s fundamental to our design goals. As he points out, free and open source software can be run by anyone on his or her own hardware, so The Mimix Company isn’t involved in who uses the software or what they do with it. Even more important, the Mimix software will have baked-in encryption which even we cannot open. The only person who has access to what you read or write with Mimix is you and anyone you allow. While this places additional custodial oversight on customers (we can’t help you if you lose your keys), the benefits of not being able to help others who might want your data outweigh the extra efforts needed to protect your keys.
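
In practice that looks like ordinary client-side encryption: the key is generated and kept on your machine, and ciphertext is all anyone else could ever hold. Here’s a minimal sketch using the well-known Python cryptography package; it illustrates the principle only and is not the encryption stack Mimix will actually ship.

    # Client-side encryption in miniature: the key never leaves the user.
    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()             # stays with you; lose it, lose the data
    cipher = Fernet(key)

    ciphertext = cipher.encrypt(b"my private research notes")
    print(ciphertext)                       # all a server or company could ever see
    print(cipher.decrypt(ciphertext))       # only the key holder can read it back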

Mimix provides an integrated environment in two ways: First, our software will fully comply with all our documented standards for the MSL language (obviously). It will read and write valid MSL code and not use undocumented or proprietary file formats. This makes it easy to use our tools for any work you might do with MSL in any project, even using a different editor or viewer. You can still use the tools we provide without breaking anything. Second, the Mimix editing and viewing environment is fully self-contained and runs identically on Linux, Mac, and Windows. All aspects of reading, writing, organizing, backing up, etc. take place within our custom interface, which is specifically designed for correlating large and complex sets of data.

The Mimix UI will draw from much of my previous work in user interface design, semantic data, and domain-specific languages. It will be the first application of what I call Nine Eyepoints Design, a technique which builds interfaces around the spatial locations that we use in our heads (above, below, etc.) when organizing everyday stuff. I’m looking forward to talking more about Nine Eyepoints Design in a future blog post.

As always, my inbox is open to you, too, dear reader. Write me at david@mimix.io and let’s continue the conversation!

— D