expecting unexpected ideas
If you’re reading this via RSS, you might be interested in the RSS feed I recently created for project write-ups, complementing the original article feed. Adding it to your aggregator would mean having new write-ups like the decontextualizer’s show up among the articles you read.
Some discussions I’ve had this past week reminded me of a certain challenge faced by personal knowledge management solutions, in the general sense of Zettelkasten, digital gardens, and second brains, rather than Obsidian, Roam, and Logseq specifically. I should mention, however, that not many people I know see this as a problem, so it might only be a pet peeve of sorts. The issue I’m referring to is that many PKM solutions make it easy to find ideas you’re specifically looking for, but make little progress in helping you find ideas you didn’t even know you were looking for. They make it easy to index notes so as to find them again later, to act as a librarian or cartographer of knowledge, but they don’t help as much as I’d like in finding truly unexpected solutions and connections. The nuance is pretty subtle, so it might not make complete sense yet. The rest of this article explains in more depth what this contrast feels like, why I think open-ended search is a valuable affordance, and how we can bridge the gap.
Let’s start with linear paper-based notes as a more primitive PKM solution, to make the later contrasts more obvious. If you want to find a specific note based on something you’re thinking about, you first have to (1) remember there’s a note touching on that, and (2) remember where it is in your notebook. Your main other way of finding ideas is to open the notebook at an arbitrary place, of which reading it through from the start is a special case.
In building towards modern PKMs, there are two directions we can move in. We can either make the step from analog to digital, or from linear to non-linear. Let’s first move to digital and then we’ll also go the other route.
If we move to linear digital notes, then besides looking for notes in specific places or at random, you also get search. Full-text search gives you the useful ability to retrieve notes based on a tiny fragment which you’re thinking about. If you want to get to a note on the concept of value iteration, you simply search for that and find items which contain it. Rigorous tagging and consistent phrasing usually help. Essentially, the full-text affordance enables content-based addressing, complementing the previously available location-based addressing. You don’t need to know exactly where to find a note, what its file address, URL, or page number is, because you can roughly rely on its contents.
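As a minimal sketch of this shift to content-based addressing, here is what full-text retrieval amounts to: matching a remembered fragment against note contents, and only then recovering the note’s location. The notes and addresses below are made up for the example.

```python
# A minimal sketch of full-text search as content-based addressing: notes are
# retrieved by a fragment of their contents rather than by their location.
# The note texts and "addresses" here are invented for illustration.
notes = {
    "notebook-3/page-12": "Value iteration repeatedly applies the Bellman update.",
    "notebook-1/page-40": "Digital gardens favor slow, non-linear growth.",
}

def full_text_search(fragment):
    # Case-insensitive substring match over note contents; the note's
    # location is returned as a result, never required up front.
    return [addr for addr, text in notes.items() if fragment.lower() in text.lower()]

print(full_text_search("value iteration"))  # → ['notebook-3/page-12']
```

The point of the sketch is that the query carries no positional information at all; the mapping from content back to location is entirely the system’s job.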
Now, if we step back and move from linear to non-linear notes while keeping the setup analog, we get to Niklas Luhmann’s iconic Zettelkasten. What many argue is the defining characteristic of his PKM setup is the fact that ideas could be connected to each other in more than one way, compared to the single “chronological” connections offered by linear notes. He was hyperlinking documents on paper before the internet was a thing. This representation of knowledge allowed him to indefinitely persist the connections which made up his trains of thought at different times, his different lines of reasoning, reminiscent of the trails envisioned in Vannevar Bush’s Memex.
But I don’t think this is where the value of non-linear notes lies at all. I think they’re brilliant not because they allow you to replay a set of trains of thought, but because they allow you to systematically change tracks, to remix those lines of reasoning by shuffling their constituent steps, to make it possible to get from one idea to another through new routes. It’s an artificial way of moving from one human thought to another, based on following links saved at different times in the past, links which might or might not match your current trajectory across the space of ideas. Integrating such artificial components into their thought process helps Zettelkasten practitioners break out of their frame of mind, tapping into previously inaccessible perspectives. Luhmann wasn’t that interested in finding ideas he was explicitly looking for in the moment, like a librarian might, but interested precisely in finding ideas he was not naturally expecting. Probably due to our human tendency to attribute agency to unpredictable systems, one stretching back to deities of weather, Luhmann famously conceived of his Zettelkasten as a conversation partner. The simple trick of remixing his own lines of reasoning led to an entity he perceived as having a thought process of its own, different from his – extending his.
If we connect the digital and non-linear dots, we get modern-day PKM solutions. Even if not all of them explicitly aim to marry Luhmann’s Zettelkasten with the digital affordances of search and online sharing, I feel it’s a significant part of the vision of many thoughtware engineers today. Hyperlinks evolved into transcluded blocks being rendered into different trains of thought in sync as embeds. Backlinks showed up to essentially double the number of user-made connections in an instant. There are, however, depressingly few noteworthy exceptions – start-ups seeing the same needs and opportunities I’m trying to hint at in this article, which I’ll mention later.
Now that we’ve arguably covered the best PKM tools available at the moment, let me attempt to provide a glimpse into what lies beyond. The opportunity lies in the fact that remixing your own thought patterns only gets you so far. Regardless of how you move around your knowledge graph, the entire structure is still based on connections you’ve defined yourself. The way you relate different concepts is baked into the very links you write by hand. You choose how many links, what words they start from, and to what others they lead. If we’re to extend our thought patterns further beyond native human cognition, to tap into a broader range of ways of thinking about the things that are meaningful to us, I’d argue that we need to challenge the biases we introduce through explicit links. If we want to both keep the thoughts human and access new ways of thinking at the same time, we might have to inject a bit more artificiality in the way we relate them.
Grandfather, Omeir thinks, already I have seen things I did not know how to dream.
If you’ve spent even a modest amount of time on my blog, the concrete approach I’m about to suggest for achieving that should be anything but unpredictable. ML models trained to predict and reconstruct texts, as an operationalization of the task of understanding, have learned how different concepts relate to each other, seemingly as an instrumental goal in achieving their objective. Based on thousands of books, millions of web pages, and an uncomfortable number of Reddit comment threads, this family of ML models internalized rich representations of how ideas are related, including how and how much they are related. We can milk (i.e. distill) this knowledge out of giant models like GPT-3 in different ways, a popular one being semantic embeddings. As I explained here and here, semantic embeddings are numerical coordinates which ML models assign to documents (e.g. notes) in a space of meanings, such that related documents end up close to each other.
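To make this concrete, here is a toy sketch of embedding-based retrieval. The four-dimensional vectors stand in for the coordinates a real encoder would assign; both the note texts and the numbers are invented for illustration.

```python
import numpy as np

# Toy 4-dimensional "semantic embeddings" standing in for the vectors a real
# encoder would assign to each note. Texts and coordinates are made up.
notes = {
    "value iteration converges to the optimal policy": np.array([0.9, 0.1, 0.0, 0.2]),
    "origami cranes fold flat":                        np.array([0.0, 0.8, 0.5, 0.1]),
    "dynamic programming reuses subproblem solutions": np.array([0.8, 0.2, 0.1, 0.3]),
}

def cosine(a, b):
    # Cosine similarity: 1.0 for identical directions, ~0 for unrelated ones.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def related(query_vec, top_k=2):
    # Rank notes by proximity to the query in the embedding space.
    ranked = sorted(notes.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [text for text, _ in ranked[:top_k]]

# A query embedding that sits near the two "planning" notes.
query = np.array([0.85, 0.15, 0.05, 0.25])
print(related(query))
```

No note needs to share words with the query; only proximity in the space of meanings matters, which is exactly what lets unexpected neighbors surface.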
A valid argument against simply surfacing notes which are related to the current one in such a way is the lack of information encoded in the relation, which just stands for “sort of related, somehow.” It’s true, we haven’t yet come up with effective ways of distilling the high-dimensional spatial layouts encoded in semantic embeddings into a cognitively ergonomic form for humans to digest easily, but many researchers in explainable and interpretable AI are working full-time on that.
One way of coping with this challenging lack of humanness in the representation is to grant the person the ability to at least select the sequence of words which the “link” originates from, and then look for notes which are related to the user selection given the artificial model of the world encoded in the ML model. If I were looking at a thought about highlights being an easy way of persisting knowledge, selecting “persisting knowledge” might surface ideas about memory, while selecting “highlights” might surface ideas about salience. Alternatively, you could aim to detect a finite set of discrete relation types like “supported by,” “exemplified by,” or “challenged by” and surface notes accordingly. This could be useful in specific contexts, but again we’d be throwing familiar perspectives at the problem out of discomfort with non-humanness. Though it’s not really non-humanness, because the models have been trained on human data. Rather, we’re met with the challenge of systematically integrating the perspectives of millions of people into our own. It’s too much humanness in a sense, so much that it’s overwhelmingly non-trivial to handle. We’d have to find clever ways of injecting thousands of years of human experience into strategic moments, in strategic ways. Contrary to what I previously thought, having a noisy model of the user’s worldview is in many ways not a bug, but a feature, due to the value of those divergences.
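The selection-as-link-origin idea can be sketched under heavy simplification: instead of a real ML encoder, a hypothetical keyword-based `embed` stub maps text onto three invented “meaning” axes, so that different selections from the same thought surface different notes. Everything here, from the vocabulary to the note texts, is fabricated for the sketch.

```python
import numpy as np

# A tiny stand-in embedder: maps text onto 3 invented "meaning" axes
# (memory, salience, persistence) by keyword matching. A real system would
# call an ML encoder instead; this stub only exists to show the mechanism.
AXES = {"memory": 0, "remember": 0, "salient": 1, "highlight": 1,
        "knowledge": 2, "persist": 2}

def embed(text):
    vec = np.zeros(3)
    for word in text.lower().split():
        for key, axis in AXES.items():
            if key in word:
                vec[axis] += 1.0
    return vec

notes = [
    "spaced repetition strengthens memory traces",
    "salient passages draw the eye first",
    "persisting knowledge outside the head frees attention",
]

def link_from_selection(selection, top_k=1):
    # The "link" originates from whatever fragment the user selects,
    # not from the whole note the fragment lives in.
    q = embed(selection)
    scored = sorted(notes, key=lambda n: float(np.dot(embed(n), q)), reverse=True)
    return scored[:top_k]

print(link_from_selection("persisting knowledge"))
print(link_from_selection("highlights"))
```

Selecting “persisting knowledge” and selecting “highlights” pull the same current thought toward different neighborhoods, which is the whole point of letting the user choose the link’s origin.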
Toy example depicting divergences between user and machine in relating concepts to each other. When I saw how the modular Origami ensemble on the left turned out (a), it reminded me of an illustration of how ideas can be seen as composable Lego blocks (b), which I saved in my conceptarium while reading an article on the topic. I thought it would be fun to try to retrieve that picture using a photo of the Origami as a query, checking whether the conceptarium also “saw” the connection. It turned out that the conceptarium saw a different connection as stronger, one to a visualization of a “low-poly” structure encompassing a handful of clusters which I saved while playing around with an interactive demo (c). Completing the visual analogy then leads to an embodied understanding of high-dimensional clusters as physical containers. My initial curiosity also stands as a testament to how young our expectations of human-machine interaction still are.
Another argument against using ML models as an artificial idea-relating machine is that they’re pretty static. One model learns one representation of the world. An incredibly rich and comprehensive one, but still singular. Expose it to different data, and you get a different singular representation. There’s not much randomness can do to help here, but maybe a static representation which incorporates millions of worldviews isn’t too bad, if we can remix them effectively. Though this potential challenge is pretty far off – a rich digital representation of the world which you can instantly converse with is still better than none, even if worse than having multiple of them.
The conceptarium integrates this artificial way of relating thoughts today. You can find both “generally related” thoughts, and also dynamically link by selecting an arbitrary fragment of the current one and using it as a query. Mind you, you can even find images by selecting text phrases or vice versa, and also find related notes which you’re especially unlikely to think of yourself, based on an Anki-like measure of activation nerfing search results. Given that it’s still highly experimental, it’s full of bugs and quirks, and you’d be slightly insane to use it as your main PKM in its current form. But some day, I think the family of tools which allow us to move beyond remixing our own thought patterns will mature and prove both enabling and reliable at the same time – tools like mem.ai, MyMind, DEVONthink, and possibly Napkin. If done right, I think this step to open-ended search of thoughts might rival the value of moving from location-based to content-based addressing through online search engines.
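I won’t claim this is the conceptarium’s actual formula, but combining semantic similarity with an Anki-like activation penalty might look roughly like the following hypothetical sketch, where the forgetting curve, half-life, and weighting are all invented:

```python
import math

# Hypothetical scoring: semantic relevance minus an Anki-like activation
# penalty, so that related thoughts you're *unlikely* to recall yourself
# float to the top. All constants here are made up for the sketch.
def activation(days_since_access, accesses, half_life=30.0):
    # Exponential forgetting curve, boosted by repeated access.
    return accesses * math.exp(-math.log(2) * days_since_access / half_life)

def surprise_score(similarity, days_since_access, accesses, penalty=0.5):
    # High similarity and low activation yield a high score.
    return similarity - penalty * activation(days_since_access, accesses)

# (title, similarity to query, days since last access, number of accesses)
notes = [
    ("note reviewed yesterday",   0.90,   1, 5),
    ("note untouched for a year", 0.80, 365, 2),
]
ranked = sorted(notes, key=lambda n: surprise_score(n[1], n[2], n[3]), reverse=True)
print([n[0] for n in ranked])
```

Under this scoring, the slightly less similar but long-forgotten note outranks the one you reviewed yesterday, which matches the goal of surfacing thoughts you wouldn’t have reached on your own.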
It could have been a decent ending right there, but I just want to mention that being able to find exactly what you’re looking for can also be extremely valuable in a few situations, especially bibliographical referencing. Luhmann wouldn’t have been so prolific without a reliable system for finding exactly which academic papers led to what networks of ideas, so that he could mention them in his published work for others to refer to. I think this is a legitimate concern and a good reason to use explicit links if this sort of information is important in your work. But just to keep the book-keeping minimal as I slowly grow from an undergraduate student into an actual researcher, I have in mind a system which connects my ideas to external works by means of timestamps. “You happened to save this thought while reading this paper, and 20 minutes after you read this other one. Also, there’s this other book you’ve been reading when saving this related thought, maybe it’s relevant here, too.” A lot of work remains to be done, but I’m glad I have a rough sense of what I want from my tools, which feels like half the process.
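The timestamp idea could be sketched as follows; the reading sessions, titles, and 30-minute slack window are all fabricated for illustration.

```python
from datetime import datetime, timedelta

# A sketch of timestamp-based source linking: instead of explicit citations,
# pair each saved thought with whatever source was open around the time it
# was saved. Sessions and times below are fabricated examples.
sessions = [
    ("Bush (1945), As We May Think",
     datetime(2022, 3, 1, 10, 0), datetime(2022, 3, 1, 11, 0)),
    ("Luhmann (1981), Communicating with Slip Boxes",
     datetime(2022, 3, 1, 14, 0), datetime(2022, 3, 1, 15, 30)),
]

def likely_sources(thought_time, slack=timedelta(minutes=30)):
    # A source is a candidate if the thought was saved during the reading
    # session, or within `slack` after the session ended.
    return [title for title, start, end in sessions
            if start <= thought_time <= end + slack]

# A thought saved 20 minutes after the second session ended.
print(likely_sources(datetime(2022, 3, 1, 15, 50)))
```

The trade-off is deliberate: the mapping is probabilistic rather than exact, which keeps the book-keeping near zero at the cost of occasionally suggesting the wrong source.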
A final awe-inspiring quote to help soak the ideas behind this article in sublimity:
Not even the most heavily-armed police state can exert brute force on all of its citizens all of the time. Meme management is so much subtler; the rose-tinted refraction of perceived reality, the contagious fear of threatening alternatives. There have always been those tasked with the rotation of informational topologies, but throughout most of history they had little to do with increasing its clarity.

The new Millennium changed all that. We've surpassed ourselves now, we're exploring terrain beyond the limits of merely human understanding. Sometimes its contours, even in conventional space, are just too intricate for our brains to track; other times its very axes extend into dimensions inconceivable to minds built to fuck and fight on some prehistoric grassland. So many things constrain us, from so many directions. The most altruistic and sustainable philosophies fail before the brute brain-stem imperative of self-interest. Subtle and elegant equations predict the behavior of the quantum world, but none can explain it. After four thousand years we can't even prove that reality exists beyond the mind of the first-person dreamer. We have such need of intellects greater than our own.

But we're not very good at building them. The forced matings of minds and electrons succeed and fail with equal spectacle. Our hybrids become as brilliant as savants, and as autistic. We graft people to prosthetics, make their overloaded motor strips juggle meat and machinery, and shake our heads when their fingers twitch and their tongues stutter. Computers bootstrap their own offspring, grow so wise and incomprehensible that their communiqués assume the hallmarks of dementia: unfocused and irrelevant to the barely-intelligent creatures left behind.

And when your surpassing creations find the answers you asked for, you can't understand their analysis and you can't verify their answers. You have to take their word on faith —

Or you use information theory to flatten it for you, to squash the tesseract into two dimensions and the Klein bottle into three, to simplify reality and pray to whatever Gods survived the millennium that your honorable twisting of the truth hasn't ruptured any of its load-bearing pylons. You hire people like me; the crossbred progeny of profilers and proof assistants and information theorists.

In formal settings you'd call me Synthesist.