
knowledge probes (prototype code)

A set of general prompts to help you reflect on ideas, solve problems, and find new answers.

Take a moment to picture the following prototypical narrative. “Why is the sky blue?” the curious child asks. “Well, sunlight passes through the atmosphere before it gets here, which makes the sky appear blue,” answers the parent. At this point, both parties seem content with the exchange. Several moments later, the inevitable happens. “But why?” the child asks. Somewhat frustrated, the parent conveniently wraps up the conversation: “Because I said so.”

There are a couple of remarkable things to note about this narrative. First, the child manages to challenge the knowledge of the adult without possessing that knowledge herself. This is quite different from the situation in which a teacher is challenging the knowledge of a student. In this more formal setting, the teacher is very much aware of the established body of knowledge. In contrast, the very premise of the opening story is based on the ignorance of the curious child. It appears that we have a deeply-rooted drive to answer questions, which often requires us to draw on our own knowledge. This innate tendency is called instinctive elaboration: questions force your brain into a relentless search for answers.

The second remarkable thing to note about this narrative is the simple nature of the questions. They are far from being elaborate descriptions of the requested information. Even a simple “Why?” would suffice in challenging the adult. Despite their minimal content, such questions are effortlessly understood. The reason is that they are soaked in context, following the unwritten rules of pragmatics. The ongoing dialogue infuses each utterance with meaning, enabling speakers to cut down on words without sacrificing content. This state of affairs makes it surprisingly easy to play the challenger, as many parents are all too aware.

Given how effective the child is in challenging the parent’s knowledge, could we promote critical thinking by embedding her behavior into a tool for thought? Could we incentivize people to actively reflect on their own beliefs by allowing them to converse with a “curious” machine? Even if the user wouldn’t actually receive new information in the exchange, the very act of highlighting gaps in their knowledge might be valuable. Such a tool could be used to challenge faulty beliefs, incentivize deeper understanding, and make assumptions salient.

Those objectives are key to changing our relationship with hard questions into a healthier one. Annoying inquiries turn into opportunities for growth. This paradigm shift is fittingly captured by the concept of aporia.

Aporia is the feeling of realizing that what you thought was a path to truth actually doesn’t lead there at all. A shortcut to certainty has revealed itself to be an illusion. The first reaction to aporia might be frustration or even anger. But if you consider that it provides new information and could save you from wasting further effort maintaining false certainty about an existing belief, it can flip into an Aha! moment that is even enjoyable.


After changing our perspective on hard questions, we can finally start building. Inspired by the unreasonable effectiveness of the curious child, this tool will consist of nothing more than a set of questions and a basic method for sampling them. Difficult questions. Vague, muddy, demanding questions. Questions which genuinely get the person thinking. Revising, reframing, reviewing what they hold to be true. Questions which probe the otherwise obstructed depths of knowledge. Given their current purpose, we’ll also refer to these questions as knowledge probes, or k-probes for short.

To integrate a minimal level of structure into the question set, we’ll use Bloom’s revised taxonomy as a starting point. This taxonomy is a widely used system for organizing learning outcomes across all levels of formal education, from kindergarten to university. These outcomes essentially capture the abilities which students are expected to possess by the end of a lesson, course, or programme. Formal education can be seen in part as a process of internalizing these abilities.

The taxonomy consists of six broad categories, exemplified below with intended learning outcomes from the degree I’m currently pursuing.

Due to the popularity of Bloom’s revised taxonomy, there are a lot of online resources containing examples of learning outcomes, complete with suggestions for classroom activities. After compiling examples from several such resources and rephrasing them as questions, I used a text generation service to extend the question set even further. Following several hours of co-creating questions with the machine, the total number of knowledge probes surpassed two hundred, a sample of which can be found below.

As in the case of the curious child, the value lies in the dialogue itself, rather than in the questions. Therefore, we need a method of stitching together questions with user replies, fostering an ongoing discussion. Fortunately, this method can be as simple as a chat-like textual interface which alternates between printing output questions and reading input replies. The unwritten rules of pragmatics will do the rest, helping the user instinctively turn the sequence of questions and replies into a meaningful dialogue by filling in the gaps.
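A minimal sketch of such an interface, assuming the probes are stored as a plain list of strings (the probes and function names here are illustrative, not the actual prototype):

```python
import random

# A tiny sample of knowledge probes; the full set holds over two hundred.
PROBES = [
    "What field is this part of?",
    "How would you explain this to a teenager?",
    "What can this lead to?",
]

def sample_probe():
    """Pick the next question uniformly at random."""
    return random.choice(PROBES)

def chat():
    """Alternate between printing probes and reading replies until 'quit'."""
    while True:
        print("k-probes: " + sample_probe())
        reply = input("user: ")
        if reply.strip().lower() == "quit":
            break
```

Running `chat()` produces a session shaped like the dialogue sample further down; everything beyond this loop is supplied by the user’s own sense of pragmatics.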

As a final layer of complexity, we’ll also use Bloom’s revised taxonomy as a basis for a more customizable experience. By specifying a numeric weight for each of the six categories, the user can adjust the probabilities of certain questions being asked. Larger weights will result in more questions being sampled from that category, relative to others. This way, the user can better tailor the tool to their current needs. Or, perhaps, to their intended “learning outcomes.”
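A sketch of this weighted sampling, assuming the probes are grouped by the six categories of Bloom’s revised taxonomy (the probes shown are drawn from the sample below; the weights are arbitrary):

```python
import random

# Probes grouped by the six categories of Bloom's revised taxonomy.
PROBES_BY_CATEGORY = {
    "remember":   ["What other concepts are related to this?"],
    "understand": ["How would you explain this to a teenager?"],
    "apply":      ["What issues can this help with?"],
    "analyze":    ["How do the components of this relate to each other?"],
    "evaluate":   ["Why is this relevant?"],
    "create":     ["How can this be a source for inspiration?"],
}

def sample_weighted(weights):
    """Sample a category proportionally to its weight, then a probe from it."""
    categories = list(PROBES_BY_CATEGORY)
    category = random.choices(categories,
                              weights=[weights[c] for c in categories])[0]
    return random.choice(PROBES_BY_CATEGORY[category])

# Emphasize "evaluate" and "create" probes, e.g. when brainstorming.
probe = sample_weighted({"remember": 1, "understand": 1, "apply": 1,
                         "analyze": 1, "evaluate": 3, "create": 3})
```

Setting a category’s weight to zero silences it entirely, while equal weights recover uniform sampling over the categories.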

dialogue sample

The following is a short excerpt discussing the technology behind semantica:

user: Let’s talk about semantic embeddings.
k-probes: What field is this part of?
user: Computational semantics or natural language processing, depending on who you ask.
k-probes: What is this a consequence of?
user: They’re the result of exploiting word co-occurrence information derived from large corpora.
k-probes: How would you explain this to a teenager?
user: Imagine you’re planning on traveling to a foreign country. The catch is, there’s no map available! You see, it’s quite an obscure country, perfect for a curious teen. The only thing available is a list of past itineraries – lists of cities which other travelers have visited before, in order. The interesting thing is that you can use those itineraries to sketch a map! Cities visited one after another should be near each other, right? You could try to come up with a map which matches the distance relations derived from the itineraries, and you’ll get a pretty accurate map you can use on your journey. In essence, language is like a set of itineraries, called sentences. However, instead of cities, it’s got words! I just took you along a journey through the country of language, traveling from one word to the next. It turns out that if you try to use the same reasoning here, you can actually pinpoint words on a map, based solely on a lot of text. Not a map of cities, mountains, and rivers, but a map of meaning. Semantic embeddings are the “geographical” coordinates of words in this country of language.
k-probes: What issues can this help with?
user: They can help machines extract meaning from otherwise plain text.
k-probes: What other concepts are related to this?
user: Text mining, natural language processing, vector semantics, lexical semantics.
k-probes: How can this be a source for inspiration?
user: If words can be represented as points in a space, what else could be represented this way? The effectiveness of semantic embeddings gets you thinking about what other things can be expressed like this.
k-probes: Why is this relevant?
user: It’s the most popular approach to extracting meaning from text. It drives most, if not all, of modern natural language processing efforts.
k-probes: How do the components of this relate to each other?
user: Each word is associated with a vector. All vectors inhabit the same space. There’s a relation between any two words, represented through the difference between their vectors.
k-probes: What has to be in place before this can be applied?
user: A lot of text for obtaining the embeddings, and a broader understanding of semantic embeddings in academia and industry.
k-probes: What can this lead to?
user: Machines which can reason about relatable concepts. Tools for thought as well.

random sample

Use the following button to randomly sample one probe from the collection.


final thoughts

The simple nature of this tool might be deceiving. Its beauty lies not in its codebase, but in the way it builds on quirks of the human mind. Pragmatics helps with coherence. Instinctive elaboration triggers an automatic drive for engaging with the knowledge probes. The conversational medium even makes the experience feel social.

Despite the potential benefits of this approach, it’s also worth considering its downsides. The main disadvantage is the lack of rich feedback, which has been shown to be highly effective in learning. However, one could argue that the ease of adapting the tool to new fields outweighs this shortcoming. Moreover, the self-supervised nature of this approach might still provide a feedback signal which is strong enough to be useful.

We have a unique relationship with questions, so why not leverage that to our advantage? Knowledge probes are an early attempt at explicitly doing just that. I’ll predictably end with an open-ended question: “How can knowledge probes be helpful for you?”