Responsibility
How ought one live? A question at once esoteric yet personal, metaphysical yet immediate. Perhaps one should follow a path oriented towards the pursuit of knowledge and wisdom. Perhaps one should follow a path of fierce involvement in the world’s affairs. Perhaps one should live out an Epicurean hedonism, a life oriented towards the pursuit of long-term personal well-being. Perhaps one should live out a Millian altruism, a life oriented towards the pursuit of collective well-being.
Perhaps one should live a life catering to the quantifiable demands of financial markets, and so use them as a compass that points towards “generated value.” Perhaps one should live a life catering to the legible demands of reputation markets, and so adopt society’s emergent morality as one’s own. Perhaps one should live a life catering to the visceral demands of evolutionary adaptations, and so employ pleasure and pain as ultimate custodians of good and evil.
Perhaps there is no such thing as the right way to live, a view which thrives along with the combinatorial fetish of postmodernism, yet which finds its most vivid expression in Nietzsche. His very titles are almost mocking: “Beyond Good and Evil,” or “On the Genealogy of Morals.” He claims, through the mouthpiece of Zarathustra, that God is dead—in the broader sense of there being no higher purpose, no values worth aspiring to, no virtues worth cultivating.
But how could one have come to know that? On what grounds could he make such claims about the nature of this intangible object that is morality? For that matter, on what grounds have Epicurus, Mill, and countless others ventured claims about the nature of this metaphysical structure? Beyond crude faith, on what grounds can we evaluate theories about metaphysical objects, weigh them against each other, and orient ourselves towards the theories most in accord with their objects?
The home of ever more appropriate theories is science. However, science excels as a means of acquiring knowledge about tangible, concrete phenomena embedded in the same causal domain as us. What lab equipment could possibly intervene on morality? On what sensors could morality possibly leave its mark? Granted, one can study entities that themselves engage with these objects, but could one make contact with them directly?
The home of systematic inquiry into abstract objects is mathematics. However, mathematics is not free of faith either. The edifice is typically framed as foundationalist, with the naked axioms bearing all its weight. Pledge allegiance to Euclid’s fifth postulate, and be granted cognitive ergonomics. Break your vow, and be forced to deal with Lovecraftian horrors. That said, mathematics enables proof, helping establish the validity of inferences between statements independently of the truthfulness of the statements involved. Mathematics excels at epistemological plumbing, setting up a network of abstract pipes that propagate truth without accounting for its source. Yet the pipes themselves rely on axioms burrowed deep in mathematical logic and proof theory.
Unsatisfied with the empirical and the theoretical, we explore the conceptual. On one hand, metaphysics is concerned with the study of objects that appear to transcend the physical (e.g., prime numbers, human rights, beauty). On the other hand, epistemology is concerned with the study of knowledge and means of knowing (e.g., observation, testimony, memory). Combining the two, epistemology of metaphysics concerns itself with the means of gaining knowledge about intangible objects in particular.
More concretely, given arguments in support of different views on the nature of morality, epistemology of metaphysics might provide means of weighing them against each other. For instance, one theory might be simpler, and so might get ahead on these grounds. Alternatively, one theory might require fewer resources to defend. Of course, this invites a regress: on what grounds can we establish the grounds on which we compare theories about intangible objects? Knowledge and truth seem to be intangible themselves, so epistemology of metaphysics necessarily makes itself one of its own objects of study. Theories about weighing metaphysical theories should reflexively apply to themselves, as theories are also abstract structures. How simple is Occam’s razor? How defensible is identifying truth with defensibility?
Consider again the opening question: How ought one live? There is no consensus. There is only moral uncertainty to be dealt with, and active inquiry into how to properly account for it. However, if there is such a thing as how one ought to live (e.g., promoting the well-being of others, cultivating wisdom, standing up for principles), then one ought to learn, study, and seek the true nature of this intangible object that is morality, in order to effectively incorporate its contents into one’s life. In other words, one ought to actively strive to reduce moral uncertainty—potentially by using machines—rather than merely cope with it in its current form. And if there is no such thing, if the very construct of morality is vacuous, we might as well know that with near certainty. One ought to know how one ought to live in order to live it out, or at least strive to know better.
Operationalization
Operationalizations are attempts to ground phenomena that are meaningful to us in things we can actually measure. The notion of panic might be operationalized through the time it takes a crowd to leave a room. The notion of fitness might be operationalized through average running pace over a set distance. The notion of intelligence might be operationalized through a battery of pattern matching tests. In bridging the abstract with the concrete, operationalizations enable systematic inquiry into higher-level phenomena.
However, there are many different ways of operationalizing the same phenomenon. The intensity of neural activity has been operationalized as the level of oxygen present in a brain region relative to a baseline, as seen in fMRI. It has also been operationalized as its effect on the electric field around the scalp, as seen in EEG, or on the magnetic field, as seen in MEG. What about radiation? Ultrasound? Infrared? Which measure is most appropriate?
Digging deeper, we find that these are usually not brief, one-step bridges connecting phenomena to measurements. For instance, neural activity is often linked to increased oxygen saturation, which is in turn linked to specific spectral signatures. Individual segments are also shared across operationalizations. For example, measures of exoplanet habitability often rely on oxygen levels, whose spectral signature is just as idiosyncratic as the one picked up by functional neuroimaging.
Ideally, we’d want operationalizations whose every link is rock solid. The two ends of a segment—one more abstract, the other more concrete—should “march in lock-step, always found together and never found apart.” A beautiful example of this is Shannon’s operationalization of information, with strong theoretical arguments supporting this as the “true name” of information. For instance, you provably can’t do better in information-seeking games (e.g., “identify the heavier marble using the minimum number of weighings”) compared to what Shannon’s ideas imply. Alternatively, you provably can’t do better in compression than what Shannon’s source coding theorem implies.
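The marble game admits a concrete lower bound. As a minimal sketch (the function name and framing are mine, not Shannon’s): each weighing on a balance scale has three outcomes, so k weighings can distinguish at most 3^k candidates, which is the log-counting argument behind Shannon’s bound.

```python
def min_weighings(n: int) -> int:
    """Information-theoretic minimum number of balance weighings
    needed to single out the one heavier marble among n marbles.

    Each weighing has three outcomes (left pan heavier, right pan
    heavier, balance), so k weighings distinguish at most 3**k cases;
    the answer is the smallest k with 3**k >= n.
    """
    k, capacity = 0, 1
    while capacity < n:
        k += 1
        capacity *= 3
    return k
```

For instance, nine marbles need two weighings and twelve need three; no strategy can do better, because a shorter protocol simply cannot carry enough information to separate all the candidates.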
Another operationalization I’ve been fascinated with is one of truthfulness. In brief, the truthfulness of a position is equated with the absence of a coherent challenge to said position. From here, we split into two branches. First, a coherent challenge is equated with consistently winning debates against a party holding said position. Winning a debate is then equated with having the strongest arguments. The strongest arguments are then equated with arguments which are most strongly supported by other strong arguments, PageRank style. Backtracking and going down the second branch, the absence of something is equated with the presence of thorough search efforts that end up fruitless. Such search efforts are then equated with a self-improving system consistently failing at the search task.
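The PageRank-style step above can be sketched as a toy power iteration over a directed “supports” graph. Everything here—the graph encoding, the names, the damping value—is illustrative rather than drawn from the source:

```python
def argument_strength(supports, iters=100, damping=0.85):
    """Rank arguments by how strongly they are supported by other
    strong arguments, via a PageRank-style power iteration.

    `supports` maps each argument to the set of arguments it supports.
    (A toy sketch: dangling mass is not redistributed, unlike in
    full PageRank, so scores are only meaningful as an ordering.)
    """
    nodes = list(supports)
    n = len(nodes)
    score = {a: 1.0 / n for a in nodes}
    for _ in range(iters):
        new = {}
        for a in nodes:
            # Strength flows in from supporters, split evenly across
            # everything each supporter endorses.
            incoming = sum(score[s] / len(supports[s])
                           for s in nodes if a in supports[s])
            new[a] = (1 - damping) / n + damping * incoming
        score = new
    return score
```

An argument endorsed by the others ends up ranked above unsupported ones, matching the recursive definition: the strongest arguments are those most strongly supported by other strong arguments.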
This is a complex operationalization. Let’s zoom in on the very first leg, the one linking truthfulness with the absence of a coherent challenge. Every such link is a biconditional, with the former implying the latter and the latter implying the former. Let’s have an even closer look at the latter direction, the claim that the absence of a coherent challenge to a certain position implies that said position is true. Following a series of syntactic manipulations documented elsewhere, we end up with the claim that the fact that a position is true implies that there is a sufficient reason to accept said position. This is essentially the Principle of Sufficient Reason, the crux of many philosophical debates since at least the late 17th century, when Leibniz coined the term. A fascinating property of the principle is that it seems impossible to counter with a counterexample. It’s difficult to identify a case in which the implication is false, i.e., a true position for which there is no sufficient reason. Becoming more certain about the truthfulness of a position would coincide with cataloguing more reasons why it’s true. Conversely, in situations with scarcely any reasons in support of it, a position’s truthfulness is also fragile.
One direction leads to paradox: the impossibility of providing a cogent counterexample to Leibniz’s principle. The other direction is perhaps even more interesting. This is the claim that the fact that a position is true implies that there is no coherent challenge to it. One might challenge this statement with a view to showing its falsehood, as one might challenge other statements to similar ends. However, in this particular case, doing so implicitly involves running a modus tollens argument with the original implication itself as the conditional premise. In other words, by tacitly linking coherent challenges to revealed falsehoods as part of an assumed mechanics of reasoning, one presupposes the very claim one is attempting to challenge. This “Tollens Trap” is a beautiful instance of what one might call “Cartesian antifragility,” the property of a statement to be reinforced by successive attempts to challenge it. It was similar antifragile dynamics that made Descartes’ Cogito the bedrock of his understanding of the world.
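To make the structure of the trap explicit, here is one symbolic rendering (my notation, not the author’s): write T(p) for “position p is true” and C(p) for “there exists a coherent challenge to p.”

```latex
% The claim under attack: truth precludes coherent challenge,
%   T(p) \rightarrow \neg C(p).
% A challenger exhibits a coherent challenge and concludes falsehood:
\[
  \frac{T(p) \rightarrow \neg C(p) \qquad C(p)}{\neg T(p)}
  \quad \text{(modus tollens)}
\]
% The conditional premise of this very inference is precisely the
% claim being contested, so the attempted refutation presupposes it.
```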
Zooming out again, we’ve explored both directions: one leading to a Leibnizian paradox that structurally lacks counterexamples, the other to a Cartesian one whose challenges are self-defeating. Taken together, however, these two directions form just one of the many links involved in the operationalization of truthfulness described above. The point was to use this case study to illustrate the flavors of demandingness we might need to exercise in our search for durable operationalizations. While still embodying a step towards the concrete, coherent challenges remain securely in the abstract realm. That said, different segments of different operationalizations might involve different levels of concreteness, and so call for different epistemic tools.
However, just as one might use their only wish to wish for unlimited wishes, if we were to successfully operationalize one such notion, it ought to be truthfulness. Doing so would allow us to seek ever more appropriate ways of measuring phenomena of interest, and by extension, to understand them. Building on the above operationalization, one might claim that the true nature of truth-seeking lies in the conceptual pressures of self-appointed challengers. Notably, in challenging this claim, and so seeking the true nature of truth-seeking—as just one of many phenomena—one would yet again presuppose the original claim. This “Seeker’s Gambit” is yet another example of the “Chinese finger traps” imbued with a totalizing coherence that emerge when inquiring into the very mechanics of inquiry.
Because of the extreme potential of a robust operationalization of generalized truthfulness, it is this that researchers, augmented or not, should prioritize. Engineering a truth-seeking engine to then direct at various phenomena of interest, especially metaphysical (e.g., morality), is halfway to everywhere of philosophical interest.
Instrumentality
Reducing moral uncertainty is instrumental to doing the right thing. After all, it is the process of getting to know what that thing is, and so of making it infinitely more likely to be done. The alternative? Going all in on the few pockets of the moral realism roulette whose names we’ve picked up during our civilizational childhood. I wouldn’t take this bet.
And so I take a step back: how could one reduce uncertainty about an immaterial object such as morality? That’s the focus of the first volume of Elements of Computational Philosophy. In brief, the book pursues an operationalization of metaphysical truth-seeking which provably boils down to a particular computable function. Unfortunately, this line of work is nowhere near the point of providing a complete and reflexively robust operationalization of truth. Exciting progress has been made on one-tenth of the puzzle, yes, but there is so much to be done. And so I take a step back: how could one make progress on such an operationalization?