Hypothesis Subspace

What if moral absolutism is misguided?

Deploying a deontic array assumes a seed charter containing a finite set of normative principles written down by a group of people. In this, the approach embeds moral absolutism as a design choice: the stance that there is one specific normative framework which should ideally be projected onto the world, rather than multiple valid frameworks which conflict with each other. Assuming for the sake of argument that deontic arrays work on a technical level in steering Alex away from takeover, is this what we actually want to project onto the world?
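To make the design choice concrete, here is a minimal, purely illustrative sketch. The `Principle` and `SEED_CHARTER` names are assumptions for illustration, not part of any actual deontic array implementation; the point is only that the charter is a single, fixed, finite collection of principles, with no room for competing frameworks.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Principle:
    """A single normative principle in the seed charter."""
    identifier: str
    statement: str


# Hypothetical illustration: the seed charter is one finite, fixed tuple of
# principles authored by a particular group of people. Everything downstream
# is evaluated against this single framework, which is the moral-absolutist
# design choice being questioned here.
SEED_CHARTER = (
    Principle("P1", "Do not deceive human overseers."),
    Principle("P2", "Do not attempt to disable oversight mechanisms."),
)
```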

Perhaps future ethicists, after a few more millennia of moral progress, will reach a consensus that moral absolutism is fundamentally flawed. This possibility is concerning, because the very architecture of deontic arrays relies on largely locking down the seed normative framework, except for automated internal patching.

A lazy reaction to this could be that we should first worry about getting past the existential risks of the precipice before deliberating on deep questions of morality. We might have all the time in the world to worry about them later, and we have more salient things to do now.

However, it might be difficult, if not impossible, to meaningfully modify a post-takeoff AGI. Much of the appeal of deontic arrays relies on not having humans in the loop, in an attempt to reduce the attack surface. I suspect other proposals which involve deferring to a past human (e.g. Vanessa Kosoy's fascinating preDCA) run into similar issues. Perhaps we could have a nested/hierarchical charter with a set of global normative principles combined with varied local ones.
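One way to picture that last suggestion is the hypothetical sketch below. The `NestedCharter` class, the context keys, and the example principles are illustrative assumptions rather than part of any existing proposal: a small global core applies everywhere, while local principles vary by community or context.

```python
from dataclasses import dataclass
from typing import Dict, Tuple


@dataclass(frozen=True)
class NestedCharter:
    # A small core of global principles that apply everywhere.
    global_principles: Tuple[str, ...]
    # Varied local principles, keyed by the community or context they govern.
    local_principles: Dict[str, Tuple[str, ...]]

    def principles_for(self, context: str) -> Tuple[str, ...]:
        """Principles in force for a given context: the global core plus
        whatever local additions that context has adopted."""
        return self.global_principles + self.local_principles.get(context, ())


# Hypothetical example values, not drawn from any actual charter.
charter = NestedCharter(
    global_principles=("Do not disempower humanity as a whole.",),
    local_principles={
        "community_a": ("Prioritize ecological preservation.",),
        "community_b": ("Prioritize technological development.",),
    },
)
print(charter.principles_for("community_a"))
```

Such a structure would keep only the global core locked down, while leaving the local layers open to the kind of ongoing moral disagreement that moral absolutism assumes away.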
