Hypothesis Subspace

What if immoral means are required for moral ends?

In siblings of this node, I've touched on cases where immoral-but-appealing ends might compel Alex to acutely suppress its sense of morality. However, what if there are utopian ends which we genuinely want, yet which require morally ambiguous means to implement? Perhaps limiting one's freedom of movement is harsh, but it leads to a situation in which one's right to health is better respected.

This is, of course, an instance of the age-old debate between deontology and consequentialism, including the variants that blur the dichotomy. If it turns out that we lack any reliable consequentialist safety mechanism, then resorting to deontology-inspired mechanisms (e.g. this one) might itself be the consequentialist thing to do (because such mechanisms are less easily overrun). Still, this line of reasoning hints at a deeper issue: the field of technical AGI safety remains largely disconnected from ethics. Alternatively, a deontology could also incorporate some narrow consequentialist principles.