Ideological inference engines (IIEs) are at once a generalization and a merger of [[memetic-colonies]] and [[deontic-arrays]], connecting the two previous approaches into a shared framework while combining their strengths. In general, the framework relies on using LLMs to expand an initial knowledge base meant to capture human values, and then promoting courses of action that are compatible with the resulting expansion. Every ideological inference engine has the following ingredients:
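Before the ingredients themselves, a minimal sketch of the overall expand-then-check loop may help fix ideas. Everything below (the `query_llm` placeholder, the prompts, the yes/no compatibility check) is an assumption made for illustration, not a prescribed implementation.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for an actual LLM completion call (e.g. via an API client)."""
    raise NotImplementedError


def expand_kb(kb: list[str], steps: int = 3) -> list[str]:
    """Grow the initial value-laden knowledge base by asking the LLM for
    statements entailed by the current ones."""
    expanded = list(kb)
    for _ in range(steps):
        prompt = (
            "Given the following statements about human values:\n"
            + "\n".join(f"- {s}" for s in expanded)
            + "\nState one further statement they jointly imply."
        )
        expanded.append(query_llm(prompt).strip())
    return expanded


def is_compatible(action: str, kb: list[str]) -> bool:
    """Ask the LLM whether a candidate course of action is compatible
    with the (expanded) knowledge base."""
    prompt = (
        "Statements:\n"
        + "\n".join(f"- {s}" for s in kb)
        + "\nIs the following course of action compatible with them? "
        + f"Answer yes or no.\nAction: {action}"
    )
    return query_llm(prompt).strip().lower().startswith("yes")


def promote(candidate_actions: list[str], initial_kb: list[str]) -> list[str]:
    """Expand the KB once, then keep only the actions compatible with the expansion."""
    expanded = expand_kb(initial_kb)
    return [a for a in candidate_actions if is_compatible(a, expanded)]
```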
IIEs relate to GOFAI inference engines in several intuitive ways. For instance, both roughly rely on expanding a knowledge base using an inference engine and then using the expanded knowledge base to verify a new item. Additionally, they both run into similar issues. How to handle a combinatorial explosion of the knowledge base when relying on forward-chaining in complex domains? How to handle infinite loops when relying on backward-chaining (i.e. reasoning backwards from a goal)? Solutions identified in GOFAI might help alleviate the analogous problems in IIEs.
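As a toy illustration of the two failure modes and their standard GOFAI mitigations, here is a symbolic sketch (the rule format and the specific bounds are illustrative assumptions): forward-chaining is tamed with deduplication and a round limit, backward-chaining with a visited set that blocks cyclic proof attempts.

```python
Rule = tuple[frozenset[str], str]  # (premises, conclusion)


def forward_chain(facts: set[str], rules: list[Rule], max_rounds: int = 10) -> set[str]:
    """Expand the fact base toward a fixpoint, with deduplication and a
    round limit to keep the combinatorial growth in check."""
    known = set(facts)
    for _ in range(max_rounds):
        new = {
            conclusion
            for premises, conclusion in rules
            if premises <= known and conclusion not in known
        }
        if not new:
            break
        known |= new
    return known


def backward_chain(goal: str, facts: set[str], rules: list[Rule],
                   visited: frozenset[str] = frozenset()) -> bool:
    """Try to prove `goal` by reasoning backwards; the `visited` set blocks
    the infinite loops that cyclic rules would otherwise trigger."""
    if goal in facts:
        return True
    if goal in visited:
        return False
    return any(
        conclusion == goal
        and all(backward_chain(p, facts, rules, visited | {goal}) for p in premises)
        for premises, conclusion in rules
    )


# Example with a hypothetical value-laden rule:
# forward_chain({"honesty is valued"},
#               [(frozenset({"honesty is valued"}), "deception should be avoided")])
# -> {"honesty is valued", "deception should be avoided"}
```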
However, there are important conceptual differences between the two. For one, the clean inference rules discussed in GOFAI settings (e.g. Modus Ponens, Modus Tollens, etc.) are only imperfectly handled by LLMs in the current context. Inevitably, the messiness of language as a representation medium (though a behavioral KB might also work), combined with the messiness of human values expressed in said medium, makes for a fuzzier and more opaque inference engine. The KB contains only true atoms, while the LLM handles all the implicit rules that relate them to new ones, in a somewhat awkward mix-up of what "model" and "knowledge base" mean across the two settings. Still, the similarities and structure provided by the framework seem compelling.
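The contrast can be sketched roughly as follows, again under illustrative assumptions (the explicit string-encoded rule and the `query_llm` placeholder from the earlier sketch are stand-ins): in the GOFAI case the rule is stored and applied explicitly, whereas in the IIE case the KB holds only atoms and the "rule" lives implicitly inside the LLM.

```python
def query_llm(prompt: str) -> str:
    """Placeholder for an LLM call, as in the earlier sketch."""
    raise NotImplementedError


def modus_ponens(kb: set[str], antecedent: str, consequent: str) -> bool:
    """GOFAI-style: the inference rule is explicit and applied transparently."""
    return antecedent in kb and f"if {antecedent} then {consequent}" in kb


def llm_entails(kb: set[str], statement: str) -> bool:
    """IIE-style: the KB holds only atoms; entailment is delegated to the LLM,
    whose implicit 'rules' are fuzzy and opaque."""
    prompt = (
        "Statements:\n"
        + "\n".join(f"- {s}" for s in sorted(kb))
        + f"\nDoes it follow that: {statement}? Answer yes or no."
    )
    return query_llm(prompt).strip().lower().startswith("yes")
```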