Differentiable cosmogonies avoid the clash between human and AI intents over the shape of the world entirely by sequestering the AI in a simulation that contains as little information about the human world as possible, the idea being that an AI which knows nothing of our world cannot break out through newly-discovered loopholes. Unfortunately, this drastically reduces the usefulness of AI per unit of compute, because it requires a massive particle-resolution simulation run over an enormous number of timesteps. A few technical tricks might shave off a few OOMs of compute (e.g. constraining particles to local interactions), but even then the problem remains of translating the insights devised by replicators into human terms beyond Rorschach make-believe. Still, the appeal of the approach is that it aims to sidestep the challenge of intent alignment as a whole.
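
As a toy illustration of why a locality constraint buys compute: if particles only interact within a cutoff radius, a cell-list scheme (a standard trick from molecular dynamics, sketched below; all names and parameters are illustrative, not anything from an actual cosmogony implementation) replaces the O(N²) all-pairs interaction check with a check against adjacent cells only.

```python
import math
import random

def brute_force_pairs(pos, cutoff):
    """All-pairs interaction search: O(N^2) distance evaluations."""
    pairs, checks = set(), 0
    for i in range(len(pos)):
        for j in range(i + 1, len(pos)):
            checks += 1
            if math.dist(pos[i], pos[j]) < cutoff:
                pairs.add((i, j))
    return pairs, checks

def cell_list_pairs(pos, cutoff):
    """Local-interaction search: bin particles into square cells of side
    `cutoff`, so each particle only tests neighbors in its own and the
    eight adjacent cells."""
    cells = {}
    for i, (x, y) in enumerate(pos):
        cells.setdefault((int(x // cutoff), int(y // cutoff)), []).append(i)
    pairs, checks = set(), 0
    for (cx, cy), members in cells.items():
        for dx in (-1, 0, 1):
            for dy in (-1, 0, 1):
                for j in cells.get((cx + dx, cy + dy), []):
                    for i in members:
                        if i < j:  # count each pair once
                            checks += 1
                            if math.dist(pos[i], pos[j]) < cutoff:
                                pairs.add((i, j))
    return pairs, checks

rng = random.Random(0)
pos = [(rng.uniform(0, 10), rng.uniform(0, 10)) for _ in range(200)]
bf_pairs, bf_checks = brute_force_pairs(pos, 1.0)
cl_pairs, cl_checks = cell_list_pairs(pos, 1.0)
assert bf_pairs == cl_pairs  # same interacting pairs found either way
print(bf_checks, cl_checks)  # cell list needs far fewer distance checks
```

The savings compound per timestep, which matters when the number of timesteps is itself astronomical; but note this only trims constants and asymptotics within the simulation, and does nothing for the translation problem above.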