The point of differentiable cosmogonies is to bring boxed aliens into being and learn from them. Their alienness to human technology and to the physics of our world would help ensure that breaking out of the box stays difficult.
However, a concern plagues this approach, beyond the likely insane amounts of compute necessary to run even a tiny universe. If the aliens are so alien to us, how could we communicate with them outside the realm of mathematics? Conversely, if we tried to make them more similar to us so that we could better understand them, they would have more knowledge at their disposal for breaking out of the box (e.g. by persuading the observers to let them out). There is a fundamental trade-off here, and it seems difficult to get the best of both worlds. Completely sacrificing familiarity would restrict us to gaining insight into mathematics, which is still appealing, but defeats the purpose of running an entire physical universe rather than just boxing a narrow AI that is good at maths. This way of looking at it points to an iteration of [[differentiable-cosmogonies]] involving "learning from boxed aliens" in a general sense, rather than by simulating a tabula rasa universe. Alien here would mean unfamiliar with the human world.
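To make the core idea concrete, here is a minimal, purely illustrative sketch of what a differentiable cosmogony could mean mechanically, assuming the toy universe is a Lenia-style continuous cellular automaton written in JAX. Every name here (`step`, `run_universe`, `emergence_loss`, the kernel shape, the crude "interestingness" objective) is invented for illustration; nothing about it is a serious proposal for physics rich enough to host aliens. The point is only that when every tick is differentiable, the physics itself can be optimized by gradient descent toward some emergence objective:

```python
# Illustrative sketch only: a tiny differentiable toy universe whose
# physics (kernel, growth curve) receives gradients from a crude
# "emergence" objective. Not a proposal for alien-hosting physics.
import jax
import jax.numpy as jnp

SIZE, STEPS = 64, 32

def step(state, physics):
    """One tick of toy physics: convolve the field with a learned kernel,
    then apply a learned growth nonlinearity (Lenia-style update)."""
    kernel, mu, sigma = physics
    neighborhood = jax.scipy.signal.convolve2d(state, kernel, mode="same")
    growth = jnp.exp(-((neighborhood - mu) ** 2) / (2 * sigma ** 2)) * 2 - 1
    return jnp.clip(state + 0.1 * growth, 0.0, 1.0)

def run_universe(physics, init):
    def body(state, _):
        return step(state, physics), None
    final, _ = jax.lax.scan(body, init, None, length=STEPS)
    return final

def emergence_loss(physics, init):
    """Crude stand-in for 'interesting structure emerged': penalize both
    a dead (empty) and a saturated universe via deviation from half-filled
    mass, and reward spatial variance."""
    final = run_universe(physics, init)
    mass = final.mean()
    return (mass - 0.5) ** 2 - final.var()

key = jax.random.PRNGKey(0)
init = jax.random.uniform(key, (SIZE, SIZE))
physics = (jax.random.normal(key, (9, 9)) * 0.1, 0.35, 0.07)

# Because every tick is differentiable, the physics itself gets gradients:
grads = jax.grad(emergence_loss)(physics, init)
```

The compute worry above shows up immediately in this framing: gradients must flow through every tick of the universe's whole history, so memory and compute scale with the depth of the rollout.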
Could we keep them within legible range while still instilling an aversion to knowledge of humans and of the human world? Such an aversion could itself be detected, counterproductively drawing more attention to the conceptual repellers, especially if the aliens develop computer-like technology to tackle problems which are not cognitively ergonomic for them. Though if knowledge of the human world were prevented from being physically realizable in the simulation, that might still patch the issue. But what if issue-patching and concept realizability themselves develop as an unpatched conceptual territory in alien science? That might still point at the patch, just one meta level up.
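A hedged sketch of the realizability patch, under strong simplifying assumptions: treat the universe's state as a flat vector and posit, implausibly, that "knowledge of the human world" corresponds to a known linear subspace, which gets projected out after every tick so those configurations can never become physically real. `make_projector` and `patched_step` are hypothetical names, not part of any real scheme:

```python
# Hedged sketch: prevent "forbidden" configurations from ever being
# physically realizable by projecting them out of the state each tick.
# The forbidden subspace is a made-up stand-in; actual knowledge of the
# human world would not be a known linear subspace.
import jax.numpy as jnp

def make_projector(forbidden_basis):
    """Orthogonal projector onto the complement of span(forbidden_basis).

    forbidden_basis: (k, d) array whose rows span the forbidden directions.
    """
    q, _ = jnp.linalg.qr(forbidden_basis.T)  # (d, k), orthonormal columns
    return jnp.eye(forbidden_basis.shape[1]) - q @ q.T

def patched_step(state_vec, physics, projector, step_fn):
    """One tick of the underlying physics, followed by removal of any
    component along the forbidden directions, so those configurations
    are never realized inside the simulation."""
    return projector @ step_fn(state_vec, physics)
```

Even in this toy form, the worry from above survives: the patched dynamics visibly lose rank along the forbidden directions, which is exactly the kind of structural anomaly an alien science of issue-patching could pick up on.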