Hypothesis Subspace

What does the base case scenario for bridger languages sound like?

Once people got used to using a synthetic interlingua to gain insight into the internal representations of ML models, the previous approach of grappling with high-dimensional vectors directly would feel like going back to writing out mathematics in multi-paragraph prose proofs, rather than using compact mathematical notation. In other words, the current approach would feel vastly less expressive and brain-friendly than the interlingua. Moreover, engineering latent activations to be more cognitively ergonomic would feel like a natural complement to adapting humans to unwieldy ML models through augmentation: it would make the model's thoughts easier for humans to perceive.
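To make this less abstract, here is a minimal, purely illustrative sketch of one way such a translation layer might work: a VQ-style codebook that maps high-dimensional hidden states to discrete interlingua tokens. All names, dimensions, and the random codebook are hypothetical assumptions for illustration, not a description of any existing system; in practice the codebook would be learned rather than sampled.

```python
import numpy as np

rng = np.random.default_rng(0)

HIDDEN_DIM = 768   # dimensionality of the model's latent activations (assumed)
VOCAB_SIZE = 512   # size of the synthetic interlingua's "vocabulary" (assumed)

# Hypothetical codebook; a real one would be trained (e.g. VQ-style) so that
# its entries carve the latent space into cognitively ergonomic concepts.
codebook = rng.normal(size=(VOCAB_SIZE, HIDDEN_DIM))

def activations_to_interlingua(activations: np.ndarray) -> np.ndarray:
    """Map each hidden-state vector to its nearest codebook entry,
    yielding a sequence of discrete interlingua token ids."""
    # Squared Euclidean distance from every activation to every code.
    dists = ((activations[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return dists.argmin(axis=1)

# A stand-in "thought": ten timesteps of latent activations.
latents = rng.normal(size=(10, HIDDEN_DIM))
tokens = activations_to_interlingua(latents)
print(tokens)  # e.g. [203 417 ...] -- a compact, human-inspectable trace
```

The design intuition is the one from the paragraph above: reading raw 768-dimensional vectors is like reading a proof written out in prose, while a short sequence of discrete tokens drawn from a shared vocabulary is the compact notation a human can actually scan and compare.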

Planning and ideation in lucrative domains (e.g. scientific research) would remain difficult for humans to do, but easy for humans to verify, giving humanity deeper oversight abilities to help keep AGI in check. This might mean unraveling plans for human disempowerment, getting our hands on latent knowledge of advanced technologies, and so on. In a sense, the model would be transparent from day one, before any dedicated transparency tool was even brought in to timidly pry open the black box.