Forecasting forces predictive world models to internalize meaningful representations
A simple way to force a machine learning model to build a meaningful representation of the world it inhabits is to task it with predicting the short-term future. If the model reliably predicts how its environment will unfold, it has plausibly learned a great deal about that environment's inner workings. Forecasting can thus be seen as a self-supervised learning paradigm, alongside recovery and matching. By the same logic, tasking a model with predicting another agent's behavior can yield a meaningful representation of that agent.
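A minimal sketch of this idea, under assumptions not taken from the text: the "world" is a hypothetical two-dimensional linear dynamical system with an unknown transition matrix `A`, and the predictive model is fit purely on one-step forecasting error (here via least squares, standing in for the usual gradient-based training). No labels are involved, yet the forecaster's learned weights end up encoding the environment's hidden dynamics.

```python
import numpy as np

# Hypothetical toy world: a linear dynamical system x_{t+1} = A @ x_t + noise.
# A is hidden from the model; it only sees the resulting trajectory.
rng = np.random.default_rng(0)
A = np.array([[0.9, 0.1],
              [-0.2, 0.8]])  # true (hidden) dynamics

# Roll out an observable trajectory.
X = [rng.normal(size=2)]
for _ in range(500):
    X.append(A @ X[-1] + 0.01 * rng.normal(size=2))
X = np.array(X)
inputs, targets = X[:-1], X[1:]  # (x_t, x_{t+1}) pairs: the "labels" are free

# Self-supervised forecasting: fit A_hat by minimizing one-step
# prediction error || inputs @ A_hat.T - targets ||^2.
A_hat = np.linalg.lstsq(inputs, targets, rcond=None)[0].T

# A_hat now closely approximates the hidden dynamics matrix A:
# predicting the short-term future forced the model to internalize
# how its world evolves.
print(np.round(A_hat, 2))
```

The key point is that the supervisory signal (the next observation) comes from the environment itself; nothing here would change if the trajectory were replaced by any other temporally ordered data stream.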