In the context of autoregressive models, prompts are often seen as "programs" which specify behaviors for the model to "run" next. What if we instead framed prompts as *resonance chambers* of different shapes and sizes, amplifying certain dynamics while dampening others? The goal would then be to devise constructs which reliably filter out misaligned spectral components, and to apply them directly to latent activations. One promising route relies on a new formalism from dynamical systems which defines an entire Boolean logic of dynamics. By taking the conjunction of whitelisted dynamics and negating blacklisted ones, we can craft filters and inject them into ML models while incurring only minimal compute overhead. The layer depth of the injection site might also offer a slider between influencing the way a model *perceives* the world and the way it *acts* in given circumstances.
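The note does not pin down the construction, but one minimal sketch of the whitelist-conjunction / blacklist-negation idea treats each dynamic as spanning a subspace of the latent space: conjunction with the whitelist becomes projection onto its span, negation of the blacklist becomes projection onto its orthogonal complement, and the composed operator is the filter applied to an activation vector. The function names (`projector`, `resonator`) and the subspace encoding are illustrative assumptions, not the formalism itself.

```python
import numpy as np

def projector(basis):
    # Assumed encoding: a dynamic's resonant directions are given as the
    # columns of `basis`; build the orthogonal projector onto their span.
    Q, _ = np.linalg.qr(basis)
    return Q @ Q.T

def resonator(whitelist, blacklist):
    # Sketch of the Boolean combination: keep whitelisted directions
    # (conjunction ~ P_white), then suppress blacklisted ones
    # (negation ~ I - P_black). A single matrix multiply at inference,
    # hence the minimal compute overhead.
    d = whitelist.shape[0]
    P_w = projector(whitelist)
    P_b = projector(blacklist)
    return (np.eye(d) - P_b) @ P_w

# Toy 4-d latent space: whitelist spans axes 0 and 1, blacklist spans axis 1.
white = np.eye(4)[:, :2]
black = np.eye(4)[:, [1]]
F = resonator(white, black)

h = np.array([1.0, 2.0, 3.0, 4.0])   # a latent activation at the injection site
h_filtered = F @ h                   # -> [1., 0., 0., 0.]
```

In a transformer this filter would sit at a chosen layer, rewriting the residual stream in place; varying that layer depth is the perceive-vs-act slider mentioned above.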

- [[how-do-latent-resonators-relate-to-concrete-challenges-in-alignment]]
- [[how-could-latent-resonators-help-avoid-hfdt-takeover]]
- [[what-interventions-do-latent-resonators-afford]]