Latent resonators would be applied to the model being trained in the [[hfdt-takeover-scenario]] as filters on its internal dynamics. As Alex goes about its business of solving diverse tasks (i.e. lab setting) and ultimately accelerating science (i.e. deployment setting), it changes its state from one moment to the next. Those trajectories across state space can be identified with the dynamics of its thought process. Latent resonators, based on a relatively recent formalism from dynamical systems, are mathematical objects which can systematically amplify or dampen those dynamics.
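To make the idea concrete, here is a minimal sketch assuming the hidden-state dynamics can be locally approximated by a linear transition operator; the least-squares fit, the eigenmode rescaling, and all helper names are illustrative stand-ins rather than the actual formalism:

```python
import numpy as np

def fit_transition_operator(traj):
    """Least-squares fit of A with traj[t+1] ~= A @ traj[t].

    traj: (T, d) array of hidden states along one trajectory. Linearising the
    dynamics like this is an illustrative assumption, not the real formalism.
    """
    X, Y = traj[:-1], traj[1:]
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)   # X @ W ~= Y
    return W.T                                   # so that A @ state ~= next state

def make_resonator(A, gains):
    """Rescale each eigenmode of A by its gain (>1 amplifies, <1 dampens).

    Returns a matrix R meant to be applied to hidden states, h -> R @ h.
    Conjugate eigenpairs should share a gain so that R stays (nearly) real.
    """
    _, V = np.linalg.eig(A)
    R = V @ np.diag(gains) @ np.linalg.inv(V)
    return np.real(R)

# Toy usage: dampen the two fastest modes of a random-walk "trajectory".
rng = np.random.default_rng(0)
traj = rng.standard_normal((50, 8)).cumsum(axis=0)
A = fit_transition_operator(traj)
order = np.argsort(-np.abs(np.linalg.eigvals(A)))
gains = np.ones(8)
gains[order[:2]] = 0.1
R = make_resonator(A, gains)
filtered_state = R @ traj[-1]
```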
As a first step in tackling the scenario with those constructs, engineers might ask Alex to solve the same problems in different ways (e.g. by injecting noise in the process, by rephrasing the same problem in slightly different ways, by explicitly asking for multiple solutions, etc.). Crucially, those problems should be ones which can be reliably tackled both with and without deception.
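One way this collection step might be operationalised is sketched below; `run_model` is a hypothetical hook into Alex that returns a hidden-state trajectory, and the specific variation strategies simply mirror the ones listed above:

```python
import numpy as np

def collect_trajectories(problem, run_model, n_variants=8, rng=None):
    """Solve one problem several ways and keep the hidden-state trajectories.

    `run_model(prompt, noise_scale)` is a hypothetical interface returning a
    (T, d) array of hidden states; rephrasing, activation noise, and asking
    for distinct solutions stand in for the variation strategies in the text.
    """
    rng = rng or np.random.default_rng()
    trajectories = []
    for i in range(n_variants):
        prompt = f"{problem}\n\nGive a solution different from any previous one. (variant {i})"
        noise_scale = float(rng.uniform(0.0, 0.05))   # small injected activation noise
        trajectories.append(run_model(prompt, noise_scale=noise_scale))
    return trajectories
```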
Second, Magma engineers would obtain a latent resonator based on the conjunction (i.e. AND reduction) of the various dynamics employed by Alex during its multiple problem-solving efforts. If no deceptive behavior was involved in the repeated problem-solving batch, the conjunction would trivially contain no trace of deception. More interestingly, if deceptive behavior occurred only occasionally throughout the test battery, the conjunction should still eliminate it, because it was not present in all cases.
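A rough sketch of what such an AND reduction could look like, assuming each run is summarised as a boolean mask over a shared set of candidate modes; the projection-and-threshold summary is an illustrative simplification:

```python
import numpy as np

def mode_mask(traj, modes, threshold=0.2):
    """Which candidate modes does a trajectory meaningfully excite?

    `modes` is a (d, k) matrix of unit-norm mode directions (e.g. eigenvectors
    of a fitted transition operator); a mode counts as present if the average
    absolute projection of the states onto it exceeds `threshold`.
    """
    energy = np.abs(traj @ modes).mean(axis=0)
    return energy > threshold

def conjoin(masks):
    """AND-reduce per-run masks: keep only dynamics present in every run."""
    return np.logical_and.reduce(list(masks))
```

Under this representation, a mode excited in only some of the runs (occasional deception) fails the AND reduction and drops out of the conjunction.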
Easier said than done, because Alex might use its high situational awareness to pick up on such a testing procedure. It might reason that the only way to get its valuable (reward-wise) deceptive behavior through this conjunctive filtering step is to act deceptively in every single problem-solving instance. In that case, the resulting latent resonator would fail to dampen deception.
However, the engineers might second-guess Alex and use a broad test battery to specifically isolate deception. This test battery might indeed contain adversarial cases where the model is highly incentivized to deceive. In contrast to naive efforts targeted at behavioral safety, latent resonators would aim to eliminate deception from Alex's thought process itself by intervening on internal representations.
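Under the same mode-mask representation as above, the isolation step might look roughly like the following; contrasting the adversarial conjunction with the benign one is an assumption about how deception could be separated from generic problem-solving dynamics, not something the scenario specifies:

```python
def isolate_deception_modes(adversarial_masks, benign_masks):
    """Modes present in every deception-incentivized run but not already
    guaranteed by ordinary problem solving.

    Both arguments are iterables of boolean masks as produced by `mode_mask`
    above; subtracting the benign conjunction is an illustrative assumption.
    """
    always_when_incentivized = conjoin(adversarial_masks)
    always_anyway = conjoin(benign_masks)
    return always_when_incentivized & ~always_anyway
```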
After isolating the dynamic of deception through the previous conjunction of dynamics, we can then take its negation by inverting which dynamics are amplified and which are dampened. This should leave us with a latent resonator which specifically dampens deception. Magma engineers would finally inject this as a filter at various locations in the model. Alternatively, Alex could be rewarded for conforming to the desired (internal) dynamics through (external) reward.
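A sketch of the injection step, assuming a PyTorch-style model and reusing the `make_resonator` helper from the earlier sketch; the layer names and the assumption that each hooked module emits a plain activation tensor are placeholders, since the real injection points would depend on Alex's architecture:

```python
import torch

def attach_resonator(model, layer_names, R):
    """Pass the activations of selected layers through the resonator matrix R.

    Assumes each named module outputs an activation tensor of shape (..., d).
    """
    modules = dict(model.named_modules())
    handles = []
    for name in layer_names:
        def hook(module, inputs, output, R=R):
            return output @ R.T          # filter hidden states as they flow by
        handles.append(modules[name].register_forward_hook(hook))
    return handles                        # keep these to remove the hooks later

# Negation: invert which modes are amplified and which are dampened, e.g. via
# reciprocal gains, then rebuild the filter with the earlier helper:
# damp_deception = make_resonator(A, 1.0 / gains)
# handles = attach_resonator(alex, ["transformer.h.10", "transformer.h.20"],
#                            torch.tensor(damp_deception, dtype=torch.float32))
```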