Personalized language model perplexity approximates surprisal

If a language model has been trained, fine-tuned, or prompted (few-shot) on the entirety of the documents a learner has read, then its perplexity in predicting the contents of a new text is a proxy for the surprisal the learner would experience. Unpredictable passages are both surprising for the learner and perplexing for the language model; their processing fluency is low. In this case, the language model approximates the learner’s own (mental) language model. Teacher-forcing, scoring each token given the true preceding text rather than the model’s own generations, might make this tractable. What’s more, resources could also be evaluated by how much they reduce the model’s perplexity on subsequent samples.
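A minimal sketch of the teacher-forced measurement, assuming a Hugging Face causal LM that has already been personalized on the learner’s reading history (plain `gpt2` stands in here as a placeholder):

```python
# Sketch: per-token surprisal and overall perplexity under teacher forcing.
# Assumes the model has been fine-tuned on the learner's prior reading;
# "gpt2" is only a stand-in for such a personalized model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # placeholder for a learner-personalized model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def surprisal_per_token(text: str) -> tuple[list[float], float]:
    """Return per-token surprisal (in nats) and overall perplexity for `text`."""
    input_ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        logits = model(input_ids).logits
    # Shift so that position i predicts token i+1 (teacher forcing on the true prefix).
    shift_logits = logits[:, :-1, :]
    shift_labels = input_ids[:, 1:]
    log_probs = torch.log_softmax(shift_logits, dim=-1)
    token_log_probs = log_probs.gather(-1, shift_labels.unsqueeze(-1)).squeeze(-1)
    surprisals = (-token_log_probs).squeeze(0).tolist()
    perplexity = torch.exp(-token_log_probs.mean()).item()
    return surprisals, perplexity

surprisals, ppl = surprisal_per_token("An unpredictable passage should score high.")
print(ppl)
```

To score a resource rather than a passage, one could compare this perplexity on held-out subsequent texts before and after the model is additionally fine-tuned or conditioned on that resource; the drop in perplexity would stand in for how much the resource prepared the learner.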

Backlinks