# Probing for Reading Times
Source: [https://arxiv.org/html/2604.18712](https://arxiv.org/html/2604.18712)
###### Abstract

Probing has shown that language model representations encode rich linguistic information, but it remains unclear whether they also capture cognitive signals about human processing. In this work, we probe language model representations for human reading times. Using regularized linear regression on two eye-tracking corpora spanning five languages (English, Greek, Hebrew, Russian, and Turkish), we compare the representations from every model layer against scalar predictors: surprisal, information value, and logit-lens surprisal. We find that the representations from early layers outperform surprisal in predicting early-pass measures such as first fixation and gaze duration. The concentration of predictive power in the early layers suggests that human-like processing signatures are captured by low-level structural or lexical representations, pointing to a functional alignment between model depth and the temporal stages of human reading. In contrast, for late-pass measures such as total reading time, scalar surprisal remains superior, despite its being a much more compressed representation. We also observe performance gains when using both surprisal and early-layer representations. Overall, we find that the best-performing predictor varies strongly depending on the language and eye-tracking measure.

[https://github.com/rycolab/llm-representations-rt](https://github.com/rycolab/llm-representations-rt)

## 1 Introduction

How long a reader's eyes linger on a linguistic unit is posited to reflect the cognitive effort required to process it (Just and Carpenter, 1980; Rayner, 1998). One prominent way to measure these durations is eye-tracking, which records fixation times at fine temporal resolution. A key question in psycholinguistics is which textual features best predict these reading times, and the predictive fit of a feature set serves as a measure of its *psychometric power* (Smith and Levy, 2013). To date, the most successful neural language model-based predictor has been surprisal (Hale, 2001; Levy, 2008; Wilcox et al., 2023).

Independently, a large body of work on *probing* has demonstrated that the internal representations of neural language models encode a wealth of linguistic information, including syntactic structure, morphological features, and semantic properties (Alain and Bengio, 2017; White et al., 2021; Immer et al., 2022; Kim et al., 2025). Yet probing studies have overwhelmingly focused on predicting properties of the linguistic signal itself from representations. While recent work has shown that language model representations align with neural signals measured via fMRI and EEG (Schrimpf et al., 2021; Caucheteux and King, 2022), it remains unclear to what extent a language model's internal representations can directly predict *behavioral* reading times: the fine-grained, unit-level processing effort that readers expend, as reflected in eye-tracking measures.

![Refer to caption](https://arxiv.org/html/2604.18712v1/x1.png)

Figure 1: Gaze duration and its prediction by different mGPT-derived feature settings. The excerpt is from a document in the MECO dataset. The y-axis represents reading time measured in milliseconds. True gaze duration is represented by a black line. The purple line represents the predictions of a linear model trained on 5th-layer representations and standard surprisal. Note how the gaze duration and its predictions spike on units with high information content, such as *presided* and *conflict*.

In this work, we probe language model representations for human reading times. Using regularized linear regression, we predict unit-level reading times directly from the representations extracted at every layer of a neural language model. We compare these representation-based predictors against scalar baselines, namely surprisal, information value (Giulianelli et al., 2024b), and logit-lens surprisal (nostalgebraist, 2020; Kuribayashi et al., 2025), which compress the model's internal state into a single dimension. An illustration of this predictive task is provided in Figure 1, which shows the true aggregated unit-by-unit gaze duration of human readers and the gaze duration predicted by various predictor variables. We conduct our evaluation on two eye-tracking corpora, Provo (Luke and Christianson, 2018) and MECO (Siegelman et al., 2022), spanning five languages (English, Greek, Hebrew, Russian, and Turkish), using mGPT (Shliazhko et al., 2024), GPT-2 (Radford et al., 2019), and cosmosGPT (Kesgin et al., 2024). We evaluate the predictive power of representations from all layers for three reading time measures: first fixation duration, gaze duration, and total reading time.

Our results reveal clear differences across reading time modalities. In English, representations from early layers tend to outperform surprisal in predicting early-pass measures, such as first fixation duration and gaze duration, suggesting that features relevant to initial lexical access and local structural encoding are accessible in internal states beyond what surprisal captures. In contrast, for late-pass measures such as total reading time, scalar predictors, especially surprisal and logit-lens surprisal, are often competitive with or superior to high-dimensional representations. We also observe substantial cross-lingual variation in the relative predictive power of scalar and representation-based predictors. In Greek, Hebrew, Russian, and Turkish, scalar predictors are frequently as strong as or stronger than representations, depending on the eye-tracking measure. We further find that combining surprisal with layer-wise representations frequently improves predictive performance over representations alone, although the gains over scalar baselines are less consistent. Overall, our findings show that the psychometric power of language models depends strongly on the reading-time measure, the model layer, and the language under study, rather than being captured by a single predictor across all settings.

## 2 Preliminaries

##### Language Models.

We adopt the formulation of Kiegeland et al. (2026), who distinguish the abstract linguistic *units* that humans process, over which reading times are modeled, from the *symbols* that the language model outputs. Throughout this section, we present surprisal theory and our predictors in terms of units. We discuss how to reconcile this formulation with a language model defined over tokens in §5.1. Let $U$ be a countable set of units. A *string* $\bm{u} = u_1 \dots u_T$ is a finite sequence of units $u_t \in U$. We write $\bm{u}_{<t} = u_1 \dots u_{t-1}$ for the prefix of $\bm{u}$ up to but not including position $t$. We denote string concatenation by juxtaposition, i.e., $\bm{u}\bm{u}'$ denotes the concatenation of $\bm{u}$ and $\bm{u}'$. With $U^\ast$, we denote the Kleene closure of $U$, i.e., the set of all finite strings over $U$. A *language model* is a probability distribution $p$ over $U^\ast$. Every language model $p$ induces a *prefix probability*, defined as

$$\overrightarrow{p}(\bm{u}) \overset{\text{def}}{=} \sum_{\bm{u}' \in U^\ast} p(\bm{u}\bm{u}'). \tag{1}$$

Then, define the *conditional prefix probability* as

$$\overrightarrow{p}(u \mid \bm{u}) \overset{\text{def}}{=} \frac{\overrightarrow{p}(\bm{u}u)}{\overrightarrow{p}(\bm{u})}. \tag{2}$$

By the chain rule of probability, the language model factorizes autoregressively as

$$p(\bm{u}) = \overrightarrow{p}(\textsc{eos} \mid \bm{u}) \prod_{t=1}^{T} \overrightarrow{p}(u_t \mid \bm{u}_{<t}), \tag{3}$$

where $\textsc{eos}$ is a distinguished end-of-string symbol. Let $\overline{U} \overset{\text{def}}{=} U \cup \{\textsc{eos}\}$.

##### Neural Language Models.

Modern language models, such as those based on the transformer architecture (Vaswani et al., 2017), parameterize the conditional distributions above through a stack of $L$ layers. The input layer maps each symbol $u \in \overline{U}$ to a vector $\mathbf{h}_0(u) \in \mathbb{R}^D$, and each subsequent layer computes a representation as a function of the previous layer's representations. Let $\bm{u} = u_1 \dots u_T$ be a string over $\overline{U}$; then, for each layer $\ell \in \{1, \dots, L\}$, we define

$$\mathbf{h}_\ell(\bm{u}) \overset{\text{def}}{=} f_\ell\bigl(\mathbf{h}_{\ell-1}(u_1), \dots, \mathbf{h}_{\ell-1}(u_T)\bigr), \tag{4}$$

where $\mathbf{h}_\ell(\bm{u})$ denotes the layer-$\ell$ representation at the final unit position $T$ and $f_\ell\colon (\mathbb{R}^D)^\ast \to \mathbb{R}^D$ denotes the transformation at layer $\ell$. The last-layer representation is then projected onto $\Delta(\overline{U})$ as

$$\overrightarrow{p}(\cdot \mid \bm{u}) = \operatorname{softmax}\bigl(\mathbf{W}\, g(\mathbf{h}_L(\bm{u})) + \mathbf{b}\bigr), \tag{5}$$

where $g\colon \mathbb{R}^D \to \mathbb{R}^D$ is a final (non-linear) transformation applied before the linear projection (e.g., layer normalization), $\mathbf{W} \in \mathbb{R}^{|\overline{U}| \times D}$ is the projection matrix, and $\mathbf{b} \in \mathbb{R}^{|\overline{U}|}$ is a bias term. The final representation $\mathbf{h}_L$ is directly used to compute the distribution over the next symbols.[^1] However, the intermediate representations $\mathbf{h}_1(\bm{u}), \dots, \mathbf{h}_{L-1}(\bm{u})$ encode linguistic information themselves (Alain and Bengio, 2017; Immer et al., 2022). In this work, we investigate whether these representations also encode information predictive of human reading behavior.

[^1]: Parameterized LMs typically include a special end-of-sequence token, for which representations can be computed and on which next-token predictions can in principle be conditioned. From the perspective of an LM as a distribution over strings, however, conditioning on $\textsc{eos}$ is not well defined. In this work, $\textsc{eos}$ may only appear as the final unit of a string $\bm{u}$, where it is used to model wrap-up effects (see §3).

## 3 Psychometric Data

In this work, we study how well we can predict real-valued measurements of human processing effort collected during natural reading. Formally, for a unit $u_t \in \overline{U}$ read in context $\bm{u}_{<t} \in U^\ast$, we observe a reading time $r(u_t, \bm{u}_{<t}) \in \mathbb{R}$; when $u_t = \textsc{eos}$, this corresponds to utterance-final *wrap-up* cost (Rayner et al., 2000; Meister et al., 2022), reflecting the additional processing cost associated with integrating the full utterance. Eye-tracking experiments yield several such measurements per unit, corresponding to different stages of processing: first fixation duration (the duration of the initial fixation), gaze duration (the sum of all fixations before the eyes leave the unit), and total reading time (the sum of all fixations, including regressions). Our goal is to predict these reading times from features derived from a language model.

### 3.1 Previously Proposed Predictors

We now discuss three previously proposed *scalar* predictors of human reading time.

##### Surprisal Theory.

Surprisal theory (Hale, 2001; Levy, 2008) posits that reading times are an affine function of *surprisal*, the negative log-probability of a unit under the reader's implicit language model. Formally, let $p_{\mathrm{H}}$ denote the *human language model*: the probability distribution that characterizes a reader's expectations over upcoming linguistic material. The *surprisal* of unit $u_t$ in context $\bm{u}_{<t}$ is then

$$s(u_t, \bm{u}_{<t}) \overset{\text{def}}{=} -\log p_{\mathrm{H}}(u_t \mid \bm{u}_{<t}), \tag{6}$$

and the theory predicts that reading time is an affine function of this quantity (Smith and Levy, 2013; Shain et al., 2024). Since $p_{\mathrm{H}}$ is not directly observable, it is standard practice to approximate it with a trained language model $p$, and empirical support for the resulting predictions has been found across diverse datasets and languages (Wilcox et al., 2023). Note that the predictive power of $p$-derived surprisal depends on how well $p$ approximates $p_{\mathrm{H}}$, and this approximation quality likely varies across languages and models.
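
To make the estimation concrete, the following is a minimal sketch of how unit-level surprisal can be computed from an off-the-shelf autoregressive model with the Hugging Face `transformers` library. The model id `gpt2`, the whitespace unit segmentation, the helper name, and the example sentence are illustrative assumptions, not the paper's exact pipeline.

```python
# Minimal sketch: unit-level surprisal as the sum of per-token surprisals
# (in nats). The first token of the string receives no surprisal here,
# since it has no left context.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# add_prefix_space is required by the GPT-2 tokenizer for pre-split input
tok = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def unit_surprisals(units):
    enc = tok(units, is_split_into_words=True, return_tensors="pt")
    ids = enc.input_ids[0]
    with torch.no_grad():
        logits = model(input_ids=ids.unsqueeze(0)).logits[0]
    logp = torch.log_softmax(logits[:-1], dim=-1)      # p(token_i | tokens_<i)
    s_tok = -logp.gather(1, ids[1:, None]).squeeze(1)  # surprisal of tokens 1..M-1
    s_unit = [0.0] * len(units)
    for pos, w in enumerate(enc.word_ids(0)):          # token -> unit alignment
        if w is not None and pos > 0:
            s_unit[w] += s_tok[pos - 1].item()
    return s_unit

print(unit_surprisals("The cat sat on the mat .".split()))
```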

##### Information Value.

Shannon surprisal is the standard metric for quantifying the unexpectedness of a linguistic unit under a model $p$, but other operationalizations of information content exist; for an overview, see Giulianelli et al. (2024b). In this paper, we include next-unit information value in addition to standard surprisal. Next-unit information value measures the expected distance between the observed next unit $u_t$ and alternative continuations $u \in \overline{U}$ sampled from the model's predictive distribution. This corresponds to a special case of the general string-level formulation of information value, where continuations are restricted to a single unit (cf. Giulianelli et al., 2023, 2024b). Formally, it is defined as

$$v(u_t, \bm{u}_{<t}) \overset{\text{def}}{=} \operatorname{\mathbb{E}}_{u \sim \overrightarrow{p}(\cdot \mid \bm{u}_{<t})}\bigl[\mathrm{d}(u_t, u)\bigr], \tag{7}$$

where $\mathrm{d}\colon \overline{U} \times \overline{U} \to \mathbb{R}_{\geq 0}$ is a distance function, typically operationalized as the cosine distance between the contextual representations $\mathbf{h}_\ell(\bm{u}_{<t} u_t)$ and $\mathbf{h}_\ell(\bm{u}_{<t} u)$ at a given layer $\ell$ (Giulianelli et al., 2024b, 2026). This makes information value a natural point of comparison for our representation-based predictors.
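
A Monte Carlo estimate of Eq. 7 might look as follows. This is a sketch, not the paper's exact pipeline: `layer_rep` (the pooled layer-$\ell$ representation of a context plus its final unit) and `sample_unit` (a single next-unit sample from the model) are hypothetical helpers.

```python
# Hedged sketch of next-unit information value (Eq. 7): a Monte Carlo
# estimate of the expected cosine distance between the observed unit's
# representation and representations of k sampled alternatives.
import torch
import torch.nn.functional as F

def information_value(prefix, unit, layer_rep, sample_unit, k=50):
    """E_{u ~ p(.|prefix)}[cos_dist(h(prefix + unit), h(prefix + u))]."""
    h_obs = layer_rep(prefix + [unit])
    dists = []
    for _ in range(k):
        u_alt = sample_unit(prefix)              # one sample from p(. | prefix)
        h_alt = layer_rep(prefix + [u_alt])
        dists.append(1.0 - F.cosine_similarity(h_obs, h_alt, dim=-1))
    return torch.stack(dists).mean().item()
```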

##### Logit Lens.

Standard surprisal is computed from the final layer's next-token distribution. The *logit lens* (nostalgebraist, 2020; Kuribayashi et al., 2025) asks what distribution an intermediate layer would induce if its representation were fed directly to the output head. Concretely, it applies the *same* projection matrix $\mathbf{W}$, bias $\mathbf{b}$, and layer normalization $\mathrm{LN}$ that are used after the final layer to the representation of an earlier layer $\ell$:

$$q_\ell(\cdot \mid \bm{u}) \overset{\text{def}}{=} \operatorname{softmax}\bigl(\mathbf{W}\,\mathbf{h}_\ell(\bm{u}) + \mathbf{b}\bigr). \tag{8}$$

Because $\mathbf{W}$ and $\mathbf{b}$ are estimated only to decode the final layer's representation, there is no guarantee that this projection yields a meaningful distribution at earlier layers; the intermediate representations may not be linearly decodable in vocabulary space. In practice, however, the logit lens has been found to produce interpretable predictions at many layers (nostalgebraist, 2020). We define the *logit-lens surprisal* $s^{\textsc{ll}}$ at layer $\ell$ as

$$s^{\textsc{ll}}_\ell(u_t, \bm{u}_{<t}) \overset{\text{def}}{=} -\log q_\ell(u_t \mid \bm{u}_{<t}). \tag{9}$$
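
For a GPT-2-style model, the logit lens can be sketched by routing an intermediate hidden state through the final layer norm and the unembedding head. The module names (`transformer.ln_f`, `lm_head`) match the Hugging Face GPT-2 implementation; the model id and example input are illustrative.

```python
# Hedged sketch of Eqs. 8-9: per-token logit-lens surprisal at a chosen layer.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def logit_lens_surprisal(text, layer):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)
        # hidden_states[0] is the embedding layer; note that in GPT-2 the
        # *last* element already has ln_f applied, so use intermediate layers.
        h = out.hidden_states[layer]
        logits = model.lm_head(model.transformer.ln_f(h))
    logp = torch.log_softmax(logits[0, :-1], dim=-1)
    return -logp.gather(1, ids[0, 1:, None]).squeeze(1)  # per-token nats

print(logit_lens_surprisal("The cat sat on the mat.", layer=6))
```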

### 3.2 The Limitations of Scalar Predictors

All three predictors introduced above (surprisal, information value, and logit-lens surprisal) share a fundamental limitation: each compresses a representation extracted from a language model into a single scalar. While such scalar predictors have served as useful proxies for human processing effort, it is natural to suspect that the entire representation may be a more informative predictor. Moreover, in the case of surprisal, larger models that achieve lower cross-entropy on held-out text often evince poorer fit to human reading times (Oh and Schuler, 2023; Kuribayashi et al., 2024), and recent fine-grained modeling suggests that much of the variance typically attributed to surprisal may instead be explained by skip rates (Re et al., 2025). Finally, recent evidence also indicates that the internal layers of language models contain representations that align more closely with human behavioral and neural signals than any single scalar derived from them (Schrimpf et al., 2021; Caucheteux and King, 2022; Kuribayashi et al., 2025). Taken together, this suggests that scalar compression, whether through surprisal (which reduces the final layer to a log-probability), information value (which summarizes representational distance as a single expectation), or logit-lens surprisal (which projects an intermediate layer through the output head), discards much of the psychometrically relevant information contained in the model's internal representations.

## 4 Methods

To evaluate whether the representations induced by neural language models serve as useful predictors of human processing effort, we apply various forms of regularized linear regression. Controlling for standard psycholinguistic factors, we compare the predictive power of representations, information value, standard surprisal, and layer-wise logit-lens surprisal.

### 4.1 Linear Regression

To predict reading times, we follow standard psycholinguistic practice and use linear models (Goodkind and Bicknell, 2018; Wilcox et al., 2020). Formally, let $r(u_t, \bm{u}_{<t}) \in \mathbb{R}$ be a real-valued reading time measurement for a unit $u_t \in \overline{U}$ in context $\bm{u}_{<t} \in U^\ast$, and let $\mathbf{x}(u_t, \bm{u}_{<t}) \in \mathbb{R}^D$ be a column vector of predictor variables.[^3] We predict reading times as

$$\widehat{r}_{\bm{\beta}}(u_t, \bm{u}_{<t}) \overset{\text{def}}{=} \mathbf{x}(u_t, \bm{u}_{<t})^\top \bm{\beta}, \tag{10}$$

where $\bm{\beta} \in \mathbb{R}^D$ is a parameter vector. Let the corpus consist of $N$ strings $\bm{u}^{(1)}, \dots, \bm{u}^{(N)}$, where string $\bm{u}^{(n)}$ has $T^{(n)}$ units plus $\textsc{eos}$. We estimate $\bm{\beta}$ by minimizing the per-string squared loss:

$$L_n(\bm{\beta}) \overset{\text{def}}{=} \sum_{t=1}^{T^{(n)}} \Bigl(r\bigl(u_t^{(n)}, \bm{u}_{<t}^{(n)}\bigr) - \widehat{r}_{\bm{\beta}}\bigl(u_t^{(n)}, \bm{u}_{<t}^{(n)}\bigr)\Bigr)^2 + \Bigl(r\bigl(\textsc{eos}, \bm{u}^{(n)}\bigr) - \widehat{r}_{\bm{\beta}}\bigl(\textsc{eos}, \bm{u}^{(n)}\bigr)\Bigr)^2. \tag{11}$$

Ordinary least squares estimates $\bm{\beta}$ by minimizing $L(\bm{\beta}) \overset{\text{def}}{=} \sum_{n=1}^{N} L_n(\bm{\beta})$. Following Wilcox et al. (2023) and Opedal et al. (2024), we do not apply any transformation (e.g., log or $z$-score) to the reading times before fitting the model, so that $\widehat{r}_{\bm{\beta}}$ is directly interpretable in milliseconds.

[^3]: All vectors in this paper are column vectors.

##### Regularized Linear Regression.

We also consider regularized variants. Ridge regression adds a squared $\lVert\cdot\rVert_2$ penalty:

$$L_{\mathrm{R}}(\bm{\beta}) \overset{\text{def}}{=} L(\bm{\beta}) + \lambda \lVert\bm{\beta}\rVert_2^2, \tag{12}$$

where $\lambda \geq 0$ controls the strength of regularization. LASSO regression instead uses an $\lVert\cdot\rVert_1$ penalty:

$$L_{\mathrm{L}}(\bm{\beta}) \overset{\text{def}}{=} L(\bm{\beta}) + \lambda \lVert\bm{\beta}\rVert_1. \tag{13}$$

In contrast to ridge regression, LASSO induces sparse solutions in $\bm{\beta}$, acting as a form of feature selection.
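
As a rough sketch of how these objectives can be instantiated in practice, the `scikit-learn` estimators below implement the models of Eqs. 10–13; the synthetic feature matrix `X` (one row per unit) and target `y` (reading times in milliseconds) are placeholders, reused by the sketches that follow.

```python
# Hedged sketch: OLS, ridge (Eq. 12), and LASSO (Eq. 13) probes mapping
# layer representations to reading times. X and y are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 256))                  # e.g., layer-5 representations
y = rng.normal(loc=250.0, scale=50.0, size=500)  # reading times in ms

models = {
    "ols": LinearRegression(),
    "ridge": Ridge(alpha=1.0),                   # alpha plays the role of lambda
    "lasso": Lasso(alpha=0.1, max_iter=10_000),
}
for name, m in models.items():
    m.fit(X, y)
    print(name, "nonzero coefficients:", int(np.count_nonzero(m.coef_)))
```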

##### Tuning.

We tune the regression models by selecting (i) whether to apply regularization and, if so, (ii) whether to use LASSO or ridge regression, along with (iii) the corresponding penalty weight. Model selection is performed using the test mean squared error (MSE) on a fixed train-test split:

$$\mathrm{MSE}(\bm{\beta}) \overset{\text{def}}{=} \frac{L(\bm{\beta})}{\sum_{n=1}^{N} \bigl(T^{(n)} + 1\bigr)}. \tag{14}$$

To avoid leakage, the documents used in this tuning test split (5 documents in Provo and 2 documents in MECO) are excluded from all subsequent experiments. We evaluate penalty weights in the range $[0.001, 10]$, performing hyperparameter selection independently for each predictor type, layer, and dependent variable. This procedure is applied to the baseline and surprisal models, and to each layer-wise instance (layers 1–24 for mGPT and 1–12 for GPT-2 and cosmosGPT) of the information value, logit-lens, and representation predictors.
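
Continuing the sketch above, the selection step can be approximated as a small grid search over {none, ridge, LASSO} and penalty weights in $[0.001, 10]$, scored by held-out MSE; the split and grid spacing here are illustrative, not the paper's exact documents or grid.

```python
# Sketch of the tuning procedure (Eq. 14): choose the regularizer family and
# penalty weight that minimize MSE on a held-out split. Reuses X, y above.
import numpy as np
from sklearn.linear_model import Lasso, LinearRegression, Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)

candidates = [("none", LinearRegression())]
for lam in np.logspace(-3, 1, 9):                # spans [0.001, 10]
    candidates.append((f"ridge(lam={lam:.3g})", Ridge(alpha=lam)))
    candidates.append((f"lasso(lam={lam:.3g})", Lasso(alpha=lam, max_iter=10_000)))

def heldout_mse(model):
    return mean_squared_error(y_te, model.fit(X_tr, y_tr).predict(X_te))

best_name, _ = min(candidates, key=lambda c: heldout_mse(c[1]))
print("selected:", best_name)
```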

##### Cross-Validation.

We evaluate each combination of predictor type and reading time measure using 10-fold cross-validation, run separately on Provo and on each language subset of MECO.
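
A minimal version of this evaluation loop, again reusing the placeholder `X` and `y` from the regression sketch:

```python
# 10-fold cross-validation for one predictor/measure combination,
# reporting the mean held-out MSE across folds.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

fold_mses = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    probe = Ridge(alpha=1.0).fit(X[tr], y[tr])
    fold_mses.append(mean_squared_error(y[te], probe.predict(X[te])))
print(f"mean MSE over folds: {np.mean(fold_mses):.1f} ms^2")
```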

## 5 Experimental Setup

### 5.1 Feature Estimation

##### Models.

We use surprisal estimates from mGPT (Shliazhko et al., 2024), a multilingual model based on the GPT-3 (Brown et al., 2020) architecture. mGPT was trained on 61 languages from 25 language families, which enables us to experiment on both the Provo (Luke and Christianson, 2018) and the MECO (Siegelman et al., 2022) data. It has 24 layers, each with an embedding dimension of 2048. For additional experiments, we use two monolingual models: the English monolingual GPT-2 Small (Radford et al., 2019) on the Provo data and the English subset of MECO; and the Turkish cosmosGPT (Kesgin et al., 2024) on the Turkish subset of MECO. Both monolingual models have 12 layers with an embedding dimension of 768.

##### From Tokens to Units.

Let $p_\Sigma$ denote the token-level[^4] language model: a probability distribution over token strings $\Sigma^\ast$, where $\Sigma$ is a finite token alphabet, and let $\varphi\colon U^\ast \to \Sigma^\ast$ be a function that maps a unit string to a token string. We assume that $\varphi$ respects unit boundaries: no token spans two units, so any tokenization decomposes as $\varphi(u_1 \dots u_T) = \bm{\sigma}_1 \dots \bm{\sigma}_T$, where $\bm{\sigma}_t = \sigma_{t,1} \dots \sigma_{t,n_t}$ is the token sequence corresponding to unit $u_t$.[^5] Following standard practice (Wilcox et al., 2023), we define unit-level surprisal and logit-lens surprisal as the sum over $\bm{\sigma}_t$ of the per-token surprisals under $p_\Sigma$ and $q_\ell$ (Eq. 8), respectively.[^6] Unlike Kuribayashi et al. (2025), we do not include the tuned lens (Belrose et al., 2025), since mGPT does not have a pre-trained tuned-lens model; we do include logit-lens surprisal of the last layer, as it may differ from standard surprisal.[^7]

[^4]: See Gastaldi et al. (2025) and Vieira et al. (2025) for a formal treatment of tokenized and token-level language models.
[^5]: For the whitespace-delimited words used in our corpora and the token alphabets of mGPT, GPT-2, and cosmosGPT, this holds in practice: each token sits within a single word.
[^6]: See Giulianelli et al. (2024a), Pimentel and Meister (2024), Oh and Schuler (2024), and Kiegeland et al. (2026) for discussion of this choice and alternative approaches to calculating unit-level surprisal.
[^7]: The [Hugging Face documentation](https://huggingface.co/docs/transformers/en/main_classes/output) states that some models do in fact apply a function $g$ or further processing to the last hidden state when it is returned; this could affect surprisal and render it different from the corresponding final-layer logit lens.

Writing $\mathbf{h}_\ell(\bm{\sigma})$ for the layer-$\ell$ hidden state of $p_\Sigma$ at the final token of a token string $\bm{\sigma} \in \Sigma^\ast$, we define the unit-level representation of $\bm{u}_{<t} u_t$ as the mean over the $n_t$ tokens that correspond to $u_t$:

$$\mathbf{h}_\ell(\bm{u}_{<t} u_t) \overset{\text{def}}{=} n_t^{-1} \sum_{k=M-n_t+1}^{M} \mathbf{h}_\ell\bigl(\varphi(\bm{u}_{<t} u_t)_{\leq k}\bigr), \tag{15}$$

where $M$ denotes the total number of tokens in $\varphi(\bm{u}_{<t} u_t)$. Mean-pooling is one of several possible aggregations; we discuss this in the limitations section. For information value, Eq. 7 then applies with $\mathrm{d}$ computed as the cosine distance between these pooled representations, and we approximate the expectation by Monte Carlo with $k = 50$ continuations sampled from $p_\Sigma$.[^8]

[^8]: For each continuation, we generate tokens until an end-of-unit marker is encountered, or 3 tokens have been produced.
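
The pooling in Eq. 15 can be sketched as follows, under the same `transformers` setup and illustrative model id as the surprisal sketch in §3.1.

```python
# Hedged sketch of Eq. 15: one unit-level vector per word, obtained by
# mean-pooling the layer-l hidden states of the tokens inside that word.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2", add_prefix_space=True)
model = AutoModelForCausalLM.from_pretrained("gpt2").eval()

def unit_representations(units, layer):
    enc = tok(units, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc, output_hidden_states=True).hidden_states
    h = hidden[layer][0]                        # (num_tokens, D) at layer `layer`
    word_ids = enc.word_ids(0)                  # token -> unit alignment
    reps = []
    for w in range(len(units)):
        token_rows = [i for i, wid in enumerate(word_ids) if wid == w]
        reps.append(h[token_rows].mean(dim=0))  # mean over the unit's n_t tokens
    return torch.stack(reps)                    # (num_units, D)

reps = unit_representations("The cat sat on the mat .".split(), layer=5)
```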

![Refer to caption](https://arxiv.org/html/2604.18712v1/x2.png)

Figure 2: MSE for baseline, surprisal, representations, information value, and logit-lens surprisal on the Provo and MECO data across the 24 layers of mGPT and eye-tracking measures.

### 5.2 Data

We use reading time data from two commonly used corpora in psycholinguistics. The Provo corpus (Luke and Christianson, 2018) is a dataset of eye-movement behavior comprising eye-tracking recordings from 84 native English readers as they read 55 short English passages drawn from a range of fiction and nonfiction sources. The Multilingual Eye Movement Corpus (MECO; Siegelman et al., 2022) is a large multilingual corpus with eye movement data from L1 speakers reading 12 Wikipedia-style passages in 13 languages. To capture a variety of different language families, we select the English, Greek, Hebrew, Russian, and Turkish portions of the dataset for our experiments.

##### Data Preprocessing.

We quantify unit-by-unit reading time using three standard eye-tracking measures: *first fixation duration*, the duration of the first fixation on a unit during first-pass reading; *gaze duration*, the sum of all consecutive fixations on the unit from first entry until a fixation leaves it for the first time; and *total reading time*, the sum of all fixations on the unit across the entire trial, including any later re-reading due to regressions. In line with established psycholinguistic theory, data collected during eye-tracking experiments can be divided into early-pass and late-pass measures (Rayner and Fischer, 1996). First fixation and gaze duration are considered early-pass measures, as they are primarily sensitive to the initial stages of unit recognition and lexical access (Rayner, 1998; Cook and Wei, 2019). Specifically, first fixation duration is viewed as a marker of initial orthographic and phonetic activation (Rayner, 2009). Conversely, total reading time is a late-pass measure, which is interpreted as a marker of higher-level post-lexical processing, reflecting the cognitive effort required for syntactic integration, discourse-level comprehension, and the resolution of processing difficulties (Clifton et al., 2007).

## 6 Results

We now present results with surprisal, representations, information value, and logit lens computed using mGPT. For results using GPT-2 Small and cosmosGPT, see App. C.

### 6.1 Predictive Power of Representations

Figure 2 and Table 1 compare predictive power across layers and eye-tracking measures for Provo. Performance is highest for early-layer representations (1–10), declines in intermediate layers, and recovers at the final layer. The best representation layer is comparable to or stronger than surprisal, though surprisal sometimes wins despite its much lower dimensionality. Combining representations with surprisal improves over either alone; these gains tend to be significant over representations but not over scalars (App. B).

Table 1: $\Delta_{\text{MSE}}$ (baseline minus target) of ten-fold cross-validation for models trained on baseline features and mGPT-derived surprisal, representations ($\mathbf{h}$), information value ($v$), and logit-lens surprisal ($s^{\textsc{ll}}$) on the Provo and MECO data across the 24 layers of mGPT and eye-tracking measures. For each measure, we report the lowest MSE over layers and the corresponding layer index $\ell$. Bold indicates the best condition per row. Asterisks (*) denote models that significantly outperform the respective models trained on permuted reading times, according to a one-sided paired $t$-test ($\alpha = 0.001$). Similarly, bullets ($\bullet$) indicate significance over the baseline.

### 6.2 Information Value and Logit Lens

Figure 2 and Table 1 compare the predictive power of information value and logit-lens surprisal against model representations and surprisal. Overall, these predictors show less variability across layers than model representations. For first fixation duration and total reading time, we find that information value is more predictive when computed from the early to intermediate layers than from the later ones. In contrast, logit-lens surprisal tends to perform best in the last layers.

### 6.3 Predictive Power across Languages

To assess the predictive power of representations across languages, we repeat the experiments for five languages from the MECO dataset: English, Greek, Hebrew, Russian, and Turkish. Consistent with the results for Provo, Figure 2 and Table 1 show that representations from early layers are the most predictive, with performance decreasing in intermediate layers before recovering in the final layer. An exception is gaze duration and total reading time for Greek, as well as total reading time for Turkish, where the representations from the last layer perform best. Similarly, we find that information value is most predictive at early and intermediate layers, while logit-lens surprisal has the highest predictive power at later layers. Moreover, we observe that Russian and Turkish are the only languages where the combination of representations and surprisal (Table 2) is not the best predictor for first fixation duration and gaze duration. Overall, we find that the best-performing predictor varies strongly depending on the language and eye-tracking measure.

### 6.4 Permuting Reading Times

To further test whether the observed predictive performance reflects a meaningful relationship between the predictors derived from language models and the reading time data, we conduct a permutation test on the dependent variable. Specifically, we refit the same models from §6.1 on training sets with their reading times randomly permuted across words, but without permuting the reading times of the respective validation sets. The layer-wise trends for representations resemble the non-permuted ones; however, mean squared error scores are higher across the board, and differences between the various sets of predictors tend to be smaller (App. E).
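
This control can be sketched as a variant of the cross-validation loop above, shuffling only the training targets:

```python
# Permutation control: train on shuffled reading times, evaluate against the
# *unpermuted* validation targets. Reuses the placeholder X, y from Section 4.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import KFold

rng = np.random.default_rng(0)
perm_mses = []
for tr, te in KFold(n_splits=10, shuffle=True, random_state=0).split(X):
    y_perm = rng.permutation(y[tr])         # break the X-y pairing in training
    probe = Ridge(alpha=1.0).fit(X[tr], y_perm)
    perm_mses.append(mean_squared_error(y[te], probe.predict(X[te])))
print(f"permuted-target MSE: {np.mean(perm_mses):.1f} ms^2")
```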

### 6.5 Linear Mixed-Effects Models

We extend Eq. 10 to account for subject- and document-level variability by fitting linear mixed-effects models (LMMs) with random intercepts for participants and documents on the MECO data; for high-dimensional representations, we first reduce the representations to 25 principal components. We selected the number of principal components using the scree-plot elbow criterion (Cattell, 1966). The results (Table 9) are broadly consistent with our main analyses, and significance over permuted reading times is preserved. At the same threshold ($\alpha = 0.001$), however, fewer predictors significantly outperform the baseline. This may follow from LMMs yielding more conservative effect estimates: by explicitly modeling reader and document variability, they isolate the predictor's contribution net of these sources, which simpler models can conflate into the effect itself.
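
One way to set up such an analysis in Python is sketched below with `statsmodels`; the synthetic data frame, column names, and the variance-components approximation of crossed participant and document intercepts are our own assumptions rather than the paper's exact specification.

```python
# Hedged sketch: PCA to 25 components, then an LMM with a random intercept
# for participant and a variance component for document (an approximation
# of fully crossed random intercepts in statsmodels).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n, D = 400, 64                                    # toy sizes; mGPT uses D = 2048
df = pd.DataFrame(rng.normal(size=(n, D)),
                  columns=[f"rep_{i}" for i in range(D)])
df["participant"] = rng.integers(0, 20, n)
df["document"] = rng.integers(0, 12, n)
df["rt"] = rng.normal(250.0, 50.0, n)             # placeholder reading times

pcs = PCA(n_components=25).fit_transform(df[[f"rep_{i}" for i in range(D)]])
for i in range(25):
    df[f"pc_{i}"] = pcs[:, i]

formula = "rt ~ " + " + ".join(f"pc_{i}" for i in range(25))
lmm = smf.mixedlm(formula, df, groups=df["participant"],
                  re_formula="1", vc_formula={"document": "0 + C(document)"})
print(lmm.fit().summary())
```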

## 7 Discussion

We now turn to our results and their main implications. Across predictors (representations, information value, and logit-lens surprisal), we find substantial differences in predictive power when they are extracted from different model layers.

##### Opposite Trends for Logit Lens.

Kuribayashi et al. (2025) report that logit-lens surprisal performs better at earlier layers for larger models and at later layers for smaller ones. We corroborate this for mGPT (1.3B) and GPT-2 (117M), where logit-lens surprisal peaks at later layers. However, representations show the opposite pattern, with higher predictive power at earlier layers, indicating that psychometric predictive power depends on the choice of predictor, not just model size. This contrast should be interpreted with care: representations are evaluated via linear regression on raw vectors, whereas logit-lens predictors undergo additional transformations (layer normalization, projection into vocabulary space), so the difference may partly reflect accessibility to a linear decoder rather than representational content alone. The superiority of early-layer representations for early-pass measures is consistent with the view that initial reading stages rely on lexical and morpho-syntactic features preserved in earlier layers.[^9]

[^9]: More broadly, this pattern suggests a functional alignment between the layer hierarchy of the transformer and the temporal stages of human reading (early layers correspond to early-pass measures, while later layers correspond to late-pass measures), reminiscent of the depth–time correspondence observed in neuroscience (Caucheteux and King, 2022).

##### Cross-Lingual Differences.

On the MECO dataset, the performance of the different settings on first fixation duration is generally consistent across languages, as well as with the Provo data. However, we do not observe the same consistency across languages for gaze duration or total reading time, despite the fact that the non-baseline predictors are all derived from mGPT.[^10] In fact, Kuribayashi et al. (2025) also observed mixed results in multilingual settings, possibly due to latent effects of English on the processing of the target language (Wendler et al., 2024). We conducted experiments with a monolingual Turkish model to investigate this possibility (App. C); yet, we saw no discernible difference from the mGPT results for Turkish. In a qualitative example of a particular clause in different languages (App. D), we observed some intra-lingual patterns that seem to modulate reading times: Greek appears more verbose and more likely to begin a structural unit with lower information content, because it is a language that favors the use of articles (e.g., even ahead of proper names), while Turkish word order places verbs at the end of clauses. However, it is unclear whether there is a connection between these patterns and the differences in the performance of representations on reading times.

[^10]: Shliazhko et al. (2024) report worse downstream performance for mGPT on Greek and Turkish, while Arnett and Bergen (2025) find that differences in the downstream performance of language models seem to be caused by data quantity disparities rather than model architecture.

##### Variance of Predictors across Layers.

Finally, the validation MSE of both information value and logit-lens surprisal varies less across layers than that of representation-based predictors. One plausible reason is that they compress each layer's state into a single scalar, whereas representations expose a high-dimensional feature whose usefulness can change more sharply with layer depth.

##### Future Work.

While this work provides a controlled comparison between several language-model-derived predictors across layers and reading time measures, it leaves open extensions that analyze internal representations using dimensionality reduction or feature selection techniques. In particular, kernel principal component analysis could be used to map representations into a non-linear feature space prior to probing, allowing us to assess whether reading-time-relevant structure is present but not linearly accessible in the original representations. Comparing layer-wise performance before and after such transformations would help disentangle representational content from linear decodability and clarify whether some of the observed trends are due to differences in how easily the information can be recovered by the probe, rather than differences in the information itself. Moreover, our experiments are limited to mGPT, GPT-2, and cosmosGPT, and could thus be extended to other monolingual and/or larger language models. Finally, we believe that the diverging performance of representations and logit-lens surprisal across layers offers interesting avenues to test with combinations of different predictors.

## 8 Conclusion

In this work, we investigated the psychometric power of language models, revisiting the hypothesis that reading times are best predicted by the scalar measure of surprisal and testing whether a neural language model's internal representations serve as more accurate predictors of human processing effort. While controlling for standard psycholinguistic factors, we compared the predictive power of language model representations, information value, and layer-wise logit-lens surprisal. Across two eye-tracking corpora and five typologically distinct languages, we identify differences across reading time modalities: early-layer representations of mGPT and GPT-2 are superior at predicting early-pass measures (first fixation and gaze duration), while scalar surprisal remains superior for late-pass measures (total reading time). These results suggest that some psychometric power of language models is encoded within their internal representations beyond what surprisal captures. Notably, language model representations show more variability across layers than information value and logit-lens surprisal. Finally, we find that the most effective predictor varies across the languages and eye-tracking measures analyzed.

## Limitations

Our study is subject to several limitations. First, our analysis is restricted to eye-tracking data. While these measures provide high-fidelity temporal markers of reading effort, future work is needed to determine whether the observed patterns generalize to other modalities, such as self-paced reading or neuroimaging (EEG/fMRI). Second, computational constraints limited our evaluation to models of up to 1.3B parameters (mGPT). It remains unclear whether the observed functional divergence between representations and surprisal persists, or evolves, in larger models with higher-dimensional representations. While prior work (Oh and Schuler, [2023](https://arxiv.org/html/2604.18712#bib.bib28); Shain et al., [2024](https://arxiv.org/html/2604.18712#bib.bib39); Kuribayashi et al., [2024](https://arxiv.org/html/2604.18712#bib.bib21)) found that surprisal from smaller models often fits reading times better than that of larger language models, this scaling behavior may not generalize to a neural language model's internal representations.

Next, we acknowledge that probing experiments can lead to false discoveries if random noise in representations is not properly accounted for (Méloux et al., [2025](https://arxiv.org/html/2604.18712#bib.bib26)). While comparing our results against randomly initialized mGPT, GPT-2, and cosmosGPT baselines was not computationally feasible, we mitigate this concern through permutation testing, which controls for chance associations between predictors and reading times. Furthermore, we observe consistent trends across experimental setups that random noise would not reproduce. Nevertheless, future research using randomly initialized model representations could further examine our findings. In addition, our evaluation relies on raw MSE, which is scale-dependent: although this allows meaningful comparisons among predictors within the same eye-tracking measure, the magnitude of MSE differences should not be compared directly across first fixation duration, gaze duration, and total reading time, since these measures lie on different numerical scales.

Finally, we use a language model's raw internal representations without experimenting with dimensionality reduction methods, except in the case of mixed-effects models ([§ 6.5](https://arxiv.org/html/2604.18712#S6.SS5)). Exploring lower-rank subspaces (e.g., via principal component analysis) could further isolate the specific features within the internal states that are responsible for the alignment with human processing effort. Relatedly, for multi-token units, we aggregate hidden states by mean-pooling; alternative aggregations (e.g., max-pooling, first- or last-token pooling, or concatenation) may yield different results, and a systematic comparison is left to future work.
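To make the aggregation step concrete, the following is a minimal sketch of the pooling options mentioned above. The `pool` helper and the toy dimensions are our own illustrative choices, not the paper's code; only mean-pooling is what this work actually uses.

```python
import torch

def pool(block: torch.Tensor, how: str = "mean") -> torch.Tensor:
    """Collapse a (n_subwords, d_model) block of hidden states for one word
    into a single word-level feature vector."""
    if how == "mean":            # the strategy used in this work
        return block.mean(dim=0)
    if how == "max":             # element-wise max over subwords
        return block.max(dim=0).values
    if how == "first":           # first subword's hidden state
        return block[0]
    if how == "last":            # last subword's hidden state
        return block[-1]
    raise ValueError(f"unknown pooling: {how}")

block = torch.randn(3, 768)      # e.g., a word tokenized into 3 subwords
features = {how: pool(block, how) for how in ("mean", "max", "first", "last")}
```

Concatenation, also mentioned above, would instead produce a vector whose dimensionality depends on the number of subwords, which is why it requires extra handling (e.g., padding) before regression.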

## Ethics Statement

We foresee no ethical problems with our work.

## Acknowledgments

We would like to thank Alex Warstadt for helpful discussions, and Taiga Someya and Andreas Opedal for pointing us to Kuribayashi et al. ([2025](https://arxiv.org/html/2604.18712#bib.bib22)). We also thank the anonymous reviewers for their useful comments, suggestions, and references to related work. Eleftheria Tsipidi was supported by SNSF grant number 204667. Karolina Stańczak was supported by the ETH AI Center postdoctoral fellowship. We disclose the use of generative AI tools for light editing and rephrasing; the original text was our own, and we carefully reviewed all suggested edits.

## References

- Guillaume Alain and Yoshua Bengio. 2017. [Understanding intermediate layers using linear classifier probes](https://arxiv.org/abs/1610.01644). In *International Conference on Learning Representations*.
- Catherine Arnett and Benjamin Bergen. 2025. [Why do language models perform worse for morphologically complex languages?](https://aclanthology.org/2025.coling-main.441/) In *Proceedings of the International Conference on Computational Linguistics*.
- Nora Belrose, Igor Ostrovsky, Lev McKinney, Zach Furman, Logan Smith, Danny Halawi, Stella Biderman, and Jacob Steinhardt. 2025. [Eliciting latent predictions from transformers with the tuned lens](https://arxiv.org/abs/2303.08112).
- Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, Sandhini Agarwal, Ariel Herbert-Voss, Gretchen Krueger, Tom Henighan, Rewon Child, Aditya Ramesh, Daniel Ziegler, Jeffrey Wu, Clemens Winter, and 12 others. 2020. [Language models are few-shot learners](https://proceedings.neurips.cc/paper_files/paper/2020/file/1457c0d6bfcb4967418bfb8ac142f64a-Paper.pdf). In *Advances in Neural Information Processing Systems*, volume 33.
- Raymond B. Cattell. 1966. [The scree test for the number of factors](https://doi.org/10.1207/s15327906mbr0102_10). *Multivariate Behavioral Research*, 1(2):245–276.
- Charlotte Caucheteux and Jean-Rémi King. 2022. [Brains and algorithms partially converge in natural language processing](https://doi.org/10.1038/s42003-022-03036-1). *Communications Biology*, 5(1).
- Charles Clifton, Adrian Staub, and Keith Rayner. 2007. [Eye movements in reading words and sentences](https://www.sciencedirect.com/science/article/pii/B9780080449807500173). In *Eye Movements*. Elsevier.
- Anne E. Cook and Wei Wei. 2019. [What can eye movements tell us about higher level comprehension?](https://doi.org/10.3390/vision3030045) *Vision*, 3(3).
- Juan Luis Gastaldi, John Terilla, Luca Malagutti, Brian DuSell, Tim Vieira, and Ryan Cotterell. 2025. [The foundations of tokenization: Statistical and computational concerns](https://openreview.net/forum?id=B5iOSxM2I0). In *The International Conference on Learning Representations*.
- Mario Giulianelli, Luca Malagutti, Juan Luis Gastaldi, Brian DuSell, Tim Vieira, and Ryan Cotterell. 2024a. [On the proper treatment of tokenization in psycholinguistics](https://aclanthology.org/2024.emnlp-main.1032/). In *Proceedings of the 2024 Conference on Empirical Methods in Natural Language Processing*.
- Mario Giulianelli, Andreas Opedal, and Ryan Cotterell. 2024b. [Generalized measures of anticipation and responsivity in online language processing](https://aclanthology.org/2024.findings-emnlp.682/). In *Findings of the Association for Computational Linguistics: EMNLP 2024*.
- Mario Giulianelli, Sarenne Wallbridge, Ryan Cotterell, and Raquel Fernández. 2026. [Incremental alternative sampling as a lens into the temporal and representational resolution of linguistic prediction](https://www.sciencedirect.com/science/article/pii/S0749596X25001081). *Journal of Memory and Language*, 148.
- Mario Giulianelli, Sarenne Wallbridge, and Raquel Fernández. 2023. [Information value: Measuring utterance predictability as distance from plausible alternatives](https://aclanthology.org/2023.emnlp-main.343/). In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*.
- Adam Goodkind and Klinton Bicknell. 2018. [Predictive power of word surprisal for reading times is a linear function of language model quality](https://aclanthology.org/W18-0102). In *Proceedings of the Workshop on Cognitive Modeling and Computational Linguistics*.
- John Hale. 2001. [A probabilistic Earley parser as a psycholinguistic model](https://aclanthology.org/N01-1021/). In *Proceedings of the Meeting of the North American Chapter of the Association for Computational Linguistics*.
- Alexander Immer, Lucas Torroba Hennigen, Vincent Fortuin, and Ryan Cotterell. 2022. [Probing as quantifying inductive bias](https://aclanthology.org/2022.acl-long.129/). In *Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*.
- Marcel A. Just and Patricia A. Carpenter. 1980. [A theory of reading: From eye fixations to comprehension](https://doi.org/10.1037/0033-295X.87.4.329). *Psychological Review*, 87(4).
- H. Toprak Kesgin, M. Kaan Yuce, Eren Dogan, M. Egemen Uzun, Atahan Uz, H. Emre Seyrek, Ahmed Zeer, and M. Fatih Amasyali. 2024. [Introducing cosmosGPT: Monolingual training for Turkish language models](https://doi.org/10.1109/INISTA62901.2024.10683863). In *International Conference on INnovations in Intelligent SysTems and Applications (INISTA)*.
- Samuel Kiegeland, Vésteinn Snæbjarnarson, Tim Vieira, and Ryan Cotterell. 2026. On the proper treatment of units in surprisal theory. In *Proceedings of the Annual Meeting of the Association for Computational Linguistics*.
- Junsol Kim, James Evans, and Aaron Schein. 2025. [Linear representations of political perspective emerge in large language models](https://arxiv.org/abs/2503.02080). In *The International Conference on Learning Representations*.
- Tatsuki Kuribayashi, Yohei Oseki, and Timothy Baldwin. 2024. [Psychometric predictive power of large language models](https://aclanthology.org/2024.findings-naacl.129/). In *Findings of the Association for Computational Linguistics: NAACL 2024*.
- Tatsuki Kuribayashi, Yohei Oseki, Souhaib Ben Taieb, Kentaro Inui, and Timothy Baldwin. 2025. [Large language models are human-like internally](https://aclanthology.org/2025.tacl-1.78/). *Transactions of the Association for Computational Linguistics*, 13.
- Roger Levy. 2008. [Expectation-based syntactic comprehension](https://www.sciencedirect.com/science/article/pii/S0010027707001436). *Cognition*, 106(3).
- Steven G. Luke and Kiel Christianson. 2018. [The Provo corpus: A large eye-tracking corpus with predictability norms](https://link.springer.com/article/10.3758/s13428-017-0908-4). *Behavior Research Methods*, 50.
- Clara Meister, Tiago Pimentel, Thomas Clark, Ryan Cotterell, and Roger Levy. 2022. [Analyzing wrap-up effects through an information-theoretic lens](https://aclanthology.org/2022.acl-short.3/). In *Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 2: Short Papers)*.
- Maxime Méloux, Silviu Maniu, François Portet, and Maxime Peyrard. 2025. [Everything, everywhere, all at once: Is mechanistic interpretability identifiable?](https://openreview.net/forum?id=5IWJBStfU7) In *The International Conference on Learning Representations*.
- nostalgebraist. 2020. [Interpreting GPT: The logit lens](https://www.lesswrong.com/posts/AcKRB8wDpdaN6v6ru/interpreting-gpt-the-logit-lens).
- Byung-Doh Oh and William Schuler. 2023. [Why does surprisal from larger transformer-based language models provide a poorer fit to human reading times?](https://doi.org/10.1162/tacl_a_00548) *Transactions of the Association for Computational Linguistics*, 11.
- Byung-Doh Oh and William Schuler. 2024. [Leading whitespaces of language models' subword vocabulary pose a confound for calculating word probabilities](https://aclanthology.org/2024.emnlp-main.202/). In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*.
- Andreas Opedal, Eleanor Chodroff, Ryan Cotterell, and Ethan Wilcox. 2024. [On the role of context in reading time prediction](https://aclanthology.org/2024.emnlp-main.179/). In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*.
- Tiago Pimentel and Clara Meister. 2024. [How to compute the probability of a word](https://aclanthology.org/2024.emnlp-main.1020/). In *Proceedings of the Conference on Empirical Methods in Natural Language Processing*.
- Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. 2019. [Language models are unsupervised multitask learners](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf).
- Keith Rayner. 1998. [Eye movements in reading and information processing: 20 years of research](http://dx.doi.org/10.1037/0033-2909.124.3.372). *Psychological Bulletin*, 124(3).
- Keith Rayner. 2009. [Eye movements and attention in reading, scene perception, and visual search](https://doi.org/10.1080/17470210902816461). *The Quarterly Journal of Experimental Psychology*, 62(8).
- Keith Rayner and Martin H. Fischer. 1996. [Mindless reading revisited: Eye movements during reading and scanning are different](https://doi.org/10.3758/BF03213106). *Perception & Psychophysics*, 58(5).
- Keith Rayner, Gretchen Kambe, and Susan A. Duffy. 2000. [The effect of clause wrap-up on eye movements during reading](https://doi.org/10.1080/713755934). *The Quarterly Journal of Experimental Psychology Section A*, 53(4).
- Francesco Ignazio Re, Andreas Opedal, Glib Manaiev, Mario Giulianelli, and Ryan Cotterell. 2025. [A spatio-temporal point process for fine-grained modeling of reading behavior](https://aclanthology.org/2025.acl-long.1474/). In *Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*.
- Martin Schrimpf, Idan Asher Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua B. Tenenbaum, and Evelina Fedorenko. 2021. [The neural architecture of language: Integrative modeling converges on predictive processing](https://www.pnas.org/doi/abs/10.1073/pnas.2105646118). *Proceedings of the National Academy of Sciences*, 118(45).
- Cory Shain, Clara Meister, Tiago Pimentel, Ryan Cotterell, and Roger Levy. 2024. [Large-scale evidence for logarithmic effects of word predictability on reading time](https://www.pnas.org/doi/abs/10.1073/pnas.2307876121). *Proceedings of the National Academy of Sciences*, 121(10).
- Oleh Shliazhko, Alena Fenogenova, Maria Tikhonova, Anastasia Kozlova, Vladislav Mikhailov, and Tatiana Shavrina. 2024. [mGPT: Few-shot learners go multilingual](https://aclanthology.org/2024.tacl-1.4/). *Transactions of the Association for Computational Linguistics*, 12.
- Noam Siegelman, Sascha Schroeder, Cengiz Acartürk, Hee-Don Ahn, Svetlana Alexeeva, Simona Amenta, Raymond Bertram, Rolando Bonandrini, Marc Brysbaert, and Daria Chernova. 2022. [Expanding horizons of cross-linguistic research on reading: The Multilingual Eye-movement Corpus (MECO)](https://link.springer.com/article/10.3758/s13428-021-01772-6). *Behavior Research Methods*, 54(6).
- Nathaniel J. Smith and Roger Levy. 2013. [The effect of word predictability on reading time is logarithmic](https://www.sciencedirect.com/science/article/pii/S0010027713000413). *Cognition*, 128(3).
- Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. [Attention is all you need](https://proceedings.neurips.cc/paper/2017/hash/3f5ee243547dee91fbd053c1c4a845aa-Abstract.html). In *Advances in Neural Information Processing Systems*, volume 30.
- Tim Vieira, Benjamin Lebrun, Mario Giulianelli, Juan Luis Gastaldi, Brian DuSell, John Terilla, Timothy J. O'Donnell, and Ryan Cotterell. 2025. [From language models over tokens to language models over characters](https://proceedings.mlr.press/v267/vieira25a.html). In *Proceedings of the International Conference on Machine Learning*, volume 267.
- Chris Wendler, Veniamin Veselovsky, Giovanni Monea, and Robert West. 2024. [Do Llamas work in English? On the latent language of multilingual transformers](https://aclanthology.org/2024.acl-long.820/). In *Proceedings of the Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)*.
- Jennifer C. White, Tiago Pimentel, Naomi Saphra, and Ryan Cotterell. 2021. [A non-linear structural probe](https://aclanthology.org/2021.naacl-main.12/). In *Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies*.
- Ethan G. Wilcox, Tiago Pimentel, Clara Meister, Ryan Cotterell, and Roger P. Levy. 2023. [Testing the predictions of surprisal theory in 11 languages](https://aclanthology.org/2023.tacl-1.82/). *Transactions of the Association for Computational Linguistics*, 11.
- Ethan Gotlieb Wilcox, Jon Gauthier, Jennifer Hu, Peng Qian, and Roger Levy. 2020. [On the predictive power of neural language models for human real-time comprehension behavior](https://arxiv.org/abs/2006.01912). In *Proceedings of the Cognitive Science Society*.

## Appendix A Reproducibility

##### Data Sources.

Our experiments use the Provo eye-tracking corpus (Luke and Christianson, [2018](https://arxiv.org/html/2604.18712#bib.bib24)) and the Multilingual Eye-movement Corpus (MECO; Siegelman et al., [2022](https://arxiv.org/html/2604.18712#bib.bib41)).

##### Compute.

To estimate our predictors (surprisal, representations, information value, and logit-lens surprisal), we used an RTX 2080 Ti GPU with 11GB of VRAM for roughly 30 hours. For our predictive modeling experiments, we used the same GPU, reserving between 1 and 8GB of VRAM depending on the feature setting (with mGPT representations requiring the most resources), for approximately two months of compute time in total.

##### Predictive Modeling.

We implemented our linear modeling using the statsmodels package ([https://www.statsmodels.org/stable/index.html](https://www.statsmodels.org/stable/index.html)). The tuning hyperparameters are included in the supplementary material.
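As an illustration of this setup, here is a minimal sketch of ridge-style regularized regression with statsmodels. The feature matrix, target, and `alpha` value are placeholders of our own, since the actual tuned hyperparameters are in the supplementary material rather than reproduced here.

```python
import numpy as np
import statsmodels.api as sm

# Stand-ins for a feature matrix X (e.g., baseline predictors plus layer-l
# representations) and a vector y of per-word reading times.
rng = np.random.default_rng(0)
X = sm.add_constant(rng.normal(size=(500, 10)))
y = rng.normal(size=500)

# Elastic net with L1_wt=0.0 reduces to a pure ridge (L2) penalty; alpha is
# the regularization strength one would tune by cross-validation.
result = sm.OLS(y, X).fit_regularized(alpha=0.1, L1_wt=0.0)

# In-sample MSE from the fitted coefficients.
mse = np.mean((y - X @ result.params) ** 2)
```

In practice the alpha would be selected per feature setting, which matters most for the high-dimensional representation features.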

## Appendix B Results for mGPT—Combined Settings

![Refer to caption](https://arxiv.org/html/2604.18712v1/x3.png)

Figure 3: MSE for baseline, surprisal, and combined settings (representations with surprisal, information value, and logit-lens surprisal) on the Provo and MECO data across the 24 layers of mGPT and eye-tracking measures.

Table 2: $\Delta_{\text{MSE}}$ of ten-fold cross-validation for models trained on baseline features and mGPT-derived surprisal and representations, as well as combined settings: representations + surprisal (repr+$s$), representations + information value (repr+$v$), and representations + logit-lens surprisal (repr+$s^{\textsc{ll}}$). Experiments were conducted on the Provo and MECO data across the 24 layers of mGPT and eye-tracking measures. For each measure, we report the lowest MSE over layers and the corresponding layer index $\ell$. Bold indicates the best condition per row. Asterisks (*) denote models that significantly outperform the respective models trained on permuted reading times, according to a one-sided paired $t$-test ($\alpha = 0.001$). Similarly, bullets (∙) indicate significance over the baseline. In combined settings, double daggers (‡) indicate significance over representation-trained models. None of the models in the combined settings were significantly better than their respective scalar predictors, e.g., repr+$s$ over surprisal.
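For readers reproducing the significance machinery in this caption, below is a hedged sketch of the one-sided paired $t$-test over cross-validation folds. The MSE values are invented placeholders; only the test itself mirrors the procedure described above.

```python
import numpy as np
from scipy.stats import ttest_rel

# Placeholder per-fold MSEs for a candidate model and a reference model
# (e.g., the same model trained on permuted reading times).
candidate_mse = np.array([4100.0, 4050.0, 4200.0, 3990.0, 4120.0,
                          4080.0, 4150.0, 4010.0, 4060.0, 4090.0])
reference_mse = candidate_mse + np.array([60.0, 90.0, 20.0, 110.0, 40.0,
                                          75.0, 30.0, 95.0, 55.0, 80.0])

# One-sided paired t-test: is the candidate's MSE significantly lower?
t, p = ttest_rel(candidate_mse, reference_mse, alternative="less")
significant = p < 0.001   # the alpha level used throughout the paper
```

The pairing is over folds, so both models must be evaluated on identical train/test splits for the test to be valid.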
## Appendix C Results for Monolingual Models

### C.1 Individual Predictors

![Refer to caption](https://arxiv.org/html/2604.18712v1/x4.png)

Figure 4: MSE for baseline, surprisal, representations, information value, and logit-lens surprisal on the Provo and English MECO data with GPT-2 and the Turkish MECO data with cosmosGPT, across the 12 layers of each language model and eye-tracking measures.

Table 3: $\Delta_{\text{MSE}}$ (baseline − target) of ten-fold cross-validation for models trained on surprisal, representations ($\mathbf{h}$), information value ($v$), and logit-lens surprisal ($s^{\textsc{ll}}$) derived from GPT-2 for the Provo and English MECO data, and from cosmosGPT for the Turkish MECO data, across the 12 embedding layers of each language model and eye-tracking measures. For each measure, we report the lowest MSE over layers and the corresponding layer index $\ell$. Bold indicates the best condition per row. Asterisks (*) denote models that significantly outperform the respective models trained on permuted reading times, according to a one-sided paired $t$-test ($\alpha = 0.001$). Similarly, bullets (∙) indicate significance over the baseline.
### C.2 Combined Settings

![Refer to caption](https://arxiv.org/html/2604.18712v1/x5.png)

Figure 5: MSE for baseline, surprisal, and combined settings (representations with surprisal, information value, and logit-lens surprisal) on the Provo and English MECO data with GPT-2 and the Turkish MECO data with cosmosGPT, across the 12 layers of each language model and eye-tracking measures.

Table 4: As in [Table 3](https://arxiv.org/html/2604.18712#A3.T3), except we now consider the $\Delta_{\text{MSE}}$ of surprisal, representations, and combined settings: representations + surprisal (repr+$s$), representations + information value (repr+$v$), and representations + logit-lens surprisal (repr+$s^{\textsc{ll}}$). In combined settings, double daggers (‡) indicate statistical significance over models trained on representations. Note that all of these predictors performed worse than the baseline for first fixation duration on Turkish.

## Appendix D Qualitative Example

![Refer to caption](https://arxiv.org/html/2604.18712v1/x6.png)

Figure 6: Gaze duration and its prediction by different mGPT-derived feature settings. We show the same excerpt from a document in the MECO dataset in different languages. The y-axis represents reading time in milliseconds. True gaze duration is shown as a black line. The purple line shows the prediction of a linear model trained on the $\ell$-th layer representations and standard surprisal; the chosen layer was the best for this feature setting and eye-tracking measure per [Table 2](https://arxiv.org/html/2604.18712#A2.T2). Note that the Hebrew example appears in reverse order, as Hebrew is read and written right-to-left, and MECO indexes words by the order in which they are read.
## Appendix E Permuted Results

### E.1 mGPT—Permuted

#### E.1.1 Individual Predictors

![Refer to caption](https://arxiv.org/html/2604.18712v1/x7.png)

Figure 7: MSE for baseline, surprisal, representations, information value, and logit-lens surprisal on the Provo and MECO data across the 24 layers of mGPT and eye-tracking measures, with reading times randomly permuted during training.

Table 5: $\Delta_{\text{MSE}}$ (baseline − target) of ten-fold cross-validation for models trained on baseline features and mGPT-derived surprisal, representations ($\mathbf{h}$), information value ($v$), and logit-lens surprisal ($s^{\textsc{ll}}$), with reading times randomly permuted during training, on the Provo and MECO data across the 24 layers of mGPT. For each eye-tracking measure, we report the lowest MSE over layers and the corresponding layer index $\ell$. Bold indicates the best condition per row. Bullets (∙) denote models that significantly outperform the baseline, according to a one-sided paired $t$-test ($\alpha = 0.001$).
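A minimal sketch of the permutation control used throughout this appendix is shown below; the synthetic arrays and the Ridge probe are our own stand-ins, chosen only to make the logic of the control explicit.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 768))   # stand-in for layer representations
y = rng.normal(size=500)          # stand-in for reading times

# Shuffling the targets breaks any real feature-target association, so the
# permuted run measures how well the probe can fit chance structure alone.
y_perm = rng.permutation(y)

mse_true = -cross_val_score(Ridge(1.0), X, y, cv=10,
                            scoring="neg_mean_squared_error").mean()
mse_perm = -cross_val_score(Ridge(1.0), X, y_perm, cv=10,
                            scoring="neg_mean_squared_error").mean()
# A genuine predictor should beat its permuted counterpart by a clear margin.
```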
#### E.1.2 Combined Settings

![Refer to caption](https://arxiv.org/html/2604.18712v1/x8.png)

Figure 8: MSE for baseline, surprisal, and combined settings (representations with surprisal, information value, and logit-lens surprisal) on the Provo and MECO data across the 24 layers of mGPT and eye-tracking measures, with reading times randomly permuted during training.

| Measure | Surprisal | Best $\mathbf{h}$ ($\ell$) | Best repr+$s$ ($\ell$) | Best repr+$v$ ($\ell$) | Best repr+$s^{\textsc{ll}}$ ($\ell$) |
|---|---|---|---|---|---|
| **Provo—English** | | | | | |
| FFD | -0.79 ±2.96 | -11.23 ±16.03∙ (2) | -11.53 ±16.29 (2) | -11.23 ±16.03∙ (2) | -10.44 ±15.46 (2) |
| GD | -2.64 ±10.05 | 2.87 ±158.05 (24) | 22.04 ±87.40 (24) | 117.57 ±98.88 (24) | 23.41 ±86.33 (24) |
| TRT | -84.88 ±131.25∙ | 869.44 ±286.65 (4) | 399.78 ±508.09‡ (24) | 541.40 ±287.46‡ (3) | 438.03 ±572.60‡ (24) |
| **MECO—English** | | | | | |
| FFD | 0.47 ±5.30 | 1.30 ±31.42 (2) | 4.55 ±26.14 (2) | 1.30 ±31.42 (2) | 3.09 ±17.74‡ (1) |
| GD | -9.90 ±31.13∙ | 17.19 ±46.97 (1) | 5.79 ±50.99 (1) | 17.19 ±46.97 (1) | -38.42 ±58.76 (1) |
| TRT | 81.39 ±185.30 | 284.92 ±441.58 (24) | 379.85 ±481.71 (24) | 284.92 ±441.58 (24) | 289.60 ±447.32 (24) |
| **MECO—Greek** | | | | | |
| FFD | 4.68 ±4.57 | 29.90 ±31.19 (1) | 29.76 ±29.49 (1) | 29.90 ±31.19 (1) | 23.61 ±29.14‡ (2) |
| GD | 120.19 ±119.49 | 438.84 ±475.50 (24) | 389.14 ±380.88 (13) | 438.84 ±475.50 (24) | 427.27 ±323.76 (9) |
| TRT | 419.74 ±392.78 | 540.16 ±1144.00 (1) | 576.88 ±1128.93 (1) | 540.16 ±1144.00 (1) | 333.44 ±648.44 (1) |
| **MECO—Hebrew** | | | | | |
| FFD | 4.84 ±17.24 | -8.71 ±27.72 (3) | -10.58 ±29.66 (1) | -8.71 ±27.72 (3) | -10.72 ±32.18 (3) |
| GD | 13.46 ±28.99 | -41.55 ±390.14 (17) | -68.35 ±181.72‡ (2) | -41.55 ±390.14 (17) | -70.53 ±166.65‡ (2) |
| TRT | 421.20 ±992.75 | -12.78 ±1579.43 (12) | 99.43 ±1393.98 (11) | -12.78 ±1579.43 (12) | -38.39 ±1493.58‡ (11) |
| **MECO—Russian** | | | | | |
| FFD | -3.28 ±7.87 | 16.77 ±23.60 (1) | 11.91 ±25.19 (1) | 16.77 ±23.60 (1) | 15.07 ±22.85 (1) |
| GD | -1.69 ±74.90 | 47.14 ±159.98 (24) | 104.39 ±103.45 (2) | 47.14 ±159.98 (24) | 124.32 ±196.86 (9) |
| TRT | -111.87 ±260.18∙ | -277.17 ±724.43 (1) | -395.25 ±1043.14∙ (24) | -277.17 ±724.43 (1) | -347.73 ±1064.00∙ (24) |
| **MECO—Turkish** | | | | | |
| FFD | 2.90 ±4.52 | 45.91 ±30.86 (1) | 38.61 ±32.40 (2) | 45.91 ±30.86 (1) | 39.56 ±40.33 (2) |
| GD | 95.13 ±99.69 | 35.26 ±407.39 (24) | 51.21 ±413.28 (24) | 35.26 ±407.39 (24) | 35.81 ±408.01 (24) |
| TRT | -469.13 ±886.55 | 2484.25 ±2163.78 (6) | 2291.41 ±2137.59 (1) | 2484.25 ±2163.78 (6) | 2468.11 ±2508.89‡ (11) |

Table 6: $\Delta_{\text{MSE}}$ (baseline − target) of ten-fold cross-validation for models trained on baseline features and mGPT-derived surprisal, as well as combined settings: representations + surprisal (repr+$s$), representations + information value (repr+$v$), and representations + logit-lens surprisal (repr+$s^{\textsc{ll}}$), with reading times randomly permuted during training, on the Provo and MECO data across the 24 layers of mGPT. Each cell shows the $\Delta_{\text{MSE}}$ with its spread across folds (±); for each eye-tracking measure, we report the lowest MSE over layers and the corresponding layer index $\ell$ in parentheses. Bullets (∙) denote models that significantly outperform the baseline, according to a one-sided paired $t$-test ($\alpha = 0.001$). In combined settings, double daggers (‡) indicate statistical significance over models trained on representations.

### E.2 Monolingual Models—Permuted

#### E.2.1 Individual Predictors

![Refer to caption](https://arxiv.org/html/2604.18712v1/x9.png)

Figure 9: MSE for baseline, surprisal, representations, information value, and logit-lens surprisal on the Provo and English MECO data with GPT-2 and the Turkish MECO data with cosmosGPT, across the 12 layers of each language model and eye-tracking measures, with reading times randomly permuted during training.

Table 7: $\Delta_{\text{MSE}}$ (baseline − target) of ten-fold cross-validation for models trained on baseline features and surprisal, representations ($\mathbf{h}$), information value ($v$), and logit-lens surprisal ($s^{\textsc{ll}}$) derived from GPT-2 for the Provo and English MECO data, and from cosmosGPT for the Turkish MECO data, with reading times randomly permuted during training. For each measure, we report the lowest MSE over layers and the corresponding layer index $\ell$. Bold indicates the best condition per row. Bullets (∙) denote models that significantly outperform the baseline, according to a one-sided paired $t$-test ($\alpha = 0.001$).
#### E.2.2 Combined Settings

![Refer to caption](https://arxiv.org/html/2604.18712v1/x10.png)

Figure 10: MSE for baseline, surprisal, and combined settings (representations with surprisal, information value, and logit-lens surprisal) on the Provo and English MECO data with GPT-2 and the Turkish MECO data with cosmosGPT, across the 12 layers of each language model and eye-tracking measures, with reading times randomly permuted during training.

Table 8: $\Delta_{\text{MSE}}$ of ten-fold cross-validation for models trained on baseline features and surprisal, as well as combined settings: representations + surprisal (repr+$s$), representations + information value (repr+$v$), and representations + logit-lens surprisal (repr+$s^{\textsc{ll}}$) derived from GPT-2 for the Provo and English MECO data, and from cosmosGPT for the Turkish MECO data, with reading times randomly permuted during training. For each measure, we report the lowest MSE over layers and the corresponding layer index $\ell$. Bold indicates the best condition per row. Bullets (∙) denote models that significantly outperform the baseline, according to a one-sided paired $t$-test ($\alpha = 0.001$). Similarly, for combined settings, double daggers (‡) indicate significance over representation-trained models.

## Appendix F Linear Mixed-Effects Models

Table 9: $\Delta_{\text{MSE}}$ of ten-fold cross-validation using linear mixed-effects models (LMMs) on per-participant MECO reading times with mGPT-derived surprisal, representations ($\mathbf{h}$; PCA with $K=25$ components), logit-lens surprisal ($s^{\textsc{ll}}$), and information value ($v$). Unlike the main analyses ([Table 1](https://arxiv.org/html/2604.18712#S6.T1)), which use regularized regression on reading times averaged across participants, here we retain individual observations and fit LMMs with random intercepts for subjects and documents, $r_{s,i} = \mathbf{x}_i^{\top}\boldsymbol{\beta} + b_s + u_i + \varepsilon_{s,i}$, where $b_s \sim \mathcal{N}(0, \sigma^2_{\text{subj}})$ and $u_i \sim \mathcal{N}(0, \sigma^2_{\text{item}})$. Models are fit by maximum likelihood with lme4; test-set predictions use fixed effects only. For each predictor type, we report the best-performing layer and its index $\ell$. Bold indicates the best condition per row. Asterisks (*) denote models that significantly outperform the respective models trained on permuted reading times, according to a one-sided paired $t$-test ($\alpha = 0.001$). Bullets (∙) indicate significance over the baseline. Note that the representation results for Russian exhibit high variance across folds, likely due to overfitting of the 25 PCA components on the smaller Russian dataset.
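The caption's model can be approximated in Python as follows. This is a hedged statsmodels analogue of the lme4 specification above, not the authors' code: the DataFrame is synthetic, only a single illustrative predictor is included, and the crossed random intercept for items is expressed as a variance component, which is how statsmodels approximates crossed effects.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic long-format data: one row per (subject, item) observation.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "rt": rng.normal(250.0, 50.0, n),      # per-participant reading times
    "surprisal": rng.normal(8.0, 2.0, n),  # illustrative fixed-effect predictor
    "subject": rng.integers(0, 20, n).astype(str),
    "item": rng.integers(0, 40, n).astype(str),
})

# Random intercept for subjects via `groups`; random intercept for items as a
# variance component. In lme4 this would be rt ~ surprisal + (1|subject) + (1|item).
model = smf.mixedlm("rt ~ surprisal", data=df, groups="subject",
                    vc_formula={"item": "0 + C(item)"})
result = model.fit(method="lbfgs")
print(result.summary())
```

As in the caption, held-out predictions would use the fixed effects only (i.e., the fitted $\boldsymbol{\beta}$), since the random intercepts of unseen subjects are unknown at test time.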
