theoretical-ml

Tag

Cards List
#theoretical-ml

Optimistic Dual Averaging Unifies Modern Optimizers

arXiv cs.LG · 2d ago

This paper introduces SODA, a generalization of Optimistic Dual Averaging that unifies various modern optimizers like Muon and Lion. It proposes a practical wrapper that improves performance across different scales without requiring additional hyperparameter tuning for weight decay.
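For context, the dual-averaging template that methods like this build on keeps a running sum of gradients in a dual variable and maps that sum back to parameter space at each step. A minimal sketch of plain (unregularized) dual averaging, not the paper's SODA wrapper; the step-size schedule is an illustrative choice:

```python
import numpy as np

def dual_averaging(grad_fn, x0, steps=200, lr=0.1):
    """Plain dual averaging sketch (illustrative, not SODA).

    z accumulates all past gradients; the iterate is a scaled
    linear map of -z back into parameter space.
    """
    x = np.asarray(x0, dtype=float)
    z = np.zeros_like(x)
    for t in range(1, steps + 1):
        z += grad_fn(x)            # accumulate gradient at current iterate
        x = -lr * z / np.sqrt(t)   # map the dual sum back to primal space
    return x
```

On a simple quadratic such as f(x) = x^2, the iterates drift toward the minimizer as the accumulated gradients balance out.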


FragileFlow: Spectral Control of Correct-but-Fragile Predictions for Foundation Model Robustness

arXiv cs.CL · 3d ago

This paper introduces FragileFlow, a plug-in regularizer that improves the robustness of LLMs and VLMs by controlling 'correct-but-fragile' predictions through spectral analysis and PAC-Bayes bounds.


Geometric Factual Recall in Transformers

Hugging Face Daily Papers · 3d ago

This paper introduces a theoretical framework for geometric factual recall in transformers, demonstrating that embeddings can encode relational structure via linear superpositions while MLPs act as selectors. It provides empirical and theoretical evidence that this mechanism allows for efficient memorization of facts and multi-hop queries.
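The "linear superposition" idea can be illustrated with a classical linear associative memory: facts are stored as summed outer products of object and relation vectors, and a linear readout recovers an object when the relation keys are near-orthogonal. A toy sketch under those assumptions, not the paper's transformer construction; the dimension and fact count are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
d = 512  # embedding dimension (illustrative choice)

# five (relation, object) facts; random relation keys are
# near-orthogonal with high probability at large d
relations = rng.standard_normal((5, d)) / np.sqrt(d)
objects = rng.standard_normal((5, d))

# a single entity matrix stores all five facts as a linear superposition
entity = sum(np.outer(obj, rel) for obj, rel in zip(objects, relations))

def recall(entity, relation):
    """Read out the object bound to `relation` with one linear map."""
    return entity @ relation

readout = recall(entity, relations[0])
cos = readout @ objects[0] / (
    np.linalg.norm(readout) * np.linalg.norm(objects[0])
)
```

Because the cross-terms between near-orthogonal keys are small, the readout for relation 0 is dominated by the correct object vector.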


On the Divergence of Differential Temporal Difference Learning without Local Clocks

arXiv cs.LG · 4d ago

This paper addresses an open problem in reinforcement learning by providing a counterexample showing that differential temporal difference learning can diverge when using a global clock, despite converging with a local clock, in average-reward settings.
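As background on the algorithm family involved: tabular differential TD(0) drives both a value table and a reward-rate estimate from the same TD error. A hedged sketch on a deterministic two-state loop (illustrative of the update rule only; it does not reproduce the paper's global-clock counterexample):

```python
def differential_td(transitions, alpha=0.05, eta=1.0, steps=5000):
    """Tabular differential TD(0) sketch (average-reward setting).

    transitions: function s -> (s_next, reward).
    Value table V and reward-rate estimate rho are both updated
    from the shared TD error delta.
    """
    V = {}
    rho = 0.0
    s = 0
    for _ in range(steps):
        s_next, r = transitions(s)
        delta = r - rho + V.get(s_next, 0.0) - V.get(s, 0.0)
        V[s] = V.get(s, 0.0) + alpha * delta
        rho += eta * alpha * delta  # reward-rate estimate shares the step
        s = s_next
    return V, rho
```

On a two-state cycle that pays reward 1 on one transition and 0 on the other, the reward-rate estimate settles at the true average reward of 0.5.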


A Finite-Iteration Theory for Asynchronous Categorical Distributional Temporal-Difference Learning

arXiv cs.LG · 4d ago

This paper presents a finite-iteration theory for asynchronous categorical distributional temporal-difference learning, bridging the gap between existing theoretical frameworks and practical online implementations.
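The core operator in categorical distributional TD is the projection of the shifted-and-scaled target distribution back onto a fixed support of atoms. A minimal sketch of that projection step (the generic C51-style operator, not the paper's asynchronous analysis):

```python
import numpy as np

def categorical_projection(atoms, probs, reward, gamma):
    """Project the target distribution (reward + gamma * atoms) onto
    the fixed, evenly spaced support `atoms`.

    Each target atom's probability mass is split linearly between the
    two nearest support atoms.
    """
    target = np.clip(reward + gamma * atoms, atoms[0], atoms[-1])
    dz = atoms[1] - atoms[0]
    out = np.zeros_like(probs)
    for tz, p in zip(target, probs):
        b = (tz - atoms[0]) / dz          # fractional index on the support
        lo, hi = int(np.floor(b)), int(np.ceil(b))
        if lo == hi:
            out[lo] += p                  # target lands exactly on an atom
        else:
            out[lo] += p * (hi - b)       # split mass between neighbours
            out[hi] += p * (b - lo)
    return out
```

For example, a point mass at value 0 with reward 5 and gamma 1 projects to a point mass on the atom at 5, and total probability mass is always conserved.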


A Rod Flow Model for Adam at the Edge of Stability

arXiv cs.LG · 4d ago

This paper introduces a 'rod flow' model for Adam and other adaptive optimizers to better analyze their behavior at the edge of stability. It extends continuous-time modeling to momentum methods, showing improved accuracy in tracking discrete iterates compared to stable flow models.


A Theory of Online Learning with Autoregressive Chain-of-Thought Reasoning

arXiv cs.LG · 4d ago

This paper develops a theoretical framework for online learning with autoregressive chain-of-thought reasoning, analyzing mistake bounds under end-to-end and trajectory supervision models.


A Closed-Form Upper Bound for Admissible Learning-Rate Steps in Belief-Space Dynamics

arXiv cs.LG · 4d ago

This paper derives a closed-form upper bound for admissible learning-rate steps in belief-space dynamics using KL divergence and Bregman geometry, focusing on cross-entropy classification.


@probnstat: One theorem every ML engineer should know: The Johnson–Lindenstrauss Lemma. It states that high-dimensional data can be…

X AI KOLs Following · 5d ago

This post highlights the Johnson–Lindenstrauss Lemma, explaining its importance for ML engineers in understanding dimensionality reduction, random projections, and embedding efficiency.
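The lemma is easy to demonstrate: a scaled Gaussian random projection into k = O(log n / eps^2) dimensions preserves all pairwise distances up to a (1 ± eps) factor with high probability. A minimal sketch (dimension choices are illustrative):

```python
import numpy as np

def random_projection(X, k, seed=0):
    """Project the rows of X into k dimensions with a scaled
    Gaussian random matrix (Johnson-Lindenstrauss style).

    The 1/sqrt(k) scaling makes projected squared norms unbiased
    estimates of the original squared norms.
    """
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    R = rng.standard_normal((d, k)) / np.sqrt(k)
    return X @ R
```

Projecting, say, 20 points from 1000 down to 300 dimensions leaves every pairwise distance within a modest multiplicative factor of its original value.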


Best Arm Identification in Generalized Linear Bandits via Hybrid Feedback

arXiv cs.AI · 2026-05-08

This paper introduces a hybrid Track-and-Stop algorithm for best arm identification in generalized linear bandits that unifies absolute and relative feedback. The authors propose a likelihood-ratio-based confidence sequence to adaptively allocate queries, demonstrating improved sample efficiency over baseline methods.


Imitation Learning: How well does it perform?

ML at Berkeley · 2021-04-28

This article analyzes a recent research paper that provides a taxonomical framework for imitation learning algorithms, categorizing them by moment matching techniques and analyzing their theoretical imitation gap bounds.
