This paper presents a switching-system theory for Q-learning with linear function approximation, using the joint spectral radius to analyze convergence and stability under deterministic, i.i.d., and Markovian observation models.
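To make the switching-system viewpoint concrete: a key quantity is the joint spectral radius (JSR) of the set of update matrices the algorithm switches between; if it is below 1, every switching sequence contracts. The sketch below is illustrative only (the matrices `A0`, `A1` are hypothetical, not taken from the paper) and computes the standard norm-based upper bound on the JSR, which tightens as the product length grows.

```python
from itertools import product as switch_sequences
import math

def matmul(A, B):
    # 2x2 matrix product, kept dependency-free for the sketch
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def fro(A):
    # Frobenius norm: submultiplicative, so it yields a valid JSR upper bound
    return math.sqrt(sum(x * x for row in A for x in row))

def jsr_upper_bound(mats, depth):
    # Bound: max over all switching sequences of length `depth`
    # of ||A_{i_depth} ... A_{i_1}||^(1/depth); converges to the JSR.
    best = 0.0
    for seq in switch_sequences(range(len(mats)), repeat=depth):
        P = [[1.0, 0.0], [0.0, 1.0]]  # identity
        for i in seq:
            P = matmul(P, mats[i])
        best = max(best, fro(P) ** (1.0 / depth))
    return best

# Two hypothetical update maps standing in for the matrices a
# Q-learning iterate switches between (assumed for illustration).
A0 = [[0.9, 0.0], [0.1, 0.8]]
A1 = [[0.7, 0.2], [0.0, 0.95]]

for k in (1, 4, 8):
    print(k, jsr_upper_bound([A0, A1], k))
```

As the product length increases, the bound decreases toward the true JSR; a bound below 1 certifies stability of every switching sequence, which is the kind of condition the switching-system analysis exploits.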
This paper addresses an open problem in reinforcement learning by providing a counterexample showing that, in average-reward settings, differential temporal difference learning can diverge when driven by a global clock, even though it converges with a local clock.