This paper derives tight theoretical bounds for human-AI teams, proving when confidence-based aggregation leads to complementarity and establishing impossibility results under specific error correlations.
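To make the aggregation idea concrete, here is a toy simulation (an illustrative sketch, not the paper's construction or bounds): two agents with independent errors and calibrated per-instance confidences, where the team follows whichever agent reports higher confidence. All names (`simulate`, `p_h`, `p_a`) are hypothetical.

```python
import random

random.seed(0)

def simulate(n=20000):
    """Confidence-based aggregation: on each instance, defer to whichever
    agent reports the higher (calibrated) confidence."""
    human_hits = ai_hits = team_hits = 0
    for _ in range(n):
        # Per-instance calibrated accuracies; errors are independent,
        # which is the regime where complementarity is achievable.
        p_h = random.uniform(0.5, 1.0)  # human's chance of being correct
        p_a = random.uniform(0.5, 1.0)  # AI's chance of being correct
        h_correct = random.random() < p_h
        a_correct = random.random() < p_a
        human_hits += h_correct
        ai_hits += a_correct
        team_hits += h_correct if p_h >= p_a else a_correct
    return human_hits / n, ai_hits / n, team_hits / n

h, a, t = simulate()  # team accuracy exceeds both individuals here
```

With independent errors the team tracks E[max(p_h, p_a)] and beats either agent alone; the paper's impossibility results concern the opposite regime, where strongly correlated errors make such gains unattainable.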
This arXiv preprint proposes a unified measure-theoretic framework for understanding diffusion, score-based, and flow matching generative models. It establishes connections between these methods via continuity/Fokker-Planck equations and analyzes their sampling schemes and theoretical guarantees.
This paper proposes a unified framework for energy-based generative models by casting density transport as a nonlinear control problem with KL divergence as a Lyapunov function. It derives finite-step stopping criteria and demonstrates how nonlinear control theory tools can be applied to static scalar energy models.
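A minimal sketch of the Lyapunov viewpoint (assumed for illustration, not the paper's actual transport dynamics): evolve a discrete distribution toward a target by a multiplicative mirror-descent step, under which the KL divergence to the target is non-increasing, and stop after finitely many steps once KL falls below a tolerance.

```python
import math

def kl(p, q):
    """KL(p || q) for discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def step(p, q, eta):
    # Multiplicative (mirror-descent) update toward q: geometric
    # interpolation p_i^(1-eta) * q_i^eta, renormalized. Along this
    # flow KL(p || q) decreases, so it acts as a Lyapunov function.
    new = [pi * (qi / pi) ** eta for pi, qi in zip(p, q)]
    z = sum(new)
    return [x / z for x in new]

p = [0.9, 0.1]      # current density
q = [0.5, 0.5]      # target density
tol = 1e-6          # finite-step stopping criterion on the Lyapunov value
steps = 0
while kl(p, q) > tol:
    p = step(p, q, eta=0.3)
    steps += 1
```

The monitored Lyapunov value doubles as the stopping rule: iteration halts as soon as the divergence certifies the transport is within tolerance of the target.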
This article explains how incorporating Shannon entropy into reinforcement learning objectives creates more robust agents capable of handling unexpected or adversarial changes in rewards and dynamics.
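The entropy-regularized objective can be sketched as follows (a minimal illustration; `soft_objective` and `alpha` are names chosen here, not from the article): expected reward plus an alpha-weighted Shannon entropy bonus, which rewards policies that keep their options open.

```python
import math

def softmax(logits):
    """Convert logits to a probability distribution over actions."""
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    z = sum(exps)
    return [e / z for e in exps]

def entropy(probs):
    """Shannon entropy of a discrete policy (in nats)."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def soft_objective(rewards, policies, alpha):
    # Maximum-entropy RL objective: per-step reward plus alpha times
    # the entropy of the policy at that step. Higher alpha favors
    # stochastic policies that remain robust to perturbed rewards.
    return sum(r + alpha * entropy(p) for r, p in zip(rewards, policies))

peaked = softmax([5.0, 0.0, 0.0])   # nearly deterministic policy
spread = softmax([0.1, 0.0, 0.0])   # near-uniform policy
```

With `alpha = 0` this reduces to the standard return; a positive `alpha` trades a little expected reward for the hedging behavior the article attributes to entropy-regularized agents.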
This paper extends the study of computational hardness in learning robust classifiers, showing that efficient robust classification can be impossible even when unbounded robust classifiers exist, and establishing a win-win result: either an efficient robust classifier can be learned, or new cryptographic primitives can be constructed.