A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models

OpenAI Blog Papers

Summary

This paper establishes mathematical equivalences between generative adversarial networks (GANs), inverse reinforcement learning (IRL), and energy-based models (EBMs), demonstrating that certain IRL methods are equivalent to GANs with evaluable generator density. The work bridges three research communities to enable knowledge transfer for developing more stable and scalable algorithms.



# A connection between generative adversarial networks, inverse reinforcement learning, and energy-based models

Source: https://openai.com/index/a-connection-between-generative-adversarial-networks-inverse-reinforcement-learning-and-energy-based-models/

## Abstract

Generative adversarial networks (GANs) are a recently proposed class of generative models in which a generator is trained to optimize a cost function that is being simultaneously learned by a discriminator. While the idea of learning cost functions is relatively new to the field of generative modeling, learning costs has long been studied in control and reinforcement learning (RL), typically for imitation learning from demonstrations. In these fields, learning the cost function underlying observed behavior is known as inverse reinforcement learning (IRL) or inverse optimal control. While at first the connection between cost learning in RL and cost learning in generative modeling may appear superficial, we show in this paper that certain IRL methods are in fact mathematically equivalent to GANs. In particular, we demonstrate an equivalence between a sample-based algorithm for maximum entropy IRL and a GAN in which the generator's density can be evaluated and is provided as an additional input to the discriminator. Interestingly, maximum entropy IRL is a special case of an energy-based model. We discuss the interpretation of GANs as an algorithm for training energy-based models, and relate this interpretation to other recent work that seeks to connect GANs and EBMs.
By formally highlighting the connection between GANs, IRL, and EBMs, we hope that researchers in all three communities can better identify and apply transferable ideas from one domain to another, particularly for developing more stable and scalable algorithms: a major challenge in all three domains.
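The special discriminator described in the abstract — one that receives the generator's evaluable density as an extra input — can be illustrated numerically. The sketch below is a minimal, hypothetical rendering (function names and the toy values are our own, not from the paper): it assumes a cost function `c_theta(x)` playing the role of an energy, and a known generator log-density `log q(x)`, and forms the discriminator as `D(x) = exp(-c_theta(x)) / (exp(-c_theta(x)) + q(x))` in log space for numerical stability.

```python
import numpy as np

def discriminator(cost_theta, log_q):
    """Discriminator with the generator density as an extra input:
        D(x) = exp(-c_theta(x)) / (exp(-c_theta(x)) + q(x)).
    cost_theta: array of learned costs c_theta(x) (an unnormalized energy).
    log_q:      array of generator log-densities log q(x) at the same points.
    Computed via logaddexp to avoid overflow/underflow.
    """
    log_denominator = np.logaddexp(-cost_theta, log_q)
    return np.exp(-cost_theta - log_denominator)

# Toy check: when the energy-based model matches the generator,
# i.e. exp(-c_theta(x)) == q(x), the discriminator outputs 0.5
# everywhere -- it cannot tell data from samples, the GAN optimum.
c = np.array([1.0, 2.0, 0.5])
D = discriminator(c, log_q=-c)
print(D)  # [0.5 0.5 0.5]
```

Conversely, at points where the model assigns much lower energy than the generator density, `D` approaches 1, which is what drives the cost (and hence the energy-based model) toward the data distribution during training.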

Similar Articles

Implicit generation and generalization methods for energy-based models

OpenAI Blog

OpenAI presents implicit generation and generalization methods for energy-based models (EBMs) that use Langevin dynamics for iterative refinement to generate samples without explicit generator networks. The approach offers advantages including adaptive computation time, flexibility in learning disconnected data modes, and built-in compositionality through product of experts.

Energy Generative Modeling: A Lyapunov-based Energy Matching Perspective

arXiv cs.LG

This paper proposes a unified framework for energy-based generative models by casting density transport as a nonlinear control problem with KL divergence as a Lyapunov function. It derives finite-step stopping criteria and demonstrates how nonlinear control theory tools can be applied to static scalar energy models.

Learning concepts with energy functions

OpenAI Blog

OpenAI presents a technique using energy functions to enable agents to learn and extract abstract concepts (visual, spatial, temporal, social) from tasks, then transfer these concepts to solve related tasks in different domains without retraining. The approach uses energy-based models with neural networks to perform both generation and recognition of concepts.

Generative models

OpenAI Blog

OpenAI publishes an overview of generative models as an approach to developing machine understanding of the world, explaining how these models work by learning to generate data similar to their training sets and their potential applications across various domains.

AEM: Adaptive Entropy Modulation for Multi-Turn Agentic Reinforcement Learning

Hugging Face Daily Papers

This paper introduces AEM, a supervision-free method for agentic reinforcement learning that adapts entropy dynamics at the response level to improve exploration-exploitation trade-offs. It demonstrates performance gains on benchmarks like ALFWorld and SWE-bench by aligning uncertainty estimation with action granularity.