Learning with opponent-learning awareness

OpenAI Blog · Papers

Summary

OpenAI presents LOLA (Learning with Opponent-Learning Awareness), a multi-agent reinforcement learning method in which each agent shapes the anticipated learning of the other agents. The approach demonstrates the emergence of tit-for-tat cooperation in the iterated prisoner's dilemma and convergence to the Nash equilibrium in repeated matching pennies.

# Learning with opponent-learning awareness

Source: [https://openai.com/index/learning-with-opponent-learning-awareness/](https://openai.com/index/learning-with-opponent-learning-awareness/)

## Abstract

Multi-agent settings are quickly gathering importance in machine learning. This includes a plethora of recent work on deep multi-agent reinforcement learning, but also can be extended to hierarchical RL, generative adversarial networks and decentralised optimisation. In all these settings the presence of multiple learning agents renders the training problem non-stationary and often leads to unstable training or undesired final results. We present Learning with Opponent-Learning Awareness (LOLA), a method in which each agent shapes the anticipated learning of the other agents in the environment. The LOLA learning rule includes a term that accounts for the impact of one agent's policy on the anticipated parameter update of the other agents. Results show that the encounter of two LOLA agents leads to the emergence of tit-for-tat and therefore cooperation in the iterated prisoners' dilemma, while independent learning does not. In this domain, LOLA also receives higher payouts compared to a naive learner, and is robust against exploitation by higher-order gradient-based methods. Applied to repeated matching pennies, LOLA agents converge to the Nash equilibrium. In a round-robin tournament we show that LOLA agents successfully shape the learning of a range of multi-agent learning algorithms from the literature, resulting in the highest average returns on the IPD. We also show that the LOLA update rule can be efficiently calculated using an extension of the policy gradient estimator, making the method suitable for model-free RL. The method thus scales to large parameter and input spaces and nonlinear function approximators. We apply LOLA to a grid-world task with an embedded social dilemma using recurrent policies and opponent modelling. By explicitly considering the learning of the other agent, LOLA agents learn to cooperate out of self-interest. The code is at [https://github.com/alshedivat/lola](https://github.com/alshedivat/lola).
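
To make the learning rule concrete: a naive learner updates its parameters θ1 by ascending its own value gradient ∇θ1 V1(θ1, θ2), whereas a first-order LOLA agent also differentiates its value through the opponent's anticipated naive update Δθ2 = η ∇θ2 V2(θ1, θ2), adding the shaping term η (∇θ2 V1) ∇θ1 ∇θ2 V2. The sketch below is not the authors' implementation (see the linked repository for that); it applies this rule with exact gradients to one-shot matching pennies, the simplest setting mentioned in the abstract. All function names and hyperparameter values here are illustrative choices.

```python
import jax

def payoff_1(theta1, theta2):
    """Exact expected payoff to agent 1 in one-shot matching pennies.

    Each agent plays "heads" with probability sigmoid(logit);
    agent 1 gets +1 on a match and -1 on a mismatch.
    """
    p, q = jax.nn.sigmoid(theta1), jax.nn.sigmoid(theta2)
    p_match = p * q + (1 - p) * (1 - q)
    return p_match - (1 - p_match)

def payoff_2(theta1, theta2):
    return -payoff_1(theta1, theta2)  # zero-sum game

def lola_step(theta1, theta2, alpha=0.3, eta=3.0):
    """One first-order LOLA update for both agents.

    Each gradient is the naive term plus a shaping term that
    differentiates the agent's own value through the opponent's
    anticipated naive update (eta times the opponent's gradient).
    """
    g1 = (jax.grad(payoff_1, argnums=0)(theta1, theta2)
          + eta * jax.grad(payoff_1, argnums=1)(theta1, theta2)
                * jax.grad(jax.grad(payoff_2, argnums=1), argnums=0)(theta1, theta2))
    g2 = (jax.grad(payoff_2, argnums=1)(theta1, theta2)
          + eta * jax.grad(payoff_2, argnums=0)(theta1, theta2)
                * jax.grad(jax.grad(payoff_1, argnums=0), argnums=1)(theta1, theta2))
    return theta1 + alpha * g1, theta2 + alpha * g2

theta1, theta2 = 1.0, -0.5  # start away from the mixed equilibrium
for _ in range(2000):
    theta1, theta2 = lola_step(theta1, theta2)

# Both "heads" probabilities should approach 0.5, the Nash equilibrium,
# whereas naive simultaneous gradient ascent cycles around it.
print(jax.nn.sigmoid(theta1), jax.nn.sigmoid(theta2))
```

In the sequential settings of the paper (the IPD and the grid-world social dilemma), these exact second-order derivatives are not available in closed form; that is where the paper's extension of the policy gradient estimator comes in, estimating the same cross term from sampled trajectories.
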

Similar Articles

Learning to model other minds

OpenAI Blog

OpenAI and University of Oxford researchers present LOLA (Learning with Opponent-Learning Awareness), a reinforcement learning method that enables agents to model and account for the learning of other agents, discovering cooperative strategies in multi-agent games like the iterated prisoner's dilemma and coin game.

Learning to cooperate, compete, and communicate

OpenAI Blog

OpenAI presents research on multi-agent reinforcement learning environments where agents learn to cooperate, compete, and communicate. The paper introduces MADDPG (multi-agent deep deterministic policy gradient), a centralized-critic approach that enables agents to learn collaborative strategies and communication protocols more effectively than traditional decentralized methods.

Learning policy representations in multiagent systems

OpenAI Blog

OpenAI researchers propose a general framework for learning representations of agent policies in multiagent systems using minimal interaction data, casting the problem as representation learning with applications to competitive control and cooperative communication environments.

Learning to communicate

OpenAI Blog

OpenAI researchers demonstrate that cooperative AI agents can develop their own grounded and compositional language through reinforcement learning in simple worlds. The agents learn to communicate by being rewarded for achieving goals that require coordination, creating shared symbolic languages to coordinate behavior.

Preference Estimation via Opponent Modeling in Multi-Agent Negotiation

arXiv cs.CL

This paper proposes a novel preference estimation method that integrates natural language information from LLMs into a structured Bayesian opponent modeling framework for multi-agent negotiation. The approach leverages LLMs to extract qualitative cues from utterances and convert them into probabilistic formats, demonstrating improved agreement rates and preference estimation accuracy on multi-party negotiation benchmarks.