Variance reduction for policy gradient with action-dependent factorized baselines

OpenAI Blog · Papers

Summary

OpenAI researchers derive a bias-free action-dependent baseline for variance reduction in policy gradient methods, demonstrating improved learning efficiency on high-dimensional control tasks and extending the approach to multi-agent and partially observed environments.


# Variance reduction for policy gradient with action-dependent factorized baselines

Source: [https://openai.com/index/variance-reduction-for-policy-gradient-with-action-dependent-factorized-baselines/](https://openai.com/index/variance-reduction-for-policy-gradient-with-action-dependent-factorized-baselines/)

OpenAI

## Abstract

Policy gradient methods have enjoyed great success in deep reinforcement learning but suffer from high variance of gradient estimates. The high variance problem is particularly exacerbated in problems with long horizons or high-dimensional action spaces. To mitigate this issue, we derive a bias-free action-dependent baseline for variance reduction which fully exploits the structural form of the stochastic policy itself and does not make any additional assumptions about the MDP. We demonstrate and quantify the benefit of the action-dependent baseline through both theoretical analysis and numerical results, including an analysis of the suboptimality of the optimal state-dependent baseline. The result is a computationally efficient policy gradient algorithm, which scales to high-dimensional control problems, as demonstrated by a synthetic 2000-dimensional target matching task. Our experimental results indicate that action-dependent baselines allow for faster learning on standard reinforcement learning benchmarks and high-dimensional hand manipulation and synthetic tasks. Finally, we show that the general idea of including additional information in baselines for improved variance reduction can be extended to partially observed and multi-agent tasks.
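To make the abstract concrete, below is a minimal sketch of the idea behind a bias-free action-dependent baseline for a factorized policy. The notation (m action dimensions, per-dimension baselines b_i, and a_{-i} for all action dimensions other than the i-th) is standard policy gradient notation chosen here for illustration and is not necessarily the paper's exact formulation.

```latex
% Assume the stochastic policy factorizes across m action dimensions:
\[
  \pi_\theta(a \mid s) \;=\; \prod_{i=1}^{m} \pi_\theta(a_i \mid s)
\]
% The policy gradient then splits into per-dimension terms, each of
% which may use its own baseline b_i conditioned on the state and on
% the OTHER action dimensions a_{-i}:
\[
  \nabla_\theta J(\theta) \;=\; \mathbb{E}_{s,\,a}\!\left[
    \sum_{i=1}^{m} \nabla_\theta \log \pi_\theta(a_i \mid s)\,
    \bigl( Q^{\pi}(s, a) - b_i(s, a_{-i}) \bigr)
  \right]
\]
% Bias-freeness: since b_i does not depend on a_i, the baseline term
% vanishes in expectation, because the score function integrates to
% zero over a_i:
\[
  \mathbb{E}_{a_i \sim \pi_\theta(\cdot \mid s)}\!\left[
    \nabla_\theta \log \pi_\theta(a_i \mid s)\, b_i(s, a_{-i})
  \right]
  \;=\; b_i(s, a_{-i})\, \nabla_\theta \int \pi_\theta(a_i \mid s)\, da_i
  \;=\; b_i(s, a_{-i})\, \nabla_\theta 1 \;=\; 0
\]
```

Because each per-dimension baseline may condition on the remaining action dimensions, it can track more of the variation in Q than a state-only baseline b(s) can, which is the source of the additional variance reduction the abstract quantifies.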

Similar Articles

Evolved Policy Gradients

OpenAI Blog

OpenAI introduces Evolved Policy Gradients (EPG), a meta-learning approach that learns loss functions through evolution rather than learning policies directly, enabling RL agents to generalize better across tasks by leveraging prior experience similar to how humans transfer skills.

Better exploration with parameter noise

OpenAI Blog

OpenAI presents parameter noise, a technique that adds adaptive noise to neural network policy parameters rather than action spaces, enabling agents to learn tasks significantly faster than traditional action noise approaches. The method achieves 2x faster learning on HalfCheetah and represents a middle ground between evolution strategies and deep RL approaches like TRPO and DDPG.

OpenAI Baselines: ACKTR & A2C

OpenAI Blog

OpenAI releases ACKTR and A2C algorithms as part of its Baselines library, with ACKTR demonstrating improved sample complexity through natural gradient descent while maintaining computational efficiency comparable to first-order methods.

Equivalence between policy gradients and soft Q-learning

OpenAI Blog

OpenAI researchers demonstrate a precise mathematical equivalence between soft (entropy-regularized) Q-learning and policy gradient methods in reinforcement learning, providing theoretical insight into why Q-learning works despite inaccurate value estimates. They validate this equivalence empirically on the Atari benchmark and show a Q-learning method can closely match A3C's learning dynamics.

OpenAI Baselines: DQN

OpenAI Blog

OpenAI shares lessons learned while implementing DQN as part of their Baselines project, covering debugging tips such as greyscale calibration issues, hyperparameter tuning, and correct interpretation of the Huber Loss in the original Nature paper.