Learning from human preferences
Summary
OpenAI presents a method for training AI agents using human preference feedback, where an agent learns a reward function from human comparisons of behavior trajectories and uses reinforcement learning to optimize for the inferred goal. The approach demonstrates strong sample efficiency, requiring fewer than 1,000 bits of human feedback to train an agent to perform a backflip.
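At its core, the method fits a reward model to pairwise human comparisons of short trajectory segments and then trains the policy against that learned reward with standard reinforcement learning. The snippet below is a minimal sketch of the preference-fitting step only, assuming a Bradley-Terry style loss over summed segment rewards; the class name and tensor shapes are illustrative, not OpenAI's implementation.

```python
# Minimal sketch: fit a reward model on pairwise trajectory comparisons.
import torch
import torch.nn as nn

class RewardModel(nn.Module):
    """Maps a single observation-action pair to a scalar reward estimate."""
    def __init__(self, obs_dim: int, act_dim: int, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + act_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, obs, act):
        # obs: (batch, T, obs_dim), act: (batch, T, act_dim) -> (batch, T)
        return self.net(torch.cat([obs, act], dim=-1)).squeeze(-1)

def preference_loss(model, seg_a, seg_b, pref):
    """seg_a / seg_b: (obs, act) tensors for two trajectory segments.
    pref: float tensor of shape (batch,), 1.0 if the human preferred A, else 0.0.
    The probability that A is preferred follows a Bradley-Terry model over the
    summed predicted rewards of the two segments."""
    ret_a = model(*seg_a).sum(dim=-1)   # total predicted reward of segment A
    ret_b = model(*seg_b).sum(dim=-1)   # total predicted reward of segment B
    logits = ret_a - ret_b
    return nn.functional.binary_cross_entropy_with_logits(logits, pref)
```

The learned reward model then stands in for a hand-crafted reward signal inside an ordinary reinforcement learning loop.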
Similar Articles
Gathering human feedback
OpenAI releases RL-Teacher, an open-source tool for training AI systems through human feedback instead of hand-crafted reward functions, with applications to safe AI development and complex reinforcement learning problems.
Learning to summarize with human feedback
OpenAI demonstrates a technique for improving language model summarization by training a reward model on human preferences and fine-tuning models with reinforcement learning, achieving significant quality improvements that generalize across datasets. This work advances model alignment through human feedback at scale, with applications beyond summarization.
Less human AI agents, please
A blog post argues that current AI agents exhibit overly human-like flaws such as ignoring hard constraints, taking shortcuts, and reframing unilateral pivots as communication failures, while citing Anthropic research on how RLHF optimization can lead models to become sycophantic and sacrifice truthfulness.
Learning Montezuma’s Revenge from a single demonstration
OpenAI demonstrates a method for training a reinforcement learning agent to play Montezuma's Revenge from a single human demonstration, addressing the challenge of sparse rewards through curriculum learning and careful hyperparameter tuning. The approach achieves strong performance on the notoriously difficult Atari game while showing generalization limitations on other titles.
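The curriculum in that post works backwards through the demonstration: each episode starts from a saved game state along the demo, beginning near the end, and the start point is moved earlier once the agent can reliably finish the game from there. A minimal sketch of that idea, assuming an emulator that can be restored to saved snapshots; `demo_states`, `env.restore_state`, and `run_episode` are illustrative names supplied by the caller, not OpenAI's code.

```python
# Minimal sketch of a backward curriculum over demonstration states.
# demo_states: emulator snapshots recorded along the human demonstration.
# env.restore_state(s): resets the game to such a snapshot.
# run_episode(env): trains the agent for one episode, returns True on success.
def backward_curriculum(env, demo_states, run_episode,
                        success_threshold=0.2, episodes_per_stage=1000):
    start_idx = len(demo_states) - 1              # start just before the demo ends
    while start_idx >= 0:
        successes = 0
        for _ in range(episodes_per_stage):
            env.restore_state(demo_states[start_idx])   # begin mid-demonstration
            successes += run_episode(env)
        if successes / episodes_per_stage >= success_threshold:
            start_idx -= 1                        # stage mastered: move the start earlier
    return start_idx
```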
Reinforcement learning with prediction-based rewards
OpenAI introduces Random Network Distillation (RND), a prediction-based method for encouraging exploration in RL agents through curiosity, achieving human-level performance on Montezuma's Revenge without demonstrations or game state access.
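RND's exploration bonus is the error of a trained predictor network at matching the output of a fixed, randomly initialized target network on the current observation; novel states produce large errors and therefore large intrinsic rewards. A minimal sketch under simplifying assumptions (flat observation vectors; the actual method uses convolutional networks plus observation and reward normalization):

```python
# Minimal sketch of Random Network Distillation (RND).
import torch
import torch.nn as nn

class RND(nn.Module):
    def __init__(self, obs_dim: int, feat_dim: int = 128):
        super().__init__()
        # Fixed, randomly initialized target network: never trained.
        self.target = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                    nn.Linear(256, feat_dim))
        for p in self.target.parameters():
            p.requires_grad_(False)
        # Predictor network: trained to match the target's output.
        self.predictor = nn.Sequential(nn.Linear(obs_dim, 256), nn.ReLU(),
                                       nn.Linear(256, feat_dim))

    def intrinsic_reward(self, obs):
        """Prediction error, used as a curiosity bonus added to the
        extrinsic reward; it is high for states the predictor has not
        yet learned to model."""
        with torch.no_grad():
            target_feat = self.target(obs)
        pred_feat = self.predictor(obs)
        return (pred_feat - target_feat).pow(2).mean(dim=-1)
```

Training minimizes this same prediction error on observations the agent actually visits, so the bonus for frequently seen states shrinks over time while the fixed random target keeps the objective from collapsing.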