Third-person imitation learning
Summary
OpenAI presents a method for unsupervised third-person imitation learning that enables agents to learn from demonstrations taken from different viewpoints without explicit state correspondence, using domain confusion techniques to learn viewpoint-agnostic features.
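The domain-confusion idea can be illustrated with a minimal sketch: a binary classifier tries to predict which viewpoint (domain) a feature vector came from, and the feature extractor is trained adversarially to make that classifier fail, i.e. to maximize the classifier's loss (the gradient-reversal view). The function names and the linear classifier here are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def domain_confusion_losses(feats, domains, w, b):
    """Binary domain (viewpoint) classifier on extracted features.

    Returns (disc_loss, conf_loss): the domain classifier minimizes
    disc_loss; the feature extractor minimizes conf_loss = -disc_loss,
    which is smallest when the classifier is at chance, i.e. when the
    features carry no viewpoint information.
    """
    p = sigmoid(feats @ w + b)  # P(domain = 1 | features)
    eps = 1e-12
    disc_loss = -np.mean(
        domains * np.log(p + eps) + (1 - domains) * np.log(1 - p + eps)
    )
    conf_loss = -disc_loss  # gradient-reversal trick on the extractor side
    return disc_loss, conf_loss
```

When the classifier predicts 0.5 for every sample, disc_loss equals log 2 (chance level), which is exactly the fixed point the feature extractor pushes toward.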
Similar Articles
One-shot imitation learning
OpenAI proposes a meta-learning framework for one-shot imitation learning that enables robots to learn new tasks from a single demonstration and generalize to new instances without task-specific engineering. The approach uses soft attention mechanisms to allow neural networks trained on diverse task pairs to perform well on unseen tasks at test time.
Learning to model other minds
OpenAI and University of Oxford researchers present LOLA (Learning with Opponent-Learning Awareness), a reinforcement learning method that enables agents to model and account for the learning of other agents, discovering cooperative strategies in multi-agent games like the iterated prisoner's dilemma and coin game.
Robots that learn
OpenAI describes a robot learning system powered by two neural networks — a vision network trained on simulated images and an imitation network that generalizes task demonstrations to new configurations. The system is applied to block-stacking tasks, learning to infer and replicate task intent from paired demonstration examples.
Learning from human preferences
OpenAI presents a method for training AI agents using human preference feedback, where an agent learns a reward function from human comparisons of behavior trajectories and uses reinforcement learning to optimize for the inferred goals. The approach demonstrates strong sample efficiency, requiring fewer than 1,000 bits of human feedback to train an agent to perform a backflip.
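Learning a reward from pairwise comparisons is commonly modeled with a Bradley-Terry objective: the probability that a human prefers trajectory segment A over B is a logistic function of the difference in predicted total reward. The sketch below assumes per-step reward predictions are already available as arrays; it is a simplified illustration, not the paper's exact training setup.

```python
import numpy as np

def preference_loss(r_hat_a, r_hat_b, pref_a):
    """Cross-entropy between human labels and the Bradley-Terry model
    P(A preferred) = sigmoid(sum r_hat_a - sum r_hat_b).

    r_hat_a, r_hat_b: (batch, steps) predicted per-step rewards.
    pref_a: (batch,) label, 1.0 if the human preferred segment A.
    """
    logits = np.sum(r_hat_a, axis=-1) - np.sum(r_hat_b, axis=-1)
    p_a = 1.0 / (1.0 + np.exp(-logits))
    eps = 1e-12
    return -np.mean(
        pref_a * np.log(p_a + eps) + (1 - pref_a) * np.log(1 - p_a + eps)
    )
```

Minimizing this loss over many comparisons shapes the reward model, which a standard RL algorithm then optimizes against.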
Asymmetric actor critic for image-based robot learning
OpenAI proposes an asymmetric actor-critic method for robot learning that leverages full state observability in simulators to train policies that operate on partial observations (RGBD images), enabling effective sim-to-real transfer without real-world training data.
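The asymmetry in the last approach is simply that the two networks consume different inputs during simulated training: the critic is conditioned on the simulator's privileged full state, while the actor sees only the partial (image) observation it will have at deployment. A minimal structural sketch, with hypothetical dimensions and plain linear maps standing in for the actual networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: the actor sees a flattened RGBD observation;
# the critic sees the simulator's low-dimensional full state.
obs_dim, state_dim, act_dim = 64, 10, 4
W_actor = rng.normal(size=(obs_dim, act_dim)) * 0.01
W_critic = rng.normal(size=(state_dim, 1)) * 0.01

def actor(obs):
    """Policy conditioned only on the partial observation,
    so it transfers to the real robot where full state is unavailable."""
    return np.tanh(obs @ W_actor)

def critic(full_state):
    """Value estimate conditioned on privileged full state;
    used only during training in simulation."""
    return full_state @ W_critic
```

At deployment the critic is discarded and only the observation-conditioned actor runs on the robot.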