Learning dexterity
Summary
OpenAI announces Dactyl, a system that learns robotic hand dexterity entirely in simulation with reinforcement learning. LSTM-based policies, trained with the Rapid PPO implementation across randomized physical environments, transfer to real-world manipulation tasks.
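The core idea behind the sim-to-real transfer described above is to resample physical parameters every episode, so a memory-based (e.g. LSTM) policy must adapt on the fly instead of overfitting one simulator. A minimal sketch of that loop, with hypothetical parameter names and ranges:

```python
import random

def sample_dynamics():
    """Sample physical parameters fresh each episode (hypothetical
    ranges) so a memory-based policy must infer them from experience
    rather than memorize one fixed simulator."""
    return {
        "object_mass": random.uniform(0.2, 0.6),   # kg
        "friction": random.uniform(0.5, 1.5),      # coefficient scale
        "actuator_delay": random.randint(0, 3),    # control steps
    }

def run_episode(policy_step, steps=50):
    """Run one episode under freshly randomized dynamics; the policy
    sees only observations, never the sampled parameters."""
    dynamics = sample_dynamics()
    state = 0.0
    for _ in range(steps):
        action = policy_step(state)
        # Toy transition: heavier objects respond less to the same action.
        state += action / dynamics["object_mass"]
    return state
```

Because the policy never observes the sampled parameters directly, recurrence (the LSTM) is what lets it identify the current environment from its own interaction history.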
Cached at: 04/20/26, 02:46 PM
Similar Articles
Robots that learn
OpenAI describes a robot learning system powered by two neural networks — a vision network trained on simulated images and an imitation network that generalizes task demonstrations to new configurations. The system is applied to block-stacking tasks, learning to infer and replicate task intent from paired demonstration examples.
Solving Rubik’s Cube with a robot hand
OpenAI developed a robot hand capable of solving a Rubik's Cube using a novel technique called Automatic Domain Randomization (ADR), which progressively increases simulation difficulty to enable effective transfer of learned behaviors from simulation to the real world.
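The progressive-difficulty idea behind ADR can be sketched simply: each randomized parameter starts at a single nominal value, and its sampling range is widened whenever the policy performs well on the current range. This is an illustrative simplification with invented names and thresholds, not OpenAI's implementation:

```python
import random

class ADRParameter:
    """One randomized simulation parameter (e.g. object mass) whose
    sampling range expands as training succeeds (hypothetical names)."""
    def __init__(self, nominal, step, low_bound, high_bound):
        self.low = nominal          # current range starts collapsed
        self.high = nominal
        self.step = step
        self.low_bound = low_bound  # hard physical limits
        self.high_bound = high_bound

    def sample(self):
        return random.uniform(self.low, self.high)

    def expand(self):
        """Widen the range one step, clipped to the hard bounds."""
        self.low = max(self.low - self.step, self.low_bound)
        self.high = min(self.high + self.step, self.high_bound)

def adr_loop(param, evaluate, threshold=0.8, iterations=10):
    """Expand the randomization range whenever measured success on
    the current range exceeds a threshold (simplified ADR)."""
    for _ in range(iterations):
        if evaluate(param) >= threshold:
            param.expand()
    return param.low, param.high
```

In the real system many such parameters expand independently, so the curriculum emerges automatically from the policy's own performance rather than a hand-tuned schedule.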
RLDX-1 Technical Report
RLDX-1 is a general-purpose robotic policy for dexterous manipulation that uses a Multi-Stream Action Transformer architecture to integrate heterogeneous input modalities, outperforming existing vision-language-action (VLA) models on real-world tasks.
Multi-Goal Reinforcement Learning: Challenging robotics environments and request for research
OpenAI introduces a suite of challenging multi-goal reinforcement learning tasks for robotics using Fetch and Shadow Dexterous Hand hardware, integrated with OpenAI Gym, along with research directions for improving RL algorithms.
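The multi-goal environments mentioned above expose a goal-conditioned interface: observations are dictionaries with `observation`, `achieved_goal`, and `desired_goal` keys, and a sparse reward compares the achieved goal to the desired one. A toy stand-in that mimics this interface without depending on the real `gym` package:

```python
import numpy as np

class ToyGoalEnv:
    """Minimal goal-conditioned environment mimicking the dict-observation
    interface of the Gym robotics tasks (illustrative, not the real API)."""
    def __init__(self, goal, threshold=0.05):
        self.goal = np.asarray(goal, dtype=float)
        self.threshold = threshold
        self.pos = np.zeros_like(self.goal)

    def _obs(self):
        return {
            "observation": self.pos.copy(),
            "achieved_goal": self.pos.copy(),
            "desired_goal": self.goal.copy(),
        }

    def reset(self):
        self.pos = np.zeros_like(self.goal)
        return self._obs()

    def step(self, action):
        self.pos = self.pos + np.asarray(action, dtype=float)
        obs = self._obs()
        reward = self.compute_reward(obs["achieved_goal"], obs["desired_goal"])
        done = reward == 0.0
        return obs, reward, done, {}

    def compute_reward(self, achieved, desired):
        """Sparse reward: 0 on success, -1 otherwise — the convention
        used by the multi-goal robotics environments."""
        return 0.0 if np.linalg.norm(achieved - desired) < self.threshold else -1.0
```

Exposing `compute_reward` as a separate method is what enables goal-relabeling algorithms such as Hindsight Experience Replay, one of the research directions the suite was designed to support.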
DeVI: Physics-based Dexterous Human-Object Interaction via Synthetic Video Imitation
DeVI introduces a framework that turns text-conditioned synthetic videos into physically plausible dexterous robot control via a hybrid 3D-2D tracking reward, enabling zero-shot generalization to unseen objects.