OpenAI standardizes on PyTorch

OpenAI Blog News

Summary

OpenAI announces that it is standardizing on PyTorch as its primary deep learning framework to improve research productivity and GPU performance at scale. As part of the move, it has released a PyTorch version of Spinning Up in Deep RL and plans to open-source PyTorch bindings for its blocksparse kernels.

We are standardizing OpenAI’s deep learning framework on PyTorch.

# OpenAI standardizes on PyTorch

Source: [https://openai.com/index/openai-pytorch/](https://openai.com/index/openai-pytorch/)

We are standardizing OpenAI's deep learning framework on [PyTorch](https://pytorch.org/). In the past, we implemented projects in many frameworks depending on their relative strengths. We've now chosen to standardize to make it easier for our team to create and share optimized implementations of our models.

As part of this move, we've just released a [PyTorch-enabled version](https://github.com/openai/spinningup) of [Spinning Up in Deep RL](https://openai.com/index/spinning-up-in-deep-rl/), an open-source educational resource produced by OpenAI that makes it easier to learn about deep reinforcement learning. We are also in the process of writing PyTorch bindings for our highly optimized [blocksparse kernels](https://openai.com/index/block-sparse-gpu-kernels/), and will open-source those bindings in upcoming months.

The main reason we've chosen PyTorch is to increase our research productivity at scale on GPUs. It is very easy to try and execute new research ideas in PyTorch; for example, switching to PyTorch decreased our iteration time on research ideas in generative modeling from weeks to days. We're also excited to be joining a rapidly growing developer community, including organizations like Facebook and Microsoft, in pushing scale and performance on GPUs.

Going forward we'll primarily use PyTorch as our deep learning framework but sometimes use other ones when there's a specific technical reason to do so. Many of our teams have already made the switch, and we look forward to contributing to the PyTorch community in upcoming months.
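The post itself contains no code, but the fast-iteration claim usually comes down to PyTorch's define-by-run (eager) execution: every line runs immediately as ordinary Python, so models can be edited and re-run without a separate graph-compilation step. The sketch below is purely illustrative, not OpenAI's code; the model, names (`TinyNet`), and synthetic data are invented for this example.

```python
import torch
import torch.nn as nn

# A small illustrative model; the architecture and names here are
# invented for this sketch, not taken from any OpenAI project.
class TinyNet(nn.Module):
    def __init__(self, dim_in: int = 32, dim_hidden: int = 64, dim_out: int = 1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim_in, dim_hidden),
            nn.ReLU(),
            nn.Linear(dim_hidden, dim_out),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

device = "cuda" if torch.cuda.is_available() else "cpu"
model = TinyNet().to(device)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Synthetic regression data, purely for demonstration.
x = torch.randn(256, 32, device=device)
y = torch.randn(256, 1, device=device)

for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)   # eager execution: the forward pass runs immediately
    loss.backward()               # autograd computes gradients on the fly
    optimizer.step()
```

Because each line executes immediately, intermediate tensors can be inspected with ordinary Python tools (`print`, `pdb`) mid-training, which is the design property most often credited for shorter research iteration cycles like the weeks-to-days improvement the post describes.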

Similar Articles

Spinning Up in Deep RL

OpenAI Blog

OpenAI released 'Spinning Up in Deep RL,' an educational toolkit featuring introductory materials, curated paper lists, and clean standalone implementations of key RL algorithms (VPG, TRPO, PPO, DDPG, TD3, SAC) designed to help newcomers learn deep reinforcement learning from scratch.

OpenAI and Microsoft

OpenAI Blog

OpenAI and Microsoft announced a partnership to run OpenAI's large-scale experiments on Azure, making it the primary cloud platform for OpenAI's deep learning and AI research. The collaboration will leverage Azure's GPU infrastructure to accelerate AI research and share results with the broader community.

Infrastructure for deep learning

OpenAI Blog

OpenAI shares its deep learning infrastructure approach and open-sources kubernetes-ec2-autoscaler, a batch-optimized scaling manager for Kubernetes, emphasizing how infrastructure quality multiplies research progress.

OpenAI Gym Beta

OpenAI Blog

OpenAI releases OpenAI Gym, a public beta toolkit for developing and comparing reinforcement learning algorithms with a growing suite of environments and a platform for reproducible research. The toolkit aims to standardize RL benchmarks and address the lack of diverse, easy-to-use environments for the research community.

Scaling AI for everyone

OpenAI Blog

OpenAI announces $110B in new investment at a $730B pre-money valuation, including major funding from SoftBank, NVIDIA, and Amazon, along with strategic partnerships to expand compute capacity and global reach for AI products. The funding aims to accelerate deployment of frontier AI across consumers, developers, and enterprises worldwide.