Tag: #deep-learning

Cards List

Nonlinear computation in deep linear networks

OpenAI Blog · 2017-09-29

OpenAI research explores how nonlinear computation can emerge in deep linear networks, presenting theoretical and empirical analysis with code examples using TensorFlow.
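
The post's central observation can be sketched in a few lines: floating-point multiplication is only approximately linear, so a mathematically linear map can compute a nonlinearity near the underflow threshold. A minimal sketch (in double precision, so the scale constant below is illustrative; the post works in float32 with TensorFlow):

```python
def float_identity(x, scale=1e-38):
    """Mathematically the identity map, but nonlinear in float arithmetic:
    the intermediate product underflows to zero for tiny inputs."""
    return (x * scale) / scale

# Ordinary magnitudes round-trip exactly:
print(float_identity(1.0))     # 1.0
# Sufficiently tiny inputs underflow in the intermediate product and are
# clipped to zero, like a crude activation function:
print(float_identity(1e-320))  # 0.0
```

The "linear" scale-down-then-scale-up thus behaves like a hard threshold near the smallest representable floats, which is the loophole the post exploits.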

Proximal Policy Optimization

OpenAI Blog · 2017-07-20

OpenAI introduces Proximal Policy Optimization (PPO), a reinforcement learning algorithm that matches or outperforms state-of-the-art methods while being simpler to implement and tune. PPO uses a novel clipped objective function to constrain policy updates and has since become OpenAI's default RL algorithm.
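
The clipped objective at the heart of PPO fits in a few lines. The sketch below evaluates it for a single sample; the epsilon value is illustrative:

```python
def ppo_clip_objective(ratio, advantage, epsilon=0.2):
    """L = min(r * A, clip(r, 1 - eps, 1 + eps) * A), maximized over the policy.

    ratio:     pi_new(a|s) / pi_old(a|s)
    advantage: estimated advantage A(s, a)
    """
    clipped = max(1.0 - epsilon, min(ratio, 1.0 + epsilon))
    return min(ratio * advantage, clipped * advantage)

# With a positive advantage, gains from pushing the ratio past 1 + eps are
# clipped away, removing the incentive for a large policy update:
print(ppo_clip_objective(1.5, advantage=1.0))   # clipped to 1.2
# With a negative advantage, the min keeps the worse unclipped value, so
# overly large ratios are still penalized:
print(ppo_clip_objective(1.5, advantage=-1.0))  # stays at -1.5
```

The outer min is what makes the constraint one-sided: the objective is a pessimistic bound on the unclipped surrogate.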

OpenAI Baselines: DQN

OpenAI Blog · 2017-05-24

OpenAI shares lessons learned while implementing DQN for its Baselines project, covering debugging tips such as greyscale calibration issues, hyperparameter tuning, and the correct interpretation of the Huber loss in the original Nature paper.
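
The Huber-loss point is worth making concrete: the Nature paper's "error clipping" is usually read as this loss, not as clipping the squared loss itself. A minimal sketch (delta value illustrative):

```python
def huber_loss(x, delta=1.0):
    """Quadratic for |x| <= delta, linear beyond: large TD errors therefore
    get bounded gradients instead of exploding quadratic ones."""
    if abs(x) <= delta:
        return 0.5 * x * x
    return delta * (abs(x) - 0.5 * delta)

print(huber_loss(0.5))  # 0.125 (quadratic region)
print(huber_loss(3.0))  # 2.5   (linear region)
```

The two branches meet smoothly at |x| = delta, so the gradient is continuous, unlike naive loss clipping.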

Equivalence between policy gradients and soft Q-learning

OpenAI Blog · 2017-04-21

OpenAI researchers demonstrate a precise mathematical equivalence between soft (entropy-regularized) Q-learning and policy gradient methods in reinforcement learning, providing theoretical insight into why Q-learning works despite inaccurate value estimates. They validate this equivalence empirically on the Atari benchmark and show a Q-learning method can closely match A3C's learning dynamics.
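
The entropy-regularized quantities behind the equivalence can be sketched directly (alpha and the Q-values below are illustrative):

```python
import math

def soft_value(q_values, alpha=1.0):
    """V(s) = alpha * log sum_a exp(Q(s, a) / alpha): the entropy-regularized
    ('soft') state value, a smooth maximum over the Q-values."""
    return alpha * math.log(sum(math.exp(q / alpha) for q in q_values))

def soft_policy(q_values, alpha=1.0):
    """pi(a|s) proportional to exp(Q(s, a) / alpha): the optimal policy for
    the entropy-regularized objective. Its log-probabilities are advantage-like
    terms (Q - V) / alpha, which is where the link to policy gradients appears."""
    v = soft_value(q_values, alpha)
    return [math.exp((q - v) / alpha) for q in q_values]

probs = soft_policy([1.0, 2.0, 3.0])
print(sum(probs))  # normalizes to 1 by construction
```

As alpha shrinks, the soft value approaches the hard max and the policy approaches the greedy one, recovering standard Q-learning in the limit.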

Spam detection in the physical world

OpenAI Blog · 2017-04-01

OpenAI demonstrates that domain randomization—randomly varying colors, textures, lighting, and camera settings in simulated training data—enables deep learning models to effectively transfer from simulation to real-world robotic spam detection tasks without retraining from scratch.
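
Domain randomization itself is simple to sketch: every simulated training scene gets freshly randomized appearance and camera parameters, so a model trained across many such scenes cannot overfit to any one rendering. The parameter names and ranges below are hypothetical, not taken from the post:

```python
import random

def randomize_scene(rng):
    """Draw one randomized simulation configuration (illustrative parameters)."""
    return {
        "object_color": [rng.random() for _ in range(3)],  # random RGB in [0, 1]
        "texture_id": rng.randrange(1000),                 # random surface texture
        "light_intensity": rng.uniform(0.2, 2.0),          # dim to harsh lighting
        "camera_fov_deg": rng.uniform(40.0, 70.0),         # varied camera optics
    }

rng = random.Random(0)  # seeded for reproducibility
training_scenes = [randomize_scene(rng) for _ in range(4)]
for scene in training_scenes:
    print(scene["camera_fov_deg"])
```

Each training image would then be rendered under one of these configurations before being fed to the model.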

Prediction and control with temporal segment models

OpenAI Blog · 2017-03-12

OpenAI introduces a method for learning complex nonlinear system dynamics using deep generative models over temporal segments, enabling stable long-horizon predictions and differentiable trajectory optimization for model-based control.

PixelCNN++: Improving the PixelCNN with discretized logistic mixture likelihood and other modifications

OpenAI Blog · 2017-01-19

PixelCNN++ introduces several architectural improvements to PixelCNN including discretized logistic mixture likelihood, downsampling, and shortcut connections, achieving state-of-the-art log likelihood results on CIFAR-10.
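
The discretized logistic likelihood can be sketched for a single mixture component (the paper sums over a mixture and handles the edge bins 0 and 255 with open intervals, both omitted here; the mean and scale below are illustrative):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def discretized_logistic_prob(x, mean, scale, bin_width=1 / 127.5):
    """P(pixel = x) for x in [-1, 1]: the logistic CDF mass falling inside
    the discrete bin centered at x (bin width 2/255 for 256 pixel levels)."""
    plus = sigmoid((x + bin_width / 2 - mean) / scale)
    minus = sigmoid((x - bin_width / 2 - mean) / scale)
    return plus - minus

# The bin at the component mean carries the most mass:
p_center = discretized_logistic_prob(0.0, mean=0.0, scale=0.1)
p_off = discretized_logistic_prob(0.5, mean=0.0, scale=0.1)
print(p_center > p_off)  # True
```

Because probability is assigned to whole bins rather than via a continuous density, the model cannot cheat by concentrating density between pixel values, which is part of why the likelihood numbers improve.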

OpenAI and Microsoft

OpenAI Blog · 2016-11-15

OpenAI and Microsoft announce a partnership to run OpenAI's large-scale experiments on Azure, making it the primary cloud platform for OpenAI's deep learning and AI research. The collaboration will leverage Azure's GPU infrastructure to accelerate AI research and share results with the broader community.

Variational lossy autoencoder

OpenAI Blog · 2016-11-08

OpenAI researchers present a Variational Lossy Autoencoder (VLAE) that combines VAEs with neural autoregressive models (RNN, MADE, PixelRNN/CNN) to learn controllable global representations, achieving state-of-the-art results on MNIST, OMNIGLOT, and Caltech-101 Silhouettes density estimation tasks.

Transfer from simulation to real world through learning deep inverse dynamics model

OpenAI Blog · 2016-10-11

This paper proposes a method to bridge the simulation-to-real-world gap in robotics by learning a deep inverse dynamics model that maps desired next states (from simulation) to appropriate real-world actions. The approach is evaluated against baselines like output error control and Gaussian dynamics adaptation.
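
The control loop can be sketched with stand-ins for the learned model and the real-world plant (everything here is a hypothetical toy, not the paper's setup):

```python
def inverse_dynamics(state, desired_next_state):
    """Stand-in for the learned model a_t = f(s_t, s_{t+1}); in the paper this
    is a deep network trained on real-world transitions. Here: a simple
    proportional controller toward the desired state."""
    return [0.5 * (d - s) for s, d in zip(state, desired_next_state)]

def step(state, action):
    """Stand-in for the real-world plant dynamics."""
    return [s + a for s, a in zip(state, action)]

state = [0.0, 0.0]
desired = [1.0, -1.0]  # next state proposed by the simulation-trained policy
for _ in range(10):
    action = inverse_dynamics(state, desired)
    state = step(state, action)
print(state)  # converges toward [1.0, -1.0]
```

The key idea survives the toy: the simulator supplies *where to go next*, and the learned inverse model supplies *what real action gets there*, absorbing the sim-to-real mismatch.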

Infrastructure for deep learning

OpenAI Blog · 2016-08-29

OpenAI shares their deep learning infrastructure approach and open-sources kubernetes-ec2-autoscaler, a batch-optimized scaling manager for Kubernetes, emphasizing how infrastructure quality multiplies research progress.

Team update

OpenAI Blog · 2016-05-25

OpenAI announces a new cohort of team members joining the organization, including researchers, engineers, and designers with backgrounds in deep learning, competitive programming, robotics, and AI safety.

OpenAI Gym Beta

OpenAI Blog · 2016-04-27

OpenAI releases OpenAI Gym, a public beta toolkit for developing and comparing reinforcement learning algorithms with a growing suite of environments and a platform for reproducible research. The toolkit aims to standardize RL benchmarks and address the lack of diverse, easy-to-use environments for the research community.
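
The Gym interface is easy to sketch against a toy environment. The environment below is a hand-rolled stand-in, not part of the Gym release; it follows the original reset/step protocol, where step returns an (observation, reward, done, info) tuple:

```python
import random

class CoinFlipEnv:
    """Toy Gym-style environment: guess a coin flip, one step per episode."""

    def reset(self):
        self._coin = random.randrange(2)
        return 0  # a single dummy observation

    def step(self, action):
        reward = 1.0 if action == self._coin else 0.0
        return 0, reward, True, {}  # obs, reward, done, info

env = CoinFlipEnv()
total = 0.0
for episode in range(100):
    obs = env.reset()
    done = False
    while not done:
        obs, reward, done, info = env.step(random.randrange(2))
        total += reward
print(total / 100)  # a random agent wins about half the time
```

Any algorithm written against this loop runs unchanged on every environment exposing the same interface, which is the standardization the toolkit aims for.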

Team++

OpenAI Blog · 2016-03-31

OpenAI announces the arrival of several prominent machine learning researchers and engineers, including Ian Goodfellow, Alec Radford, and Yura Burda, joining the team in recent months. The announcement highlights the diverse expertise and notable contributions of new hires spanning generative modeling, reinforcement learning, and deep learning.

Weight normalization: A simple reparameterization to accelerate training of deep neural networks

OpenAI Blog · 2016-02-25

OpenAI presents weight normalization, a reparameterization technique that decouples weight vector length from direction to improve neural network training convergence and computational efficiency without introducing minibatch dependencies, making it suitable for RNNs and noise-sensitive applications.
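
The reparameterization is one line of math, w = (g / ||v||) * v, and a few lines of code (the numbers below are illustrative; in training, gradients are taken with respect to g and v instead of w):

```python
import math

def weight_norm(v, g):
    """Rebuild a weight vector from direction v and learned length g:
    w = (g / ||v||) * v, so ||w|| == g regardless of v's magnitude."""
    norm = math.sqrt(sum(x * x for x in v))
    return [g * x / norm for x in v]

w = weight_norm([3.0, 4.0], g=2.0)
print(w)  # [1.2, 1.6]: direction from v, length exactly g
```

Decoupling length from direction this way conditions the optimization without any minibatch statistics, which is why it suits RNNs where batch normalization is awkward.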
