Learning sparse neural networks through L₀ regularization

OpenAI Blog · Papers

Summary

OpenAI proposes a practical L₀ regularization method for neural networks that encourages weights to become exactly zero during training, enabling network pruning for improved speed and generalization. The method uses stochastic gates and introduces the hard concrete distribution to make the non-differentiable L₀ norm optimization tractable via gradient descent.


# Learning sparse neural networks through L₀ regularization

Source: [https://openai.com/index/learning-sparse-neural-networks-through-l0-regularization/](https://openai.com/index/learning-sparse-neural-networks-through-l0-regularization/)

## Abstract

We propose a practical method for L₀ norm regularization for neural networks: pruning the network during training by encouraging weights to become exactly zero. Such regularization is interesting since (1) it can greatly speed up training and inference, and (2) it can improve generalization. AIC and BIC, well-known model selection criteria, are special cases of L₀ regularization. However, since the L₀ norm of weights is non-differentiable, we cannot incorporate it directly as a regularization term in the objective function. We propose a solution through the inclusion of a collection of non-negative stochastic gates, which collectively determine which weights to set to zero. We show that, somewhat surprisingly, for certain distributions over the gates, the expected L₀ norm of the resulting gated weights is differentiable with respect to the distribution parameters. We further propose the *hard concrete* distribution for the gates, which is obtained by "stretching" a binary concrete distribution and then transforming its samples with a hard-sigmoid. The parameters of the distribution over the gates can then be jointly optimized with the original network parameters. As a result, our method allows for straightforward and efficient learning of model structures with stochastic gradient descent and allows for conditional computation in a principled way. We perform various experiments to demonstrate the effectiveness of the resulting approach and regularizer.
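To make the gate construction concrete, here is a minimal PyTorch sketch of hard concrete gates and the differentiable expected-L₀ penalty, assuming the constants reported in the paper (temperature 2/3 and stretch interval (−0.1, 1.1)); the module name `L0Gate`, its interface, and the deterministic test-time gate are illustrative choices, not the authors' released implementation.

```python
import math

import torch
import torch.nn as nn


class L0Gate(nn.Module):
    """Hard concrete gates: a binary concrete sample is stretched to
    (gamma, zeta) and clamped with a hard-sigmoid, so a gate can land
    on exactly 0 (weight pruned) or exactly 1 (weight kept)."""

    def __init__(self, num_gates, temperature=2.0 / 3.0, gamma=-0.1, zeta=1.1):
        super().__init__()
        # log_alpha parameterizes each gate's distribution and is learned
        # jointly with the original network weights.
        self.log_alpha = nn.Parameter(torch.zeros(num_gates))
        self.temperature = temperature
        self.gamma = gamma  # stretch lower bound (< 0)
        self.zeta = zeta    # stretch upper bound (> 1)

    def forward(self):
        if self.training:
            # Reparameterized sample from the binary concrete distribution.
            u = torch.rand_like(self.log_alpha).clamp(1e-6, 1.0 - 1e-6)
            s = torch.sigmoid(
                (torch.log(u) - torch.log(1.0 - u) + self.log_alpha)
                / self.temperature
            )
        else:
            # Deterministic gate for evaluation (an illustrative choice).
            s = torch.sigmoid(self.log_alpha)
        # Stretch, then clamp with a hard-sigmoid so that probability mass
        # sits at exactly 0 and exactly 1.
        s_bar = s * (self.zeta - self.gamma) + self.gamma
        return s_bar.clamp(0.0, 1.0)

    def expected_l0(self):
        # P(gate != 0) for each gate, summed: the differentiable surrogate
        # for the L0 norm that is added to the training objective.
        return torch.sigmoid(
            self.log_alpha - self.temperature * math.log(-self.gamma / self.zeta)
        ).sum()
```

In training, the gated weights `w * gate().view_as(w)` would replace `w` in the forward pass, and a term such as `lam * gate.expected_l0()` (with `lam` an illustrative regularization strength) would be added to the task loss; gates that settle at exactly zero mark weights that can be pruned.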

Similar Articles

Understanding neural networks through sparse circuits

OpenAI Blog

OpenAI researchers present methods for training sparse neural networks that are easier to interpret by forcing most weights to zero, enabling the discovery of small, disentangled circuits that can explain model behavior while maintaining performance. This work aims to advance mechanistic interpretability as a complement to post-hoc analysis of dense networks and support AI safety goals.

JumpLoRA: Sparse Adapters for Continual Learning in Large Language Models

arXiv cs.CL

JumpLoRA introduces a novel sparse adapter framework for continual learning in LLMs using JumpReLU gating to dynamically isolate task parameters and prevent catastrophic forgetting. The method enhances LoRA-based approaches and outperforms state-of-the-art continual learning methods like ELLA.

Estimating worst case frontier risks of open weight LLMs

OpenAI Blog

OpenAI researchers study worst-case frontier risks of releasing open-weight LLMs through malicious fine-tuning (MFT) in biology and cybersecurity domains, finding that open-weight models underperform frontier closed-weight models and don't substantially advance harmful capabilities.

Accelerating LMO-Based Optimization via Implicit Gradient Transport

arXiv cs.LG

This paper proposes LMO-IGT, a new class of stochastic optimization methods that accelerates convergence using implicit gradient transport while maintaining a single-gradient-per-iteration structure. It introduces a unified theoretical framework and demonstrates improved performance over existing LMO-based optimizers like Muon.