#distributed-training

Tag · Cards List

Decoupled DiLoCo: A new frontier for resilient, distributed AI training

Google DeepMind Blog · 2026-04-22

DeepMind introduces Decoupled DiLoCo, a new distributed AI training architecture that enables resilient, low-bandwidth training of large models across globally dispersed data centers by isolating the impact of hardware failures.
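
The card does not detail Decoupled DiLoCo itself, but the DiLoCo pattern it extends is well documented: each worker takes many local optimizer steps and only exchanges an averaged parameter delta ("pseudo-gradient") at the end of a round, so communication is infrequent and low-bandwidth. Below is a minimal single-process sketch of that pattern; the toy model, data, and hyperparameters are illustrative assumptions, not DeepMind's setup.

    import copy
    import torch

    def diloco_round(global_model, workers, make_batch, inner_steps=500, outer_lr=0.7):
        """One communication round: local training everywhere, then one outer update."""
        global_params = [p.detach().clone() for p in global_model.parameters()]
        deltas = [torch.zeros_like(p) for p in global_params]

        for model in workers:
            # Each worker starts the round from the shared global weights.
            model.load_state_dict(global_model.state_dict())
            inner_opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
            for _ in range(inner_steps):                  # no communication in this loop
                x, y = make_batch()
                loss = torch.nn.functional.cross_entropy(model(x), y)
                inner_opt.zero_grad()
                loss.backward()
                inner_opt.step()
            # Pseudo-gradient: how far this worker drifted from the global weights.
            for d, g, p in zip(deltas, global_params, model.parameters()):
                d += (g - p.detach()) / len(workers)

        # Outer update on the averaged pseudo-gradient (DiLoCo uses Nesterov SGD here;
        # a plain SGD step keeps the sketch short).
        with torch.no_grad():
            for p, d in zip(global_model.parameters(), deltas):
                p -= outer_lr * d

    # Toy usage: 4 workers, random data, 10 communication rounds.
    torch.manual_seed(0)
    global_model = torch.nn.Linear(32, 4)
    workers = [copy.deepcopy(global_model) for _ in range(4)]
    make_batch = lambda: (torch.randn(8, 32), torch.randint(0, 4, (8,)))
    for _ in range(10):
        diloco_round(global_model, workers, make_batch, inner_steps=20)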

ResBM: a new transformer-based architecture for low-bandwidth pipeline-parallel training, achieving 128× activation compression [R]

Reddit r/MachineLearning · 2026-04-16

ResBM introduces a transformer-based architecture with residual encoder-decoder bottlenecks for pipeline-parallel training, achieving 128× activation compression while maintaining convergence. The work advances decentralized, internet-grade distributed training by reducing inter-stage communication overhead.
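
ResBM's actual residual encoder-decoder design is not described in the card; the sketch below only illustrates the general idea it relies on, namely placing a learned bottleneck at a pipeline-stage boundary so that a much smaller code, rather than the full hidden state, crosses the slow inter-node link. Layer sizes and names are assumptions.

    import torch
    import torch.nn as nn

    class ActivationBottleneck(nn.Module):
        """Compress hidden states by `ratio` along the feature dimension."""
        def __init__(self, d_model: int, ratio: int = 128):
            super().__init__()
            d_code = max(1, d_model // ratio)
            self.encoder = nn.Linear(d_model, d_code)   # runs on the sending stage
            self.decoder = nn.Linear(d_code, d_model)   # runs on the receiving stage

        def compress(self, h: torch.Tensor) -> torch.Tensor:
            return self.encoder(h)                      # this is what goes over the wire

        def decompress(self, code: torch.Tensor) -> torch.Tensor:
            return self.decoder(code)

    # Toy boundary between "stage 0" and "stage 1" of a pipeline:
    d_model = 1024
    stage0 = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
    stage1 = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
    bottleneck = ActivationBottleneck(d_model, ratio=128)

    x = torch.randn(2, 256, d_model)            # (batch, seq, hidden)
    h = stage0(x)
    code = bottleneck.compress(h)               # 1024 -> 8 features per token (128x smaller)
    h_hat = bottleneck.decompress(code)         # reconstructed on the next stage
    out = stage1(h_hat)
    print(h.shape, "->", code.shape)            # [2, 256, 1024] -> [2, 256, 8]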

Keep the Tokens Flowing: Lessons from 16 Open-Source RL Libraries

Hugging Face Blog · 2026-03-10

Hugging Face publishes a comprehensive analysis of 16 open-source reinforcement learning libraries, examining architectural patterns for asynchronous RL training and presenting design lessons for TRL's async trainer to address generation bottlenecks and weight synchronization challenges.
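
As a rough illustration of the asynchronous pattern the post analyses (this is not TRL's API), the sketch below runs rollout generation and gradient updates concurrently: the trainer consumes rollouts from a queue and periodically publishes fresh weights to the generator, which therefore sometimes generates with slightly stale parameters. The thread/queue design, names, and placeholder objective are all assumptions.

    import queue
    import threading
    import torch

    rollout_queue = queue.Queue(maxsize=8)     # completed rollouts waiting for the trainer
    weight_lock = threading.Lock()
    published_weights = {}                     # latest weights the trainer has shared

    def generator_loop(policy, stop):
        """Keeps producing rollouts with whatever weights were last published."""
        while not stop.is_set():
            with weight_lock:
                if published_weights:
                    policy.load_state_dict(published_weights)
            obs = torch.randn(16, 32)                          # stand-in for prompts
            with torch.no_grad():
                logits = policy(obs)                           # stand-in for generation
            rollout_queue.put({"obs": obs, "logits": logits})  # may be slightly off-policy

    def trainer_loop(policy, optimizer, steps=50, sync_every=4):
        for step in range(steps):
            batch = rollout_queue.get()                        # blocks on the generator
            loss = policy(batch["obs"]).pow(2).mean()          # placeholder RL objective
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
            if step % sync_every == 0:                         # push fresh weights
                with weight_lock:
                    published_weights.clear()
                    published_weights.update(
                        {k: v.detach().clone() for k, v in policy.state_dict().items()}
                    )

    # Two copies of the policy: one being trained, one doing generation.
    policy_train = torch.nn.Linear(32, 4)
    policy_gen = torch.nn.Linear(32, 4)
    opt = torch.optim.Adam(policy_train.parameters(), lr=1e-3)
    stop = threading.Event()
    threading.Thread(target=generator_loop, args=(policy_gen, stop), daemon=True).start()
    trainer_loop(policy_train, opt)
    stop.set()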

Ulysses Sequence Parallelism: Training with Million-Token Contexts

Hugging Face Blog · 2026-03-09

Ulysses Sequence Parallelism is a technique for training LLMs with million-token contexts by distributing sequence chunks across GPUs, reducing memory requirements and enabling efficient long-context training. It integrates with HuggingFace Accelerate, Transformers Trainer, and TRL, with support for Flash Attention and DeepSpeed ZeRO.
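
The core layout change can be simulated in a single process: with P ranks, each rank starts with seq_len/P tokens and all attention heads; an all-to-all before attention regroups the data so each rank instead holds the full sequence for n_heads/P heads, runs ordinary full-sequence attention, and an inverse all-to-all restores sequence sharding afterwards. The sketch below mimics that reshuffling with plain tensor ops; in a real job each chunk lives on its own GPU and torch.distributed.all_to_all performs the exchange, and the specific sizes here are assumptions.

    import torch

    P, seq_len, n_heads, d_head = 4, 1024, 16, 64
    assert seq_len % P == 0 and n_heads % P == 0

    # What rank r holds before attention: its local sequence chunk, all heads.
    local_chunks = [torch.randn(seq_len // P, n_heads, d_head) for _ in range(P)]

    def seq_to_head_sharding(chunks):
        """Simulated all-to-all: switch from sequence sharding to head sharding."""
        per_rank_heads = n_heads // P
        out = []
        for r in range(P):   # destination rank r receives head group r from every rank
            pieces = [c[:, r * per_rank_heads:(r + 1) * per_rank_heads, :] for c in chunks]
            out.append(torch.cat(pieces, dim=0))   # full sequence, n_heads/P heads
        return out

    head_shards = seq_to_head_sharding(local_chunks)
    print(local_chunks[0].shape)   # torch.Size([256, 16, 64]) - sequence shard, all heads
    print(head_shards[0].shape)    # torch.Size([1024, 4, 64]) - full sequence, head shard
    # Each rank now computes standard attention over the full sequence for its heads,
    # then the inverse all-to-all switches the result back to sequence sharding.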

Techniques for training large neural networks

OpenAI Blog · 2022-06-09

OpenAI presents comprehensive techniques for training large neural networks across distributed GPU clusters, covering data parallelism, pipeline parallelism, tensor parallelism, and mixture-of-experts approaches to overcome engineering and scalability challenges.
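
One of the four approaches, tensor (intra-layer) parallelism, reduces to a simple identity that a single-process sketch can verify: split the first weight matrix of an MLP by columns and the second by rows, let each shard compute a partial output, and sum the partials (the all-reduce step on real hardware) to recover the exact un-split result. Sizes below are arbitrary.

    import torch

    torch.manual_seed(0)
    d, hidden, P = 64, 256, 4
    x = torch.randn(8, d)
    W1 = torch.randn(d, hidden)
    W2 = torch.randn(hidden, d)

    # Reference: the un-partitioned two-layer MLP.
    ref = torch.relu(x @ W1) @ W2

    # Tensor parallel: "device" p holds a column block of W1 and the matching row block of W2.
    W1_shards = W1.chunk(P, dim=1)      # column split: each (d, hidden/P)
    W2_shards = W2.chunk(P, dim=0)      # row split:    each (hidden/P, d)

    partials = [torch.relu(x @ w1) @ w2 for w1, w2 in zip(W1_shards, W2_shards)]
    out = sum(partials)                 # the all-reduce in a real multi-GPU run

    print(torch.allclose(out, ref, atol=1e-4))   # True: the sharding is mathematically exact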

PyTorch Distributed: Experiences on Accelerating Data Parallel Training

Papers with Code Trending · 2020-06-28

This paper details the design and optimization of PyTorch's distributed data parallel module, highlighting techniques like gradient bucketing and computation-communication overlap that enable near-linear scalability across 256 GPUs.
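
A minimal usage sketch (assuming a torchrun launch on NCCL-capable GPUs): the bucketing and overlap the paper describes happen inside DistributedDataParallel itself, with bucket_cap_mb bounding how large each gradient bucket grows before its asynchronous all-reduce starts during backward(). The toy model and training loop are illustrative.

    # Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
    import os
    import torch
    import torch.distributed as dist
    from torch.nn.parallel import DistributedDataParallel as DDP

    dist.init_process_group("nccl")
    local_rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(local_rank)

    model = torch.nn.Sequential(
        torch.nn.Linear(1024, 4096), torch.nn.ReLU(), torch.nn.Linear(4096, 10)
    ).cuda(local_rank)

    # bucket_cap_mb: upper bound on each gradient bucket; smaller buckets start
    # communicating earlier, larger ones amortise per-message latency better.
    ddp_model = DDP(model, device_ids=[local_rank], bucket_cap_mb=25)
    optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)

    for _ in range(10):
        x = torch.randn(32, 1024, device=local_rank)
        y = torch.randint(0, 10, (32,), device=local_rank)
        loss = torch.nn.functional.cross_entropy(ddp_model(x), y)
        optimizer.zero_grad()
        loss.backward()        # all-reduce of ready buckets overlaps with this call
        optimizer.step()       # every rank now holds identical averaged gradients

    dist.destroy_process_group()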
