autoregressive-models

#autoregressive-models

When to Think, When to Speak: Learning Disclosure Policies for LLM Reasoning

Hugging Face Daily Papers · 4d ago

This paper introduces Side-by-Side Interleaved Reasoning, a method for controlling disclosure timing in autoregressive models to improve accuracy and efficiency. It demonstrates improved performance on benchmarks using Qwen3 models by interleaving private reasoning with partial disclosures.


Speculative Decoding for Autoregressive Video Generation

Hugging Face Daily Papers · 2026-04-19

SDVG adapts speculative decoding to autoregressive video diffusion, using an image-quality router to achieve up to 2.09× speed-up with 95.7% quality retention on MovieGenVideoBench.
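SDVG's router-based variant is specific to the paper, but the base technique it adapts, speculative decoding, can be illustrated generically. The sketch below is a minimal greedy version with hypothetical `target`/`draft` callables standing in for the two models: the cheap draft model proposes k tokens, the target model verifies them, and the longest agreeing prefix is kept.

```python
# Minimal greedy speculative decoding sketch (generic illustration,
# not SDVG's router-based method). `target` and `draft` are hypothetical
# callables mapping a token sequence to the next token.
from typing import Callable, List

def speculative_step(target: Callable[[List[int]], int],
                     draft: Callable[[List[int]], int],
                     ctx: List[int], k: int) -> List[int]:
    # Draft k tokens autoregressively with the cheap model.
    proposed = []
    for _ in range(k):
        proposed.append(draft(ctx + proposed))
    # Verify: the target scores each drafted position; accept while it agrees.
    accepted = []
    for t in range(k):
        expect = target(ctx + proposed[:t])
        if expect == proposed[t]:
            accepted.append(proposed[t])
        else:
            accepted.append(expect)  # substitute the target's own token
            break
    else:
        # All drafts accepted: the target's verification pass yields one
        # bonus token for free.
        accepted.append(target(ctx + proposed))
    return ctx + accepted
```

When the draft agrees with the target, one verification pass yields up to k+1 tokens; when it disagrees, decoding still advances by at least one correct token, so output quality is unchanged.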


(1D) Ordered Tokens Enable Efficient Test-Time Search

Hugging Face Daily Papers · 2026-04-16

This paper investigates how 1D coarse-to-fine token structures in autoregressive models improve test-time search efficiency compared to classical 2D grid tokenization. The authors show that such ordered tokens enable better test-time scaling and even training-free text-to-image generation when guided by image-text verifiers.


Efficient training of language models to fill in the middle

OpenAI Blog · 2022-07-28

OpenAI presents a simple data augmentation technique that enables autoregressive language models to perform fill-in-the-middle (FIM) text generation without harming left-to-right performance, with extensive ablations and best practices provided for training such models.
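The augmentation itself is a simple rearrangement: split each training document into a random prefix, middle, and suffix, then move the middle to the end so a left-to-right model learns to infill. A minimal sketch, using placeholder sentinel strings where the trained models use dedicated special tokens:

```python
# Sketch of the FIM data transformation (prefix-suffix-middle ordering).
# The <PRE>/<SUF>/<MID> strings are placeholders; real training uses
# special tokens added to the vocabulary.
import random

PRE, SUF, MID = "<PRE>", "<SUF>", "<MID>"

def fim_transform(doc: str, rng: random.Random) -> str:
    """Randomly split a document into prefix/middle/suffix and reorder it
    so the middle is generated last, conditioned on both sides."""
    i, j = sorted(rng.sample(range(len(doc) + 1), 2))
    prefix, middle, suffix = doc[:i], doc[i:j], doc[j:]
    return f"{PRE}{prefix}{SUF}{suffix}{MID}{middle}"
```

At inference time the model is prompted with `<PRE>prefix<SUF>suffix<MID>` and generates the missing middle; because the transform is applied only to a fraction of training data, ordinary left-to-right generation is preserved.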


Domain randomization and generative models for robotic grasping

OpenAI Blog · 2017-10-17

Researchers explore a data generation pipeline using domain randomization and procedurally generated objects to train a deep neural network for robotic grasp planning. The proposed autoregressive model achieves >90% success on unseen objects in simulation and 80% in the real world, despite being trained only on random simulated objects.


Variational lossy autoencoder

OpenAI Blog · 2016-11-08

OpenAI researchers present a Variational Lossy Autoencoder (VLAE) that combines VAEs with neural autoregressive models (RNN, MADE, PixelRNN/CNN) to learn controllable global representations, achieving state-of-the-art results on MNIST, OMNIGLOT, and Caltech-101 Silhouettes density estimation tasks.
