CausalCine: Real-Time Autoregressive Generation for Multi-Shot Video Narratives
Summary
CausalCine is a framework for real-time, interactive multi-shot video generation that uses causal modeling and dynamic memory routing to improve cross-shot coherence in autoregressive models.
Source: https://huggingface.co/papers/2605.12496
Abstract
CausalCine enables interactive, multi-shot video generation by addressing limitations of autoregressive models through causal modeling, dynamic memory routing, and real-time distillation techniques.
Autoregressive video generation aims at real-time, open-ended synthesis. Yet cinematic storytelling is not merely the endless extension of a single scene; it requires progressing through evolving events, viewpoint shifts, and discrete shot boundaries. Existing autoregressive models often struggle in this setting. Trained primarily for short-horizon continuation, they treat long sequences as extended single shots, inevitably suffering from motion stagnation and semantic drift during long rollouts. To bridge this gap, we introduce CausalCine, an interactive autoregressive framework that transforms multi-shot video generation into an online directing process. CausalCine generates causally across shot changes, accepts dynamic prompts on the fly, and reuses context without regenerating previous shots. To achieve this, we first train a causal base model on native multi-shot sequences to learn complex shot transitions prior to acceleration. We then propose Content-Aware Memory Routing (CAMR), which dynamically retrieves historical KV entries according to attention-based relevance scores rather than temporal proximity, preserving cross-shot coherence under bounded active memory. Finally, we distill the causal base model into a few-step generator for real-time interactive generation. Extensive experiments demonstrate that CausalCine significantly outperforms autoregressive baselines and approaches the capability of bidirectional models while unlocking the streaming interactivity of causal generation. Demo available at https://yihao-meng.github.io/CausalCine/
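The CAMR step described in the abstract can be pictured as a relevance-based top-k selection over cached keys. Below is a minimal PyTorch sketch of that idea, assuming a dot-product relevance score against a pooled per-step query and a fixed memory budget; camr_select, its signature, and the scoring rule are illustrative assumptions, not the paper's implementation.

# Hypothetical sketch of Content-Aware Memory Routing (CAMR): retrieve
# past KV entries by attention relevance to the current query rather
# than by recency, keeping only a bounded number of entries active.
# The function name, signature, and scoring rule are assumptions.
import torch

def camr_select(query, mem_keys, mem_values, budget):
    # query:      (d,)    current query vector (e.g., pooled over the new shot)
    # mem_keys:   (n, d)  cached keys from previous shots
    # mem_values: (n, d)  cached values from previous shots
    # budget:     int     bound on the active memory size
    # Attention-style relevance: scaled dot product of each cached key
    # against the current query (a stand-in for the paper's scores).
    scores = mem_keys @ query / mem_keys.shape[-1] ** 0.5   # (n,)
    k = min(budget, scores.shape[0])
    top = torch.topk(scores, k).indices   # most relevant, not most recent
    return mem_keys[top], mem_values[top]

# Usage: route a 4-entry budget over 10 cached entries.
d, n = 64, 10
q = torch.randn(d)
K, V = torch.randn(n, d), torch.randn(n, d)
active_k, active_v = camr_select(q, K, V, budget=4)
print(active_k.shape, active_v.shape)  # torch.Size([4, 64]) torch.Size([4, 64])

In this reading, the budget is what bounds active memory, while selection by score rather than recency is what lets early-shot context survive across shot boundaries.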
Get this paper in your agent:
hf papers read 2605.12496
Don't have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash
Similar Articles
Causal Forcing++: Scalable Few-Step Autoregressive Diffusion Distillation for Real-Time Interactive Video Generation
Causal Forcing++ presents a novel causal consistency distillation method for frame-wise autoregressive video generation, achieving state-of-the-art quality with reduced latency and training cost.
MuSS: A Large-Scale Dataset and Cinematic Narrative Benchmark for Multi-Shot Subject-to-Video Generation
MuSS introduces a large-scale dataset and benchmark for multi-shot subject-to-video generation, addressing narrative logic and copy-paste issues in cinematic storytelling.
Experimenting with storyboard-planned AI cinematics instead of single-prompt generation
Explores a storyboard-planned approach for AI cinematics that builds the sequence structure before generating shots individually, producing more coherent video than single-prompt generation, while noting current weaknesses such as identity drift and unrealistic interaction physics.
Long Video Generation (4 minute read)
The article introduces A²RD, a novel architecture for generating consistent long videos using agentic autoregressive diffusion. It proposes a Retrieve–Synthesize–Refine–Update cycle and a new benchmark, LVBench-C, to address semantic drift in long-horizon video synthesis.
A^2RD: Agentic Autoregressive Diffusion for Long Video Consistency
A^2RD introduces an Agentic Autoregressive Diffusion architecture for long video synthesis, improving consistency and narrative coherence through a closed-loop self-improvement process.