D4RT: Teaching AI to see the world in four dimensions

Google DeepMind Blog · Models

Summary

DeepMind introduces D4RT, a unified AI model for dynamic 4D scene reconstruction and tracking that is up to 300x more efficient than previous methods. The model uses a query-based Transformer architecture to solve complex spatial and temporal tasks for robotics and AR applications.

D4RT: Unified, efficient 4D reconstruction and tracking up to 300x faster than prior methods.

Cached at: 05/08/26, 09:22 AM

# D4RT: Teaching AI to see the world in four dimensions

Source: [https://deepmind.google/blog/d4rt-teaching-ai-to-see-the-world-in-four-dimensions/](https://deepmind.google/blog/d4rt-teaching-ai-to-see-the-world-in-four-dimensions/)

January 22, 2026 · Research

Introducing D4RT, a unified AI model for 4D scene reconstruction and tracking across space and time.

Anytime we look at the world, we perform an extraordinary feat of memory and prediction. We see and understand things as they are at a given moment in time, as they were a moment ago, and how they are going to be in the moment to follow. Our mental model of the world maintains a persistent representation of reality, and we use that model to draw intuitive conclusions about the causal relationship between the past, present and future.

To help machines see the world more like we do, we can equip them with cameras, but that only solves the problem of input. To make sense of this input, computers must solve a complex, inverse problem: taking a video, a sequence of flat 2D projections, and recovering the rich, volumetric 3D world in motion.

Today, we are introducing [D4RT (Dynamic 4D Reconstruction and Tracking)](https://d4rt-paper.github.io/), a new AI model that unifies dynamic scene reconstruction into a single, efficient framework, bringing us closer to the next frontier of artificial intelligence: total perception of our dynamic reality.

## The Challenge of the Fourth Dimension

To understand a dynamic scene captured on 2D video, an AI model must track every pixel of every object as it moves through the three dimensions of space and the fourth dimension of time. In addition, it must disentangle this motion from the motion of the camera, maintaining a coherent representation even when objects move behind one another or leave the frame entirely.
Traditionally, capturing this level of geometry and motion from 2D videos requires computationally intensive processes or a patchwork of specialized AI models (some for depth, others for movement or camera angles), resulting in AI reconstructions that are slow and fragmented. D4RT’s simplified architecture and novel query mechanism place it at the forefront of 4D reconstruction while being up to 300x more efficient than previous methods: fast enough for real-time applications in robotics, augmented reality, and more.

## How D4RT Works: A Query-Based Approach

D4RT operates as a unified encoder-decoder Transformer architecture. The encoder first processes the input video into a compressed representation of the scene’s geometry and motion. Unlike older systems that employed separate modules for different tasks, D4RT calculates only what it needs using a flexible querying mechanism centered on a single, fundamental question: "Where is **a given pixel** from the video located **in 3D space** at an arbitrary **time**, as viewed from a **chosen camera**?"

Building on [our prior work](https://srt-paper.github.io/), a lightweight decoder then queries this representation to answer specific instances of the posed question. Because queries are independent, they can be processed in parallel on modern AI hardware. This makes D4RT extremely fast and scalable, whether it’s tracking just a few points or reconstructing an entire scene.

D4RT combines a powerful encoder that builds a rich, global understanding of the video with a lightweight decoder that answers thousands of queries in parallel. By asking specific questions, identifying where a source pixel is located at a target time and camera view, the model efficiently solves diverse tasks like tracking, depth estimation, and pose estimation through a single, flexible interface.
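A minimal sketch of what such a query interface might look like. Everything here is an illustrative assumption, not D4RT’s actual API: the `Query` fields, the `decode` name, and the `scene_code` representation are invented for exposition, and the decoder body is a dummy stand-in for the learned model.

```python
from dataclasses import dataclass
from typing import List, Sequence, Tuple

@dataclass(frozen=True)
class Query:
    """One query in the article's sense: where is source pixel (u, v)
    from frame t_src located in 3D space at time t_tgt, as viewed from
    camera cam_id? (Names are illustrative, not the real D4RT API.)"""
    u: int
    v: int
    t_src: int
    t_tgt: int
    cam_id: int

def decode(scene_code: Sequence[float], q: Query) -> Tuple[float, float, float]:
    # Stand-in for the lightweight decoder: a real model would attend over
    # the encoder's compressed `scene_code`; here we emit a deterministic
    # dummy 3D point just so the interface is runnable.
    x = q.u * 0.01 + scene_code[0]
    y = q.v * 0.01 + scene_code[1]
    z = 1.0 + 0.1 * (q.t_tgt - q.t_src)
    return (x, y, z)

def decode_batch(scene_code: Sequence[float],
                 queries: List[Query]) -> List[Tuple[float, float, float]]:
    # Queries are independent, so a real system maps them in parallel on
    # accelerator hardware; a plain loop stands in for that here.
    return [decode(scene_code, q) for q in queries]
```

The key design property the sketch tries to capture: because each query is answered independently, tracking a handful of points and densely reconstructing an entire frame are the same operation at different batch sizes.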
## Capabilities: Fast, Accurate 4D Understanding

With this flexible formulation, a wide variety of 4D tasks can now be solved by the model, including:

- **Point Tracking**: By querying a pixel’s location across different time steps, D4RT can predict its 3D trajectory. Importantly, a point need not be visible in other frames of the video for the model to make a prediction.
- **Point Cloud Reconstruction**: By freezing time and the camera viewpoint, D4RT can directly generate the complete 3D structure of a scene, eliminating extra steps such as separate camera estimation or per-video iterative optimization.
- **Camera Pose Estimation**: By generating and aligning 3D snapshots of a single moment from different viewpoints, D4RT can easily recover the camera’s trajectory.

As detailed in the [underlying technical report](https://arxiv.org/abs/2512.08924), D4RT outperforms previous methods across a wide spectrum of 4D reconstruction tasks. Qualitative comparisons show that while other methods struggle with dynamic objects, often duplicating them or failing to reconstruct them entirely, D4RT maintains a solid, continuous understanding of the moving world.

Crucially, D4RT’s precision does not come at the expense of efficiency. In testing, it performed 18x to 300x faster than the previous state of the art. For example, D4RT processed a one-minute video in roughly five seconds on a single TPU chip, whereas previous state-of-the-art methods could take up to ten minutes for the same task: an improvement of 120x.

## Downstream Applications

D4RT demonstrates that we don’t need to choose between accuracy and efficiency in 4D reconstruction. Its flexible, query-based system can capture our dynamic world in real time, paving the way for the next generation of spatial computing. This includes:

- **Robotics**: Robots need to navigate dynamic environments populated by moving people and objects.
D4RT can provide the spatial awareness required for safe navigation and dexterous manipulation.
- **Augmented Reality (AR)**: For AR glasses to overlay digital objects onto the real world, they need an instant, low-latency understanding of a scene’s geometry. D4RT’s efficiency contributes to making on-device deployment a tangible reality.
- **World Models**: By effectively disentangling camera motion, object motion, and static geometry, D4RT brings us a step closer to AI that possesses a true “world model” of physical reality, a necessary step on the path to AGI.

We’re continuing to explore the model’s capabilities and potential for applications across robotics, augmented reality, and beyond.
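The capabilities described above (point tracking, point cloud reconstruction, camera pose estimation) differ only in which query fields are swept and which are frozen. A hedged sketch of that idea, assuming a hypothetical query tuple of `(u, v, t_src, t_tgt, cam)` and invented helper names; none of this is D4RT’s real interface:

```python
# Illustrative only: each task below is just a different pattern over the
# same (u, v, t_src, t_tgt, cam) query tuple described in the article.

def tracking_queries(u, v, t_src, num_frames, cam=0):
    # Point tracking: fix one source pixel, sweep the target time.
    return [(u, v, t_src, t, cam) for t in range(num_frames)]

def pointcloud_queries(width, height, t, cam=0):
    # Point cloud reconstruction: freeze time and camera, then query
    # every pixel of frame t to recover that moment's full 3D structure.
    return [(u, v, t, t, cam) for v in range(height) for u in range(width)]

def pose_queries(width, height, t, cams):
    # Camera pose estimation: snapshot the same instant from several
    # cameras; aligning the resulting 3D point sets recovers the
    # camera trajectory.
    return {c: pointcloud_queries(width, height, t, cam=c) for c in cams}
```

This is why a single decoder interface can replace a patchwork of task-specific modules: the task is encoded in the shape of the query batch, not in the model.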

Similar Articles

4DThinker: Thinking with 4D Imagery for Dynamic Spatial Understanding

Hugging Face Daily Papers

4DThinker is a new framework that enables vision-language models to perform dynamic spatial reasoning using 4D latent mental imagery. The paper introduces scalable data generation and novel fine-tuning methods, including 4D Reinforcement Learning, to improve model performance on complex dynamic tasks.

Teaching AI to see the world more like we do

Google DeepMind Blog

Google DeepMind published a paper in Nature detailing a method to align AI visual representations with human cognitive structures, improving model robustness and reliability.

Mind DeepResearch Technical Report

Hugging Face Daily Papers

MindDR is a multi-agent deep research framework using a three-agent architecture (Planning, DeepSearch, Report) and a four-stage training pipeline, achieving competitive performance with ~30B-parameter models on multiple benchmarks. Developed by Li Auto and deployed as an online product, it also introduces MindDR Bench, a 500-query Chinese benchmark for evaluating deep research capabilities.

Mapping, modeling, and understanding nature with AI

Google DeepMind Blog

DeepMind announces new AI research applications for conservation, including a high-resolution deforestation risk model, species distribution mapping using Graph Neural Networks, and updates to the Perch bioacoustics model.