<p><span style="font-weight: 400;">Artificial intelligence has rapidly emerged as one of the most critical workloads in modern computing.</span></p>
<p><span style="font-weight: 400;">For the vast majority of enterprises, this workload runs on Kubernetes, an open source platform that automates the deployment, scaling and management of containerized applications.</span></p>
<p><span style="font-weight: 400;">To help the global developer community manage high-performance AI infrastructure with greater transparency and efficiency, NVIDIA is donating a critical piece of software — the </span><a target="_blank" href="https://docs.nvidia.com/datacenter/cloud-native/gpu-operator/latest/dra-intro-install.html"><span style="font-weight: 400;">NVIDIA Dynamic Resource Allocation (DRA) Driver for GPUs</span></a><span style="font-weight: 400;"> — to the Cloud Native Computing Foundation (CNCF), a vendor-neutral organization dedicated to fostering and sustaining the cloud-native ecosystem. </span></p>
<p><span style="font-weight: 400;">Announced today at KubeCon Europe, CNCF’s flagship conference running this week in Amsterdam, the donation moves the driver from vendor governance to full community ownership under the Kubernetes project. This open environment invites a wider circle of experts to contribute ideas, accelerate innovation and help keep the technology aligned with the modern cloud landscape. </span></p>
<p><span style="font-weight: 400;">“NVIDIA’s deep collaboration with the Kubernetes and CNCF community to upstream the NVIDIA DRA Driver for GPUs marks a major milestone for open source Kubernetes and AI infrastructure,” </span><span style="font-weight: 400;">said Chris Aniszczyk, chief technology officer of CNCF. “</span><span style="font-weight: 400;">By aligning its hardware innovations with upstream Kubernetes and AI conformance efforts, NVIDIA is making high-performance GPU orchestration seamless and accessible to all.”</span></p>
<p><span style="font-weight: 400;">In addition, in collaboration with the CNCF’s Confidential Containers community, NVIDIA has introduced GPU support for Kata Containers, lightweight virtual machines that act like containers. This extends hardware acceleration into a stronger isolation boundary, separating workloads for increased security and allowing AI workloads to run with enhanced protection, so organizations can more easily implement confidential computing to safeguard data.</span></p>
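<p><span style="font-weight: 400;">As an illustrative sketch only — the runtime class name, container image and GPU resource key are assumptions that depend on how Kata Containers and the NVIDIA stack are installed in a given cluster — a workload is directed into a Kata lightweight VM by selecting a Kata runtime class in its pod spec:</span></p>

```yaml
# Hypothetical pod that runs a GPU container inside a Kata lightweight VM.
apiVersion: v1
kind: Pod
metadata:
  name: gpu-in-kata
spec:
  runtimeClassName: kata-qemu     # runtime class name varies by installation
  containers:
  - name: cuda-workload
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      limits:
        nvidia.com/gpu: 1         # assumes a GPU device plugin is deployed
```

<p><span style="font-weight: 400;">Everything else about the workload stays container-native; only the runtime class selection changes, which is what makes the stronger VM isolation boundary easy to adopt.</span></p>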
<h2><b>Simplifying AI Infrastructure</b></h2>
<p><span style="font-weight: 400;">Historically, managing the powerful GPUs that fuel AI within data centers required significant effort. </span></p>
<p><span style="font-weight: 400;">This contribution is designed to make high-performance computing more accessible. Key benefits for developers include:</span></p>
<ul>
<li style="font-weight: 400;" aria-level="1"><b>Improved Efficiency:</b><span style="font-weight: 400;"> The driver allows for smarter sharing of GPU resources, delivering more effective use of computing power, with support for </span><a target="_blank" href="https://docs.nvidia.com/deploy/mps/index.html"><span style="font-weight: 400;">NVIDIA Multi-Process Service</span></a><span style="font-weight: 400;"> and </span><a target="_blank" href="https://www.nvidia.com/en-us/technologies/multi-instance-gpu/"><span style="font-weight: 400;">NVIDIA Multi-Instance GPU</span></a><span style="font-weight: 400;"> technologies.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Massive Scale:</b><span style="font-weight: 400;"> It provides native support for connecting multiple systems, including with </span><a target="_blank" href="https://developer.nvidia.com/blog/enabling-multi-node-nvlink-on-kubernetes-for-gb200-and-beyond/"><span style="font-weight: 400;">NVIDIA Multi-Node NVLink</span></a><span style="font-weight: 400;"> interconnect technology. This is essential for training massive AI models on </span><a target="_blank" href="https://www.nvidia.com/en-us/data-center/technologies/blackwell-architecture/"><span style="font-weight: 400;">NVIDIA Grace Blackwell</span></a><span style="font-weight: 400;"> systems and next-generation AI infrastructure.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Flexibility:</b><span style="font-weight: 400;"> Developers can dynamically reconfigure their hardware to suit their needs, changing how resources are allocated on the fly.</span></li>
<li style="font-weight: 400;" aria-level="1"><b>Precision:</b><span style="font-weight: 400;"> The software supports fine-grained requests, allowing users to ask for the specific computing power, memory settings or interconnect arrangement their applications need.</span></li>
</ul>
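<p><span style="font-weight: 400;">As a hedged sketch of what such a fine-grained request can look like — the API version shown and the </span><span style="font-weight: 400;">gpu.nvidia.com</span><span style="font-weight: 400;"> device class name are assumptions that may differ across driver and Kubernetes releases — DRA lets a workload claim a GPU through a ResourceClaim that a pod then consumes:</span></p>

```yaml
# Hypothetical DRA request: claim one GPU from the NVIDIA DRA driver's
# device class, then hand that claim to a pod's container.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaim
metadata:
  name: single-gpu
spec:
  devices:
    requests:
    - name: gpu
      deviceClassName: gpu.nvidia.com   # device class published by the driver
---
apiVersion: v1
kind: Pod
metadata:
  name: dra-gpu-pod
spec:
  resourceClaims:
  - name: gpu                     # pod-local name for the claim
    resourceClaimName: single-gpu # references the ResourceClaim above
  containers:
  - name: app
    image: nvcr.io/nvidia/cuda:12.4.1-base-ubuntu22.04
    command: ["nvidia-smi"]
    resources:
      claims:
      - name: gpu                 # container consumes the claimed device
```

<p><span style="font-weight: 400;">Finer-grained constraints — a minimum GPU memory, a particular partition profile or an interconnect arrangement — are expressed as selectors on the request rather than by hardcoding device names into the pod.</span></p>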
<h2><b>A Collaborative, Industry-Wide Effort</b></h2>
<p><span style="font-weight: 400;">NVIDIA is collaborating with industry leaders — including</span><span style="font-weight: 400;"> Amazon Web Services,</span> <a target="_blank" href="https://blogs.vmware.com/cloud-foundation/2026/03/23/strengthening-the-cloud-native-ecosystem-through-upstream-collaboration/"><span style="font-weight: 400;">Broadcom</span></a><span style="font-weight: 400;">, </span><a target="_blank" href="https://ubuntu.com/blog/canonical-nvidia-kubecon-2026"><span style="font-weight: 400;">Canonical</span></a><span style="font-weight: 400;">,</span> <a target="_blank" href="https://cloud.google.com/blog/products/containers-kubernetes/gke-and-oss-innovation-at-kubecon-eu-2026"><span style="font-weight: 400;">Google Cloud</span></a><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Microsoft</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Nutanix</span><span style="font-weight: 400;">, </span><span style="font-weight: 400;">Red Hat</span><span style="font-weight: 400;"> and </span><a target="_blank" href="http://suse.com/c/the-power-of-community-for-enterprise-ai"><span style="font-weight: 400;">SUSE</span></a><span style="font-weight: 400;"> — to drive these features forward for the benefit of the entire cloud-native ecosystem.</span></p>
<p><span style="font-weight: 400;">“Open source will be at the core of every successful enterprise AI strategy, bringing standardization to the high-performance infrastructure components that fuel production AI workloads,”</span><span style="font-weight: 400;"> said Chris Wright, chief technology officer and senior vice president of global engineering at Red Hat</span><span style="font-weight: 400;">. “NVIDIA’s donation of the NVIDIA DRA Driver for GPUs helps to cement the role of open source in AI’s evolution, and we look forward to collaborating with NVIDIA and the broader community within the Kubernetes ecosystem.”</span></p>
<p><span style="font-weight: 400;">“Open source software and the communities that sustain it are a cornerstone of the infrastructure used for scientific computing and research,”</span><span style="font-weight: 400;"> said Ricardo Rocha, lead of platforms infrastructure at CERN</span><span style="font-weight: 400;">. “For organizations like CERN, where efficiently analyzing petabytes of data is essential to discovery, community-driven innovation helps accelerate the pace of science. NVIDIA’s donation of the DRA Driver strengthens the ecosystem researchers rely on to process data across both traditional scientific computing and emerging machine learning workloads.”</span></p>
<h2><b>Expanding the Open Source Horizon</b></h2>
<p><span style="font-weight: 400;">This donation is just part of NVIDIA’s broader initiatives to support the open source community. For example, </span><a target="_blank" href="https://github.com/NVIDIA/NVSentinel"><span style="font-weight: 400;">NVSentinel</span></a><span style="font-weight: 400;"> — a system for GPU fault remediation — and </span><a target="_blank" href="https://github.com/NVIDIA/aicr"><span style="font-weight: 400;">AI Cluster Runtime</span></a><span style="font-weight: 400;">, an agentic AI framework, were announced at GTC last week.</span></p>
<p><span style="font-weight: 400;">In addition, NVIDIA </span><a target="_blank" href="https://nvidianews.nvidia.com/news/nvidia-announces-nemoclaw"><span style="font-weight: 400;">announced at GTC new open source projects</span></a><span style="font-weight: 400;"> including the </span><a target="_blank" href="https://github.com/NVIDIA/NemoClaw"><span style="font-weight: 400;">NVIDIA NemoClaw</span></a><span style="font-weight: 400;"> reference stack and </span><a target="_blank" href="https://github.com/NVIDIA/OpenShell"><span style="font-weight: 400;">NVIDIA OpenShell</span></a><span style="font-weight: 400;"> runtime for securely running autonomous agents. OpenShell provides fine-grained, programmable security and privacy policy controls, and integrates natively with Linux, eBPF and Kubernetes.</span></p>
<p><span style="font-weight: 400;">NVIDIA also today announced that its high-performance AI workload scheduler, the KAI Scheduler, has been onboarded as a CNCF Sandbox project — a key step toward fostering broader collaboration and ensuring the technology evolves alongside the needs of the wider cloud-native ecosystem. Developers and organizations can </span><a target="_blank" href="https://github.com/kai-scheduler/KAI-Scheduler"><span style="font-weight: 400;">use and contribute to the KAI Scheduler today</span></a><span style="font-weight: 400;">.</span></p>
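<p><span style="font-weight: 400;">As a rough sketch of how a workload opts into KAI scheduling — the scheduler name </span><span style="font-weight: 400;">kai-scheduler</span><span style="font-weight: 400;"> and the queue label key are assumptions that may differ by release and deployment — a pod selects the scheduler and declares its queue:</span></p>

```yaml
# Hypothetical pod that opts into the KAI Scheduler and joins a queue.
apiVersion: v1
kind: Pod
metadata:
  name: training-job
  labels:
    kai.scheduler/queue: team-a   # queue label key is an assumption
spec:
  schedulerName: kai-scheduler    # use KAI instead of the default scheduler
  containers:
  - name: trainer
    image: nvcr.io/nvidia/pytorch:24.08-py3
    resources:
      limits:
        nvidia.com/gpu: 1
```

<p><span style="font-weight: 400;">Routing pods through a queue-aware scheduler is what enables fairness, quotas and gang scheduling across teams sharing the same GPU cluster.</span></p>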
<p><span style="font-weight: 400;">NVIDIA remains committed to actively maintaining and contributing to Kubernetes and CNCF projects to help meet the rigorous demands of enterprise AI customers. </span></p>
<p><span style="font-weight: 400;">In addition, following the release of </span><a target="_blank" href="https://github.com/ai-dynamo/dynamo"><span style="font-weight: 400;">NVIDIA Dynamo</span></a><span style="font-weight: 400;"> 1.0</span><span style="font-weight: 400;">, NVIDIA is expanding the Dynamo ecosystem with </span><a target="_blank" href="https://github.com/ai-dynamo/grove"><span style="font-weight: 400;">Grove</span></a><span style="font-weight: 400;">, an open source Kubernetes application programming interface for orchestrating AI workloads on GPU clusters. Grove, which enables developers to express complex inference systems in a single declarative resource, is being integrated with the llm-d inference stack for wider adoption in the Kubernetes community. </span></p>
<p><i><span style="font-weight: 400;">Developers and organizations can begin using and contributing to the </span></i><a target="_blank" href="https://github.com/NVIDIA/k8s-dra-driver-gpu"><i><span style="font-weight: 400;">NVIDIA DRA Driver today</span></i></a><i><span style="font-weight: 400;">.</span></i></p>
<p><i><span style="font-weight: 400;">Visit the </span></i><a target="_blank" href="https://www.nvidia.com/en-eu/events/kubecon-cloudnativecon-europe/"><i><span style="font-weight: 400;">NVIDIA booth at KubeCon</span></i></a><i><span style="font-weight: 400;"> to see live demos of this technology in action.</span></i></p>
# Advancing Open Source AI, NVIDIA Donates Dynamic Resource Allocation Driver for GPUs to Kubernetes Community
Source: [https://blogs.nvidia.com/blog/nvidia-at-kubecon-2026/](https://blogs.nvidia.com/blog/nvidia-at-kubecon-2026/)