@k1rallik: NVIDIA IS LITERALLY GIVING AWAY FREE AI INFERENCE I literally set it up in 5 minutes and couldn't believe it was free D…
Summary
NVIDIA offers free AI inference via DGX Cloud, exposing an OpenAI-compatible API for popular models such as DeepSeek, MiniMax, Kimi, GLM, and Llama; an API key can be claimed in about five minutes.
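Because the API is OpenAI-compatible, any standard chat-completions client can talk to it. A minimal Python sketch, assuming the commonly cited `integrate.api.nvidia.com` base URL and a DeepSeek model identifier (both are assumptions; verify the actual endpoint and model IDs on build.nvidia.com):

```python
import json
import urllib.request

# Assumed base URL for NVIDIA's hosted, OpenAI-compatible API.
BASE_URL = "https://integrate.api.nvidia.com/v1"

def build_chat_request(model: str, prompt: str, api_key: str) -> urllib.request.Request:
    """Build an OpenAI-style /chat/completions request for NVIDIA's hosted API."""
    payload = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }
    return urllib.request.Request(
        f"{BASE_URL}/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# To actually send it (requires a free API key from build.nvidia.com):
# with urllib.request.urlopen(build_chat_request("deepseek-ai/deepseek-r1", "Hi", KEY)) as r:
#     print(json.load(r)["choices"][0]["message"]["content"])
```

The same request shape works with the official `openai` client library by pointing its `base_url` at the hosted endpoint.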
Similar Articles
@dhruvtwt_: Why is no one talking about this? @nvidia is offering around 80 AI models via hosted APIs absolutely for free. You get …
NVIDIA quietly provides roughly 80 free hosted AI model APIs, including MiniMax M2.7, GLM 5.1, Kimi 2.5, DeepSeek 3.2, and GPT-OSS-120B, ready to integrate with popular dev tools such as OpenClaude and Zed IDE.
@cyrilXBT: CHINA JUST BUILT AN AI MODEL THAT IS COMPETING WITH OPENAI AND ANTHROPIC AT A FRACTION OF THE COST. And someone just dr…
DeepSeek, an AI model built by a Chinese quant hedge fund, reportedly competes with GPT-4-level performance at roughly 5% of the training cost, causing significant market disruption, including a $600B drop in NVIDIA's market cap. A free 1-hour-50-minute course has been released teaching users how to run DeepSeek V4 locally and via API.
Advancing Open Source AI, NVIDIA Donates Dynamic Resource Allocation Driver for GPUs to Kubernetes Community
NVIDIA is donating its Dynamic Resource Allocation (DRA) Driver for GPUs to the Cloud Native Computing Foundation (CNCF) and Kubernetes community, moving it from vendor-governed to community-owned. The donation aims to simplify GPU resource management in Kubernetes for AI workloads and includes GPU support for Kata Containers through collaboration with CNCF's Confidential Containers community.
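With DRA, pods request GPUs through the `resource.k8s.io` API instead of the classic `nvidia.com/gpu` extended resource. A minimal sketch, assuming the driver's `gpu.nvidia.com` device class and the `v1beta1` DRA API (field names vary across Kubernetes versions; check the driver's documentation):

```yaml
# ResourceClaimTemplate: each pod referencing it gets its own claim on one GPU.
apiVersion: resource.k8s.io/v1beta1
kind: ResourceClaimTemplate
metadata:
  name: single-gpu
spec:
  spec:
    devices:
      requests:
      - name: gpu
        deviceClassName: gpu.nvidia.com
---
apiVersion: v1
kind: Pod
metadata:
  name: gpu-pod
spec:
  containers:
  - name: cuda
    image: nvidia/cuda:12.4.1-base-ubuntu22.04
    resources:
      claims:
      - name: gpu
  resourceClaims:
  - name: gpu
    resourceClaimTemplateName: single-gpu
```

Compared with the extended-resource model, DRA lets the scheduler negotiate structured device parameters (partitioning, sharing) rather than treating GPUs as opaque countable units.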
@CNET: From the Nvidia GTC Keynote, CEO Jensen Huang talks about the inference inflection point we're at.
During the GTC keynote, NVIDIA CEO Jensen Huang highlighted an inflection point in AI inference; separately, Supermicro is partnering with NVIDIA to deliver turnkey 'AI Factory' infrastructure solutions built around the Blackwell platform.
@CNET: From Nvidia GTC 2026, CEO Jensen Huang talks about investment in AI Natives
Supermicro and NVIDIA unveil turnkey “AI Factory” reference architectures combining Blackwell GPUs, certified servers, networking, storage and deployment services to let enterprises spin up cluster-scale AI infrastructure faster.