@WinForKakei: Take Tencent as an example. Its 2025 capex came in even below guidance. As the company stated plainly on last year's earnings call, this was because it couldn't buy NVIDIA GPUs (due to AI chip supply constraints) and was unwilling to buy domestic chips. Of course, it compromised this year and has started ordering Kunlun chips. Frankly, Pony Ma is not as Zen, or as content to be a latecomer, as people make him out to be...
Summary
The article discusses Tencent's AI capex constraints due to NVIDIA chip shortages and its recent shift to using Kunlun chips, analyzing the company's valuation and strategic positioning in the AI landscape.
Similar Articles
@dongxi_nlp: A very valuable article; the last six takeaways are worth pondering, especially the final two: 5. The data industry is far from mature. Anthropic and OpenAI spend over $10 million on a single environment, while Chinese AI labs have a 'build rather than buy' mentality. 6. Countless...
The article surveys the AI data industry and argues it is not yet mature: Anthropic and OpenAI spend over $10 million on a single environment, while Chinese AI labs tend to build rather than buy. It also notes that many labs have access to Huawei chips but still want more NVIDIA chips.
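For readers unsure what "an environment" means in this data-industry sense, a minimal sketch may help: it is a task plus a programmatic reward check that an agent can be trained against. The toy below is purely illustrative; the environments the post prices at $10 million wrap entire software stacks, browsers, or codebases rather than a one-line arithmetic check.

```python
# Toy verifiable environment: pose a task, score an agent's answer.
# Everything here is an illustrative assumption, not any lab's actual API.
import random

class ArithmeticEnv:
    """Minimal environment with a programmatically checkable reward."""

    def reset(self) -> str:
        # Sample a new task and return the prompt the agent sees.
        self.a, self.b = random.randint(1, 99), random.randint(1, 99)
        return f"What is {self.a} + {self.b}?"

    def step(self, answer: str) -> float:
        # Verifiable reward: 1.0 if the agent's answer is correct, else 0.0.
        try:
            return 1.0 if int(answer.strip()) == self.a + self.b else 0.0
        except ValueError:
            return 0.0

env = ArithmeticEnv()
prompt = env.reset()
print(prompt, "->", env.step("42"))  # a hard-coded (likely wrong) answer
```

The gap between this toy and a production environment (realistic tasks, robust grading, anti-reward-hacking checks, infrastructure) is where the eight-figure cost the post cites comes from.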
@CNET: From Nvidia GTC 2026, CEO Jensen Huang talks about investment in AI Natives
Supermicro and NVIDIA unveil turnkey “AI Factory” reference architectures combining Blackwell GPUs, certified servers, networking, storage and deployment services to let enterprises spin up cluster-scale AI infrastructure faster.
The AI Gold Rush Just Entered Its Most Dangerous Phase
Google is aggressively challenging Nvidia’s AI chip dominance by opening its TPUs to external customers and targeting the inference market, potentially reshaping the global AI economy.
@CNET: From the Nvidia GTC Keynote, CEO Jensen Huang talks about the inference inflection point we're at.
NVIDIA CEO Jensen Huang highlighted an inflection point in AI inference during the GTC keynote, while Supermicro is partnering with NVIDIA to deliver turnkey 'AI Factory' infrastructure solutions built around the Blackwell platform.
@LinQingV: While exploring LLM inference chip architectures earlier, I reviewed the four major AI inference ASIC companies: Groq, SambaNova, Tenstorrent, and Cerebras. The first three differ in emphasis, but their underlying logic falls within the same framework: large on-chip SRAM + dataflow architecture + deterministic scheduling...
The article analyzes the AI inference ASIC architectures of Groq, SambaNova, Tenstorrent, and Cerebras, highlighting Cerebras's unique wafer-scale engine design. It discusses the benefits of deterministic latency and high bandwidth for LLM inference, while noting challenges like yield, cost, and KV cache bottlenecks.
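The bandwidth and KV cache points are easy to make concrete with arithmetic. Below is a minimal Python sketch; all model dimensions and hardware figures are illustrative assumptions (a 70B-class FP16 model with grouped-query attention, an H100-class memory system), not numbers from the post:

```python
# Back-of-envelope: why LLM decode tends to be memory-bandwidth-bound and
# why the KV cache becomes a bottleneck. All model and hardware numbers
# below are illustrative assumptions, not figures from the post.

# Hypothetical 70B-parameter model in FP16 with grouped-query attention.
n_params        = 70e9
n_layers        = 80
n_kv_heads      = 8
head_dim        = 128
bytes_per_param = 2        # FP16

# Weights streamed from memory for every decoded token (batch size 1).
weight_bytes = n_params * bytes_per_param            # ~140 GB

# KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes.
kv_per_token = 2 * n_layers * n_kv_heads * head_dim * bytes_per_param
print(f"KV cache per token: {kv_per_token / 1e3:.0f} KB")

# Per-sequence KV cache at a 32k-token context.
context  = 32_768
kv_total = kv_per_token * context
print(f"KV cache at 32k context: {kv_total / 1e9:.1f} GB")

# Decode must re-read the weights plus the KV cache for each token, so the
# throughput ceiling is set by memory bandwidth, not FLOPs.
hbm_bandwidth = 3.35e12    # bytes/s, roughly an H100-class HBM part (assumed)
tokens_per_s  = hbm_bandwidth / (weight_bytes + kv_total)
print(f"Bandwidth-bound decode ceiling: ~{tokens_per_s:.0f} tokens/s")
```

This arithmetic is what the SRAM-heavy designs attack: keeping weights and activations in on-chip SRAM (as Groq and Cerebras do) removes the per-token HBM round trip, which is what makes deterministic, low-latency decode plausible, at the cost of the yield, capacity, and cost constraints the post notes.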