@WinForKakei: Let me use Tencent as an example. Tencent's 2025 capex is even lower than guidance. As clearly stated in last year's earnings call, this is because they couldn't buy NVIDIA GPUs (due to AI chip supply constraints) and were unwilling to buy domestic chips. Of course, they compromised this year and have started ordering Kunlun chips. Actually, Pony Ma is not as Zen or content with being a latecomer as people say...

X AI KOLs Following News

Summary

The article discusses Tencent's AI capex constraints due to NVIDIA chip shortages and its recent shift to using Kunlun chips, analyzing the company's valuation and strategic positioning in the AI landscape.

Let me use Tencent as an example. Tencent's 2025 capex is lower than guidance. As explained clearly in last year's earnings call, this was because they couldn't procure NVIDIA GPUs (due to AI chip supply constraints) and were unwilling to purchase domestic alternatives. Of course, they compromised this year and have begun ordering Kunlun chips. In reality, Pony Ma isn't as Zen, or as content with being a "late mover," as the public perceives. According to internal management and employees, Mr. Ma was quite anxious, and only felt somewhat relieved after the release of Hunyuan 3.0.

At the previous trough, Tencent's core-business valuation dropped to around 10x PE. At that time it faced pressure from non-market factors such as the A4 paper regulatory incident and rampant rumors, and revenue and profit growth had turned negative, leaving the future highly uncertain. The current fundamentals are much clearer than in 2022: games are thriving both domestically and internationally, there is still massive room to raise the monetization rate of Video Accounts, and the advertising and cloud businesses align well with the capex. The market's only concern is that falling slightly behind on large-model products could destroy value. As an aside, the gap in large models is likely smaller than most people imagine; the fact that Luo Fuli could develop MiMo illustrates this point, and the CEO of Anthropic has publicly stated that other AI labs' capabilities lag by only about 1-3 months.

Of course, Tencent currently trades at more than 10x PE. Valuing the core business at 10x this year's PE gives 2.2 trillion RMB. Adding equity stakes and cash (another 1.2 trillion RMB) discounted at 80% contributes roughly 1 trillion RMB more, for a total valuation of about 3.2 trillion RMB, which corresponds to a stock price of 403. If the market trades at this price, I would be very happy to buy significantly more. #Tencent
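The sum-of-parts arithmetic above can be checked with a few lines. All inputs are the author's own figures (in trillion RMB); only the rounding step is made explicit:

```python
# Sum-of-parts sketch of the post's Tencent valuation, trillion RMB.
core_value = 2.2            # core business at 10x this year's PE (author's figure)
investments_and_cash = 1.2  # equity stakes plus cash (author's figure)
discount = 0.80             # the author's haircut on equity and cash

total = core_value + investments_and_cash * discount
print(f"total ≈ {total:.2f} trillion RMB")  # ≈ 3.16, rounded to ~3.2 in the post
```

The 1.2 trillion at an 80% haircut contributes 0.96 trillion, which is the "roughly 1 trillion RMB" in the text; the headline 3.2 trillion is the rounded sum.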

Similar Articles

@dongxi_nlp: A very valuable article, the last 6 takeaways are worth pondering. Among them, the last two: 5. The data industry is far from developed. Anthropic and OpenAI spend over $10 million on a single environment, while Chinese AI labs have a 'build rather than buy' mentality. 6. Countless...

X AI KOLs Timeline

The article summarizes the current state of the AI data industry, pointing out that it is not yet mature: Anthropic and OpenAI spend over $10 million on a single environment, while Chinese AI labs tend to build rather than buy. In addition, many labs have access to Huawei chips but still crave more NVIDIA chips.

@LinQingV: When exploring LLM inference chip architectures previously, I reviewed the architectures of the four major AI inference ASIC companies: Groq, SambaNova, Tenstorrent, and Cerebras. While the first three have different emphases, their underlying logic falls within the same framework: large on-chip SRAM + dataflow architecture + deterministic scheduling...

X AI KOLs Timeline

The article analyzes the AI inference ASIC architectures of Groq, SambaNova, Tenstorrent, and Cerebras, highlighting Cerebras's unique wafer-scale engine design. It discusses the benefits of deterministic latency and high bandwidth for LLM inference, while noting challenges like yield, cost, and KV cache bottlenecks.
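The KV cache bottleneck mentioned above can be made concrete with a back-of-the-envelope calculation. The sketch below assumes a hypothetical 7B-class transformer configuration (32 layers, hidden size 4096, fp16 values, no grouped-query attention); none of these numbers come from the article:

```python
def kv_cache_bytes(n_layers: int, hidden_dim: int, seq_len: int,
                   bytes_per_value: int = 2) -> int:
    """KV cache size: K and V each store hidden_dim values per token
    per layer (assumes full multi-head attention, fp16 by default)."""
    return 2 * n_layers * hidden_dim * seq_len * bytes_per_value

# Hypothetical 7B-class model: 32 layers, hidden size 4096, 4096-token context.
cache = kv_cache_bytes(n_layers=32, hidden_dim=4096, seq_len=4096)
print(cache / 2**30, "GiB")  # 2.0 GiB for a single sequence
```

At roughly 2 GiB per long sequence, the cache alone dwarfs the hundreds of megabytes of on-chip SRAM these ASICs carry, which is why KV cache placement and streaming become the binding constraint for SRAM-centric dataflow designs.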