@Valley101_Qian: Congrats to Yuandong @tydsh. At the end of our previous interview, the "new direction" he mentioned was officially announced today: neolab Recursive_SI, with $650 million in funding and a valuation of $4.65 billion. Looking forward to more research freedom and research taste in the industry...

X AI KOLs Timeline News

Summary

After being laid off from Meta, Yuandong announced a new direction, raising $650 million to found neolab Recursive_SI with a valuation of $4.65 billion. In an interview, he shared insights on AI trends, LLM limitations, reinforcement learning, and research freedom.

Cached at: 05/14/26, 04:30 AM



TL;DR: Former FAIR Research Director Yuandong Tian, after being laid off from Meta, shared deep insights on AI industry trends, the limitations of LLMs and Scaling Laws, the advantages of reinforcement learning, and the balance between research freedom and engineering.

Layoffs and Mindset: Prepared and Going with the Flow

Before being laid off from Meta AI, Yuandong had already received a job offer and had expressed his intention to leave to his superiors. Having worked at Meta for over a decade, he saw the layoff as an opportunity to “step out and see.” Though the layoff (~600 people) was shocking, it didn’t catch him off guard. He observed an industry trend: “fewer people will be doing AI, but more people will be using AI to explore other things” — as automation increases, repetitive engineering roles shrink while frontier research and vertical application roles grow.

Views on the LLM Path: Interesting, but Not Necessarily Correct

Yuandong believes the large language model (LLM) path is “very interesting” but not necessarily the right direction. The core issue is data efficiency: a human reads roughly 10 billion tokens of text in a lifetime, while LLMs routinely train on 10 trillion or even 30 trillion tokens, a gap of 1,000x or more. Human scientists can make unique discoveries from minimal data, whereas large models require massive numbers of samples. He speculates that future learning may no longer be dominated by gradient descent; a better learning paradigm may yet emerge.
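The token-budget gap above is simple back-of-the-envelope arithmetic (the lifetime figure is the interview's rough estimate, not a measured value):

```python
human_lifetime_tokens = 10e9           # ~10 billion tokens read in a lifetime (rough estimate)
llm_training_tokens = [10e12, 30e12]   # 10T-30T token pretraining corpora

for t in llm_training_tokens:
    # 10T / 10B = 1,000x; 30T / 10B = 3,000x
    print(f"{t / human_lifetime_tokens:,.0f}x more data than a human sees")
```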

The Pessimistic Future of Scaling Law

Scaling Law, in his view, represents a “pessimistic future”: exponential increases in data and compute yield only linear performance gains. Following this path would consume all the earth’s resources. He calls for more efficient methods to develop intelligence, but also acknowledges that even if current models stagnate, there remain many application opportunities over the next three to five years.
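The "exponential inputs, linear gains" claim can be sketched with the standard empirical power-law form for loss versus compute, L(C) = a * C^(-alpha); inverting it shows what each constant-factor loss reduction costs. The exponent below is illustrative, not a fitted value from any specific paper:

```python
# Empirical scaling laws are often fit as a power law in compute:
#   L(C) = a * C**(-alpha)
# Inverting gives the compute needed for a given loss reduction:
#   C2 / C1 = (L1 / L2) ** (1 / alpha)
ALPHA = 0.05  # illustrative exponent; real fits land in this ballpark

def compute_multiplier(loss_ratio: float, alpha: float = ALPHA) -> float:
    """Factor by which compute must grow to divide the loss by `loss_ratio`."""
    return loss_ratio ** (1.0 / alpha)

print(compute_multiplier(1.1))  # ~10% lower loss -> roughly 6.7x the compute
print(compute_multiplier(2.0))  # halving the loss -> roughly a million-fold compute
```

With a small exponent, every fixed improvement in loss multiplies the compute bill, which is exactly the "pessimistic future" being described.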

The Unique Value of Reinforcement Learning (RL)

Yuandong has studied RL for a long time. He believes its greatest advantage is active learning: data generated through search processes is of higher quality than passively fed data. Compared to supervised fine-tuning (SFT), RL excels especially in reasoning tasks and can produce generalization, while SFT tends to lead to memorization and performance degradation. He emphasizes that RL is essentially a different data collection approach (learning while searching), and its core is changing the data distribution rather than differences in objective functions.
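His point that RL and SFT share essentially the same objective (log-likelihood) but differ in where the training data comes from can be illustrated with a toy REINFORCE loop; everything below (the 4-action policy, rewards, learning rates) is an illustrative sketch, not from the interview:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(logits):
    z = np.exp(logits - logits.max())
    return z / z.sum()

def logp_grad(logits, action):
    """Gradient of log p(action) w.r.t. the logits: e_action - p."""
    g = -softmax(logits)
    g[action] += 1.0
    return g

# --- SFT: maximize log-likelihood of actions drawn from a FIXED dataset.
logits_sft = np.zeros(4)
for _ in range(50):
    logits_sft += 0.5 * logp_grad(logits_sft, action=2)  # dataset always says "2"

# --- RL (REINFORCE): the very same log-likelihood gradient, but actions are
# sampled from the CURRENT policy and weighted by reward, so the training
# distribution shifts as the policy improves.
reward = lambda a: 1.0 if a == 2 else 0.0
logits_rl = np.zeros(4)
for _ in range(200):
    p = softmax(logits_rl)
    g = np.zeros(4)
    for _ in range(256):                 # Monte Carlo estimate of the gradient
        a = rng.choice(4, p=p)
        g += reward(a) * logp_grad(logits_rl, a)
    logits_rl += 0.5 * g / 256

print(softmax(logits_sft))  # both concentrate on action 2
print(softmax(logits_rl))   # only the data source differs
```

Both updates use the identical gradient `logp_grad`; what changes is the distribution the actions are sampled from, which is the "RL is a different data collection approach" argument in miniature.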

Open Source vs. Closed Source: Depends on Purpose

He predicts that open-source models will not disappear, but their form will diverge: if models serve as platforms or tool standards, open source has a natural advantage; for personalized search or recommendation, closed source may be preferred. At the frontier, open source can hardly compete directly with closed source, but in vertical domains and specific tasks, open source still has plenty of opportunities — every company and every model can have its own purpose.

AGI and Human Insight

Yuandong has used GPT-5 to co-author papers and found that, without domain knowledge, the model’s planning lacks originality; top-tier insight still requires human guidance. He draws an analogy to autonomous driving: early progress was fast, but as harder problems emerged, high-quality data became scarce and training bottlenecks appeared. Humans’ ability to mine deep insights from limited samples still far surpasses that of models.

Regrets and Gains from FAIR: Balancing Engineering and Research

Reflecting on his FAIR experience: in the early years he was criticized for doing too much engineering (“research scientists shouldn’t just do engineering”), so he shifted toward research; today, engineering skills are welcome again. He believes the optimal state is to have both engineering and research abilities. His biggest gain was developing “research taste” after 2018: the ability to set his own direction and keep moving forward. That taste matters more than simply solving engineering problems.

Scarcity of AI Talent: Find Your Passion, Don’t Chase Hype

Yuandong argues that the AI industry cycle is extremely fast today: what is hot now may be obsolete tomorrow. Rather than chasing market trends, do what you genuinely want to do, combined with your own judgment of what will be useful later. He emphasizes, “Don’t think about what’s scarcest, because the definition may change in two years.” For individuals, find your passion and stick with it; once the market discovers it, the payoff can be huge.

Idealized Research Lab: It Exists, but in Guerrilla Form

Responding to the question of whether an ideal research lab still exists: big companies are not monolithic; many small teams still enjoy research freedom. Even if FAIR becomes less research-oriented due to restructuring, there will still be other organizations or even startups offering space. Research itself is a process of “search,” and the future will look more like “guerrilla warfare” — decentralized, flexible, driven by people with ideals.

Next Steps: Undecided but Aiming High

As of the interview (less than a week after the layoff), Yuandong had not decided his next move. He hopes to find an opportunity that combines frontier research with engineering application, and to set an “impossible goal” and then work backward to find support. He looks forward to building a product that can empower both his own research and broadly benefit others.


Source: YouTube video (https://youtu.be/EsaUQNx59vA?si=zVsXbMeIAnYhBEo6)

Similar Articles

@shao__meng: Tian Yuandong (former Meta FAIR Director) officially announces new company as co-founder: Recursive @Recursive_SI Recursive's mission is to build Recursive Self-Improving Superintelligence (Recursive Self-Improving S…

X AI KOLs Timeline

Former Meta FAIR Director Tian Yuandong, together with several top AI scientists, officially announces new company Recursive, dedicated to building recursive self-improving superintelligence, and has secured over $650 million in funding with a valuation of approximately $4.65 billion.

@FinanceYF5: 6 months. Fewer than 30 people. $4.65 billion valuation. They have only one thing to do: let AI research how to improve itself. Former top researchers from OpenAI, DeepMind, and Meta have collectively left to found Recursive. Reported by the New York Times today. Why is this suddenly possible now?

X AI KOLs Following

Former top researchers from OpenAI, DeepMind, and Meta jointly founded the startup Recursive, focusing on letting AI self-improve. In just 6 months with fewer than 30 people, they achieved a valuation of $4.65 billion.

@FinanceYF5: In this RL environment company market map, Benchflow AI is the only one that: > Only raised an angel round > Has no YC or a16z accelerator background > Has a founding team with no PhDs, lab experience, or traditional academic credentials And is also a solo found…

X AI KOLs Following

Benchflow AI stands out in the RL environment company market map as the only solo-founder company with just angel funding, no YC/a16z backing, and a founding team with no academic credentials — yet it has published two top-tier research papers and received an eight-figure acquisition offer from a unicorn.

@Recursive_SI: https://x.com/Recursive_SI/status/2054490801972166898

X AI KOLs Following

Recursive, an AI startup founded by former research leaders from OpenAI, DeepMind, and others, emerged from stealth with a $650M funding round to develop recursively self-improving AI through open-ended scientific discovery, aiming for superintelligence.