@hongming731:
Alibaba released insights on organizational R&D in the AI Native era, pointing out that traditional organizational structures need to shift from accommodating human limitations to adapting to the efficient execution of AI Agents. The article emphasizes that the core bottleneck of AI transformation lies in outdated information formats; implicit experience must be transformed into AI-understandable infrastructure, while preserving the human role in innovation and cultural building.
Alibaba's article on organizational R&D in the AI Native era is well worth reading. It addresses a critical foundational issue: for the past two millennia, organizational structures have been built around human limitations. Humans forget, get tired, misunderstand, and have emotions. The number of people one can stably collaborate with and manage is limited, and information inevitably degrades as it passes between hierarchies. This is why organizations need reporting lines, departmental boundaries, managerial roles, requirement reviews, process approvals, and assorted coordination mechanisms. Many management systems we take for granted are not, at their core, advanced designs but compromises with the limits of human cognitive bandwidth.
With the entry of AI Agents into organizations, this premise begins to loosen. Agents are not ordinary tools. While ordinary tools merely extend human hands and feet, Agents participate in understanding, execution, system invocation, and delivering results. They do not get tired, have no emotions, incur no traditional communication losses, and have almost no context-switching costs. Consequently, many structures in legacy organizations designed around humans are being re-examined.
This does not mean humans will be replaced immediately. A more accurate statement is that the gaps organizations have long papered over with human effort are now being exposed.
Many systems function not because they are truly clear, complete, and structured, but because humans have done too much implicit patching in between. If requirements are incomplete, meetings can be held for clarification. If interface agreements are inconsistent, acquaintances can be consulted for confirmation. If code lacks documentation, experience can be used to guess. If business rules are hidden in the minds of senior employees, they can be filled in through manual communication. These actions are so common that we forget they are costs in themselves.
When AI takes over more execution tasks, the problem becomes acute. AI requires clear context, stable interfaces, executable tests, complete documentation, explicit permissions, and traceable results. Traditional systems do not leave these entry points for AI, so employees instead become human middleware: copying data from systems, pasting it to AI, and then moving the AI output back into the systems. It looks like using AI, but it is actually using humans to compensate for the system's unfriendliness toward AI.
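The "human middleware" pattern has a direct technical remedy: give the system a typed, self-describing entry point that an agent can call, so no one has to copy data in and out of a chat window. A minimal sketch along those lines (all names and the tool-spec shape are hypothetical, not from the article):

```python
from dataclasses import dataclass

@dataclass
class RefundRequest:
    order_id: str       # internal order identifier
    amount_cents: int   # refund amount in cents
    reason: str         # free-text justification, logged for audit

# A machine-readable description of the capability, so an agent can
# discover what the tool does and what parameters it expects.
TOOL_SPEC = {
    "name": "issue_refund",
    "description": "Issue a refund for an order. Fails on invalid amounts.",
    "parameters": {
        "order_id": "string, internal order identifier",
        "amount_cents": "integer, refund amount in cents, must be positive",
        "reason": "string, justification recorded for audit",
    },
}

def issue_refund(req: RefundRequest) -> dict:
    """Single entry point: validate input, act, return a traceable result."""
    if req.amount_cents <= 0:
        return {"ok": False, "error": "amount must be positive"}
    # ... call the real billing system here ...
    return {"ok": True, "order_id": req.order_id, "refunded_cents": req.amount_cents}
```

The point is not the refund logic but the shape: explicit schema, explicit validation, and a structured result an agent (or an audit log) can consume without a human relaying it.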
Therefore, the core bottleneck of AI Native transformation is often not insufficient model capability, but rather the organization's outdated information format. The truly important work is to transform implicit experience, processes, standards, and judgments into infrastructure that AI can understand, invoke, and verify. The 'Harness' mentioned in the article can be understood as the underlying environment that allows Agents to truly do their jobs. It includes testing, documentation, permissions, logging, evaluation, tool interfaces, and incident response. It is not conspicuous, but it is the capital for an organization's future speed.
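The harness idea can be made concrete. A minimal sketch (names are mine, not the article's) of the wrapping layer: every agent action passes through explicit permissions and leaves a traceable log, so the agent works inside boundaries rather than ad hoc:

```python
class Harness:
    """Wraps agent tool calls with permission checks and an audit log."""

    def __init__(self, allowed_tools):
        self.allowed_tools = set(allowed_tools)  # explicit permissions
        self.log = []                            # traceable results

    def run(self, tool_name, tool_fn, *args):
        # Permission check: the agent may only call what it was granted.
        if tool_name not in self.allowed_tools:
            self.log.append(("denied", tool_name))
            raise PermissionError(f"agent may not call {tool_name}")
        # Execute and record, so every outcome is attributable.
        result = tool_fn(*args)
        self.log.append(("ok", tool_name, result))
        return result
```

A real harness would add the other pieces the article lists (tests, evaluation, incident response), but the skeleton is the same: the agent never touches a system except through this layer.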
This also explains why senior engineers and architects will become even more important. In the past, high-value talent was demonstrated by personally solving complex problems; now, higher leverage comes from defining how the system solves problems. They need to write domain experience down as rules, codify failure patterns as tests, turn judgment criteria into reusable evaluations, and transform intuition into executable processes. A good architect is no longer just someone who writes code, but someone who designs the working environment and boundaries of action for a group of Agents.
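One way to read "turning failure patterns into tests": an incident that once required a senior engineer's intuition becomes a regression check that any output, human or agent, must pass. A hedged, illustrative sketch (the rule itself is invented for this example, not from the article):

```python
def check_no_unbounded_retry(config: dict) -> list[str]:
    """Encodes a past failure: uncapped retries once overloaded a service.

    Returns a list of problems; an empty list means the config passes.
    """
    problems = []
    retry = config.get("retry", {})
    # Lesson 1: retries must always be bounded.
    if retry.get("enabled") and "max_attempts" not in retry:
        problems.append("retry enabled without max_attempts")
    # Lesson 2: even bounded retries have a policy ceiling.
    if retry.get("max_attempts", 0) > 10:
        problems.append("max_attempts exceeds policy limit of 10")
    return problems
```

Once the judgment lives in a check like this, it no longer depends on the right person being in the review; it runs on every change.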
At the same time, management will not simply disappear. What will disappear is the large volume of information-relay, coordination, and reporting work. Strategic communication, progress aggregation, resource coordination, and routine decision-making will increasingly be handled by systems. However, motivating people, coaching, resolving conflicts, giving people a secure sense of identity, and building culture still require humans. More importantly, the transformation itself brings real anxiety: when employees distill their experience into organizational assets, they naturally worry about being replaced. This cannot be solved with slogans; it must be met with clear role transitions, benefit sharing, and evaluation mechanisms.
Another caution the article raises is not to push all work toward extreme transparency and complete structuring. Execution work suits transparency: it exposes failures quickly and reduces defensive self-protection. Innovative work needs some protected space. Many truly valuable ideas are fragile, rough, and contrarian at first; exposed too early to unified evaluation and public scrutiny, they are easily worn down. AI is very good at execution and optimization, but it lacks the obsession to grind at a single problem for months. The human creative drive remains the most important fuel for innovation.
So, the mature form of an AI Native organization is likely not a colder, machine-like company, but one where two layers coexist: the bottom layer is highly structured, allowing AI to execute safely, stably, and at high speed; the top layer remains sufficiently open and loose, allowing humans to raise questions, form judgments, take risks, and protect nascent ideas.
The final takeaway for me is that AI's biggest change to organizations is not simply cost reduction and efficiency gains. It forces organizations to answer a deeper question: Can your experience be codified? Can your processes be invoked? Can your judgments be verified? Is your system truly clear? If the answer is no, AI will only amplify the chaos. If the answer is yes, the organization gains a new adaptive speed.
The OpenCLI project proposes an Agent-native design philosophy: the AI agent is the CLI's primary user, and every capability is evaluated by how much it improves agent success rates.
The article shares insights on entrepreneurial dividends in the AI era, emphasizing that understanding industry and production is more critical than mastering AI technology. Companies prioritize actual problem-solving capabilities over the models themselves.
This article summarizes Karpathy’s core points at the Sequoia Ascent conference, highlighting that AI is a paradigm shift restructuring workflows rather than merely an acceleration tool. It introduces the concept of a "jagged edge" for model capabilities based on verifiability and economic viability, and predicts that future software will evolve into an agent-native architecture where LLMs serve as the logic layer and traditional code functions as sensors and actuators.
The article summarizes the current state of the AI data industry, pointing out that the data industry is not yet mature. Anthropic and OpenAI spend over $10 million on a single environment, while Chinese AI labs tend to build rather than buy. In addition, many labs have access to Huawei chips but still crave more Nvidia chips.
This article explores the importance of technical writing in the AI era, citing the case of Anthropic employee @trq212 who achieved millions of page views through his 'plant first, harvest later' writing methodology, emphasizing the value of sharing real experiences and maintaining a personal voice.