@astaxie: Today the group chat was discussing how to learn about harnesses. For harness engineering I'm studying these two: 1. https://github.com/walkinglabs/learn-harness-engineering… Through this, understand each of the harness's…
Summary
A project-based course repository on Harness Engineering for AI coding agents, covering environment setup, state management, verification, and control mechanisms to make AI coding agents work reliably. The course synthesizes best practices from OpenAI and Anthropic on building effective harnesses for long-running agents.
Cached full text · cached: 2026/05/09 11:48
Today the group chat was discussing how to learn about harnesses. For harness engineering I'm studying these two: 1. https://github.com/walkinglabs/learn-harness-engineering… — use it to understand each of the harness's core mechanisms. 2. https://github.com/badlogic/pi-mono… — study how each module of this framework is designed and implemented, and have the AI explain the implementation logic for anything you don't understand.
walkinglabs/learn-harness-engineering
Source: https://github.com/walkinglabs/learn-harness-engineering
English · 中文 · Русский · Tiếng Việt · 한국어
Learn Harness Engineering
A project-based course on building the environment, state management, verification, and control mechanisms that make AI coding agents work reliably.
Learn Harness Engineering is a course dedicated to the engineering of AI coding agents. We have deeply studied and synthesized the most advanced Harness Engineering theories and practices in the industry. Our core references include:
- OpenAI: Harness engineering: leveraging Codex in an agent-first world
- Anthropic: Effective harnesses for long-running agents
- Anthropic: Harness design for long-running application development
- Awesome Harness Engineering
Quick start? The `skills/harness-creator` skill can help you scaffold a production-grade harness (AGENTS.md, feature lists, init.sh, verification workflows) for your own project in minutes.
Table of Contents
- ✨ Visual Preview
- What Harness Engineering Actually Means
- Quick Start: Improve Your Agent Today
- Capstone Project: A Real App
- Learning Path
- Syllabus
- Skills
- Other Courses
✨ Visual Preview
🏠 Course Homepage
A comprehensive course outline and introduction to core philosophies, providing a clear path to get started.

📖 Immersive Lectures
Deep dives into real-world pain points and hands-on projects (like Project 01) for an immersive learning experience.

🗂️ Ready-to-Use Resource Library
Templates and reference configurations designed to solve common pitfalls in multi-turn AI agent development, such as context loss and premature task completion.

PDF Coursebooks
The repository now includes a PDF build pipeline for the course content.
- Run `npm run pdf:build` to generate English and Chinese PDFs locally.
- Output files are written to `artifacts/pdfs/`.
- Run `npm run screenshots:readme` if you want to refresh the README preview images.
- The GitHub Actions workflow `release-course-pdfs.yml` can build the PDFs and publish them to GitHub Releases.
The Model Is Smart, The Harness Makes It Reliable
There’s a hard truth most people learn the hard way: the strongest model in the world will still fail on real engineering tasks if you don’t build a proper environment around it.
You’ve probably seen this yourself. You give Claude or GPT a task in your repo. It starts well — reads files, writes code, looks productive. Then something goes wrong. It skips a step. It breaks a test. It says “done” but nothing actually works. You spend more time cleaning up than if you’d done it yourself.
This isn’t a model problem. It’s a harness problem.
The evidence is clear. Anthropic ran a controlled experiment: same model (Opus 4.5), same prompt (“build a 2D retro game editor”). Without a harness, it spent $9 in 20 minutes and produced something that didn’t work. With a full harness (planner + generator + evaluator), it spent $200 in 6 hours and built a game you could actually play. The model didn’t change. The harness did.
OpenAI reported the same thing with Codex: in a well-harnessed repository, the same model goes from “unreliable” to “reliable.” Not a marginal improvement — a qualitative shift.
This course teaches you how to build that environment.
THE HARNESS PATTERN
====================
You --> give task --> Agent reads harness files --> Agent executes
|
harness governs every step:
|
+--> Instructions: what to do, in what order
+--> Scope: one feature at a time, no overreach
+--> State: progress log, feature list, git history
+--> Verification: tests, lint, type-check, smoke runs
+--> Lifecycle: init at start, clean state at end
|
v
Agent stops only when
verification passes
What Harness Engineering Actually Means
Harness engineering is about building a complete working environment around the model so it produces reliable results. It’s not about writing better prompts. It’s about designing the system the model operates inside.
A harness has five subsystems:
┌─────────────────────────────────────────────────────────────────┐
│ THE HARNESS │
│ │
│ ┌──────────────┐ ┌──────────────┐ ┌──────────────────────┐ │
│ │ Instructions │ │ State │ │ Verification │ │
│ │ │ │ │ │ │ │
│ │ AGENTS.md │ │ progress.md │ │ tests + lint │ │
│ │ CLAUDE.md │ │ feature_list │ │ type-check │ │
│ │ feature_list │ │ git log │ │ smoke runs │ │
│ │ docs/ │ │ session hand │ │ e2e pipeline │ │
│ └──────────────┘ └──────────────┘ └──────────────────────┘ │
│ │
│ ┌──────────────┐ ┌──────────────────────────────────────┐ │
│ │ Scope │ │ Session Lifecycle │ │
│ │ │ │ │ │
│ │ one feature │ │ init.sh at start │ │
│ │ at a time │ │ clean-state checklist at end │ │
│ │ definition │ │ handoff note for next session │ │
│ │ of done │ │ commit only when safe to resume │ │
│ └──────────────┘ └──────────────────────────────────────┘ │
│ │
└─────────────────────────────────────────────────────────────────┘
The MODEL decides what code to write.
The HARNESS governs when, where, and how it writes it.
The harness doesn't make the model smarter.
It makes the model's output reliable.
Each subsystem has one job:
- Instructions — Tell the agent what to do, in what order, and what to read before starting. Not one giant file; a progressive disclosure structure the agent navigates on demand.
- State — Track what’s been done, what’s in progress, and what’s next. Persisted to disk so the next session picks up exactly where the last one left off.
- Verification — Only a passing test suite counts as evidence. The agent cannot declare victory without runnable proof.
- Scope — Constrain the agent to one feature at a time. No overreach. No half-finishing three things. No rewriting the feature list to hide unfinished work.
- Session Lifecycle — Initialize at the start. Clean up at the end. Leave a clean restart path for the next session.
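To make the State and Scope subsystems concrete, here is a minimal sketch of how an agent session might enforce "one feature at a time" from a machine-readable feature list. The file name matches the course's quick-start layout, but the schema and the selection logic are assumptions for illustration, not the course's prescribed format:

```shell
# Hypothetical sketch: enforce "one feature at a time" from a feature list.
# The JSON schema below is an assumption, not the course's exact template.
cat > feature_list.json <<'EOF'
[
  { "id": "import-docs", "status": "done" },
  { "id": "index-docs",  "status": "in_progress" },
  { "id": "grounded-qa", "status": "todo" }
]
EOF

# Pick exactly ONE unfinished feature: resume in_progress work first,
# otherwise take the next todo item. Uses node since the repo is npm-based.
next_feature=$(node -e '
  const list = require("./feature_list.json");
  const next = list.find(f => f.status === "in_progress")
            ?? list.find(f => f.status === "todo");
  if (next) console.log(next.id);
')
echo "work only on: $next_feature"
```

Because the list lives on disk, the same selection runs identically in the next session: scope is a property of the repo, not of the prompt.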
Why This Course Exists
The question isn’t “can models write code?” They can. The question is: can they reliably complete real engineering tasks inside real repositories, over multiple sessions, without constant human supervision?
Right now, the answer is: not without a harness.
WITHOUT HARNESS
===============
Session 1: agent writes code, breaks tests, says "done"; you fix it manually.
Session 2: agent starts fresh with no memory of what happened before;
           it re-does work or does something else entirely; you fix it again.
Result:    you spend more time cleaning up than if you did it yourself.

WITH HARNESS
============
Session 1: agent reads instructions, runs init.sh, works on one feature,
           verifies before claiming done, updates the progress log,
           and commits a clean state.
Session 2: agent reads the progress log, picks up exactly where it left off,
           and continues the unfinished feature; you review, not rescue.
Result:    the agent does the work; you verify the result.
The questions this course actually cares about:
- Which harness designs improve task completion rates?
- Which designs reduce rework and incorrect completions?
- Which mechanisms keep long-running tasks progressing steadily?
- Which structures keep the system maintainable after multiple agent runs?
Course Curriculum & Documentation
For the full course materials, please visit the Documentation Website.
The curriculum is divided into three parts:
- Lectures: 12 conceptual units explaining the theory behind harness engineering.
- Projects: 6 hands-on projects where you build an agentic workspace from scratch.
- Resource Library: Copy-ready templates (`AGENTS.md`, `feature_list.json`, `init.sh`, etc.) to use in your own repositories today.
Quick Start: Improve Your Agent Today
You don’t need to read all 12 lectures before you start getting value. If you’re already using a coding agent on a real project, here’s how to improve it right now.
The idea is simple: instead of just writing prompts, give your agent a set of structured files that define what to do, what’s been done, and how to verify the work. These files live inside your repo, so every session starts from the same state.
YOUR PROJECT ROOT
├── AGENTS.md <-- the agent's operating manual
├── CLAUDE.md <-- (alternative, if using Claude Code)
├── init.sh <-- runs install + verify + start
├── feature_list.json <-- what features exist, which are done
├── claude-progress.md <-- what happened each session
└── src/ <-- your actual code
Grab the starter templates from the Resource Library and drop them into your project. That’s it. Four files, and your agent sessions will already be significantly more stable than running on prompts alone.
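If you want to write the init script yourself before grabbing the templates, the core of it is a simple gate: verify the environment is healthy before the agent starts work, so a broken baseline never gets blamed on the agent. A minimal sketch, where the verification command is a placeholder for your real suite (the function name is ours, not the course's):

```shell
# Hypothetical init.sh sketch: refuse to start a session on a broken baseline.
init_baseline() {
  local verify_cmd=$1
  echo "[init] verifying baseline with: $verify_cmd"
  if $verify_cmd; then
    echo "[init] baseline healthy -- safe to start the session"
  else
    echo "[init] baseline broken -- fix before letting the agent work" >&2
    return 1
  fi
}

# In a real init.sh you would pass your full check, e.g. "npm test".
init_baseline "node --version"
```

A real init.sh would also install dependencies and run lint and type-checks, but the shape stays the same: exit non-zero and stop the session unless every check passes.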
Capstone Project: A Real App
All six course projects revolve around the same product: an Electron-based personal knowledge base desktop app.
┌─────────────────────────────────────────────────────┐
│ Knowledge Base Desktop App │
│ │
│ ┌──────────────┐ ┌──────────────────────────────┐│
│ │ Document List │ │ Q&A Panel ││
│ │ │ │ ││
│ │ doc-001.md │ │ Q: What is harness eng? ││
│ │ doc-002.md │ │ A: The environment built ││
│ │ doc-003.md │ │ around an agent model... ││
│ │ ... │ │ [citation: doc-002.md] ││
│ └──────────────┘ └──────────────────────────────┘│
│ │
│ ┌─────────────────────────────────────────────────┐│
│ │ Status Bar: 42 docs | 38 indexed | last sync 3m ││
│ └─────────────────────────────────────────────────┘│
└─────────────────────────────────────────────────────┘
Core features:
├── Import local documents
├── Manage a document library
├── Process and index documents
├── Run AI-powered Q&A over imported content
└── Return grounded answers with citations
This project was chosen because it combines strong practical value, enough real-world product complexity, and a good setting for observing before/after harness improvements.
Each course project’s starter/solution is a complete copy of this Electron app at that evolutionary stage. P(N+1)’s starter is derived from P(N)’s solution — the app evolves as your harness skills grow.
Learning Path
The course is designed to be done in order. Each phase builds on the last.
Phase 1: SEE THE PROBLEM
  L01 Strong models ≠ reliable execution
  L02 What a harness actually means
  P01 Prompt-only vs. rules-first comparison

Phase 2: STRUCTURE THE REPO
  L03 Repository as single source of truth
  L04 Split instructions across files, not one giant file
  P02 Agent-readable workspace

Phase 3: CONNECT SESSIONS
  L05 Keep context alive across sessions
  L06 Initialize before every agent session
  P03 Multi-session continuity

Phase 4: FEEDBACK & SCOPE
  L07 Draw clear task boundaries
  L08 Feature lists as harness primitives
  P04 Runtime feedback to correct agent behavior

Phase 5: VERIFICATION
  L09 Stop agents from declaring victory early
  L10 Full-pipeline run = real verification
  P05 Agent verifies its own work

Phase 6: PUT IT ALL TOGETHER
  L11 Make the agent's runtime observable
  L12 Clean handoff at the end of every session
  P06 Build a complete harness (capstone project)
Each phase takes about a week if you’re going part-time. If you want to go faster, phases 1–3 can be done in a long weekend.
Syllabus
Lectures — 12 conceptual units, each answering one core question
Read the full text for each lecture on the Documentation Website.
| Session | Question | Core Idea |
|---|---|---|
| L01 | Why do strong models still fail on real tasks? | The capability gap between benchmarks and real engineering |
| L02 | What does “harness” actually mean? | Five subsystems: instructions, state, verification, scope, lifecycle |
| L03 | Why must the repo be the single source of truth? | If the agent can’t see it, it doesn’t exist |
| L04 | Why does one giant instruction file fail? | Progressive disclosure: give a map, not an encyclopedia |
| L05 | Why do long-running tasks lose continuity? | Persist progress to disk; pick up where you left off |
| L06 | Why does initialization need its own phase? | Verify the environment is healthy before the agent starts work |
| L07 | Why do agents overreach and under-finish? | One feature at a time; explicit definition of done |
| L08 | Why are feature lists harness primitives? | Machine-readable scope boundaries the agent can’t ignore |
| L09 | Why do agents declare victory too early? | Verification gaps: confidence ≠ correctness |
| L10 | Why does end-to-end testing change results? | Only a full-pipeline run counts as real verification |
| L11 | Why does observability belong inside the harness? | If you can’t see what the agent did, you can’t fix what it broke |
| L12 | Why must every session leave a clean state? | The next session’s success depends on this session’s cleanup |
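The persistence idea behind L05 ("persist progress to disk") and L12 ("leave a clean state") can be sketched as a tiny append-only log. The file name `claude-progress.md` comes from the course's quick-start layout; the entry format and function name here are assumptions for illustration:

```shell
# Hypothetical sketch: append a structured session entry to the progress log
# so the next session can resume without re-deriving context.
log_session() {
  local feature=$1 status=$2 evidence=$3
  {
    echo "## Session $(date -u +%Y-%m-%dT%H:%MZ)"
    echo "- feature: $feature"
    echo "- status: $status"
    echo "- evidence: $evidence"
    echo
  } >> claude-progress.md
}

log_session "index-docs" "verified" "npm test: 38 passing"
tail -n 5 claude-progress.md
```

The point is that the log records evidence, not just claims: "npm test: 38 passing" is something the next session (or you) can re-run and check.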
Projects — 6 hands-on projects applying lecture methods to the same Electron app
| Project | What You Do | Harness Mechanism |
|---|---|---|
| P01 | Run the same task twice: prompt-only vs. rules-first | Minimal harness: AGENTS.md + init.sh + feature_list.json |
| P02 | Restructure the repo so the agent can read it | Agent-readable workspace + persistent state files |
| P03 | Make the agent pick up from where it left off | Progress log + session handoff + multi-session continuity |
| P04 | Stop the agent from doing too much or too little | Runtime feedback + scope control + incremental indexing |
| P05 | Make the agent verify its own work | Self-verification + grounded Q&A + evidence-based completion |
| P06 | Build a complete harness from scratch (capstone) | Full harness: all mechanisms + observability + ablation study |
PROJECT EVOLUTION
=================
P01 Prompt-only vs. rules-first You see the problem
|
v
P02 Agent-readable workspace You restructure the repo
|
v
P03 Multi-session continuity You connect sessions
|
v
P04 Runtime feedback & scope You add feedback loops
|
v
P05 Self-verification You make the agent check itself
|
v
P06 Complete harness (capstone) You build the full system
Each project's solution becomes the next project's starter.
The app evolves. Your harness skills grow with it.
Resource Library
- English Resource Library — templates, checklists, and method references
- Chinese Resource Library — 中文模板、清单和方法参考
- Russian Resource Library — шаблоны, чек-листы и справочники
- Vietnamese Resource Library — mẫu, danh sách kiểm tra và tài liệu tham khảo
The Agent Session Lifecycle
One of the core ideas in this course: the agent’s session should follow a structured lifecycle, not a free-for-all. Here’s what that looks like:
AGENT SESSION LIFECYCLE
======================
┌──────────────────────────────────────────────────────────────────┐
│ START │
│ │
│ 1. Agent reads AGENTS.md / CLAUDE.md │
│ 2. Agent runs init.sh (install, verify, health check) │
│ 3. Agent reads claude-progress.md (what happened last time) │
│ 4. Agent reads feature_list.json (what's done, what's next) │
│ 5. Agent checks git log (recent changes) │
│ │
│ SELECT │
│ │
│ 6. Agent picks exactly ONE unfinished feature │
│ 7. Agent works only on that feature │
│ │
│ EXECUTE │
│ │
│ 8. Agent implements the feature │
│ 9. Agent runs verification (tests, lint, type-check) │
│ 10. If verification fails: fix and re-run │
│ 11. If verification passes: record evidence │
│ │
│ WRAP UP │
│ │
│ 12. Agent updates claude-progress.md │
│ 13. Agent updates feature_list.json │
│ 14. Agent records what's still broken or unverified │
│ 15. Agent commits (only when safe to resume) │
│ 16. Agent leaves clean restart path for next session │
│ │
└──────────────────────────────────────────────────────────────────┘
The harness governs every transition in this lifecycle.
The model decides what code to write at each step.
Without the harness, step 9 becomes "agent says it looks fine."
With the harness, step 9 is "tests pass, lint is clean, types check."
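That gate at step 9 can be sketched as a small function that runs every check and only lets the agent claim "done" when all of them pass. The check commands below are placeholders for your real tests, lint, and type-check; this is our sketch, not the course's template:

```shell
# Hypothetical sketch of the step-9 gate: run every check, report each
# failure, and only return success when ALL checks pass.
verify() {
  local failed=0
  for check in "$@"; do
    echo "[verify] running: $check"
    if ! $check; then
      echo "[verify] FAILED: $check" >&2
      failed=1
    fi
  done
  return $failed
}

# In a real harness: verify "npm test" "npm run lint" "npm run typecheck"
if verify "true" "node --version"; then
  echo "verification passed -- feature may be marked done"
else
  echo "verification failed -- keep fixing, do not claim done" >&2
fi
```

Note that the loop runs every check even after the first failure, so one session surfaces all problems instead of fixing them one re-run at a time.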
Who This Is For
This course is for:
- Engineers already using coding agents who want better stability and quality
- Researchers or builders who want a systematic understanding of harness design
- Tech leads who need to understand how environment design affects agent performance
This course is not for:
- People looking for a zero-code AI introduction
- People who only care about prompts and don’t plan to build real implementations
- Learners not prepared to let agents work inside real repositories
Requirements
This is a course where you actually run coding agents.
You need at least one of these tools:
- Claude Code
- Codex
- Another IDE or CLI coding agent that supports file editing, command execution, and multi-step tasks
The course assumes you can:
- Open a local repository
- Allow the agent to edit files
- Allow the agent to run commands
- Inspect output and re-run tasks
If you don’t have such a tool, you can still read the course content, but you won’t be able to complete the projects as intended.
Local Preview
This repository uses VitePress as a documentation viewer.
npm install
npm run docs:dev # Dev server with hot reload
npm run docs:build # Production build
npm run docs:preview # Preview built site
Then open the local URL that VitePress outputs in your browser.
Prerequisites
Required:
- Familiarity with the terminal, git, and local development environments
- Ability to read and write code in at least one common application stack
- Basic software debugging experience (reading logs, tests, and runtime behavior)
- Enough time to commit to implementation-focused coursework
Helpful but not required:
- Experience with Electron, desktop apps, or local-first tools
- Background in testing, logging, or software architecture
- Prior exposure to Codex, Claude Code, or similar coding agents
Core References
Primary:
- OpenAI: Harness engineering: leveraging Codex in an agent-first world
- Anthropic: Effective harnesses for long-running agents
- Anthropic: Harness design for long-running application development
- OpenAI: Unrolling the Codex agent loop
- Anthropic: Demystifying evals for AI agents
- LangChain: Improving Deep Agents with harness engineering
- Thoughtworks / Martin Fowler: Harness engineering for coding agent users
- Cursor: Continually improving our agent harness
See the full layered reference list in docs/en/resources/reference/.
Repository Structure
learn-harness-engineering/
├── docs/ # VitePress documentation site
│ ├── lectures/ # 12 lectures (index.md + code/ examples)
│ │ ├── lecture-01-*/
│ │ ├── lecture-02-*/
│ │ └── ... (12 total)
│ ├── projects/ # 6 project descriptions
│ │ ├── project-01-*/
│ │ └── ... (6 total)
│ └── resources/ # Multilingual templates & references
│ ├── en/ # English templates, checklists, guides
│ ├── zh/ # Chinese templates, checklists, guides
│ ├── ru/ # Russian templates, checklists, guides
│ └── vi/ # Vietnamese templates, checklists, guides
├── projects/
│ ├── shared/ # Shared Electron + TypeScript + React foundation
│ └── project-NN/ # Per-project starter/ and solution/ directories
├── skills/ # Reusable AI agent skills
│ └── harness-creator/ # Harness engineering skill
├── package.json # VitePress + dev tooling
└── CLAUDE.md # Claude Code instructions for this repo
How the Course Is Organized
- Each lecture focuses on one question
- The course includes 6 projects
- Every project requires the agent to do real work
- Every project compares weak vs. strong harness results
- What matters is the measured difference, not how many docs were written
Skills
This repository also includes reusable AI agent skills that you can install directly into your IDE or agent workspace.
- harness-creator: A skill that helps you scaffold a production-grade harness for your own project in minutes.
Other Courses
Our team has also created other courses! Check them out:
Hands-on Modern RL: An open-source, hands-on curriculum bridging the gap from basic RL concepts to LLM alignment, RLVR, and advanced Agentic systems.
Star History
Acknowledgments
This course was inspired by and draws ideas from learn-claude-code — a progressive guide to building an agent from scratch, from a single loop to isolated autonomous execution.