@jdnichollsc: How we Claude Code by @trq212 Workshops: https://github.com/anthropics/cwc-workshops… Happy Claude Coding! <3 #AI #Clau…


Summary

Anthropic released workshop materials for "Code with Claude" sessions covering model selection, multi-agent systems, AI-assisted product workflows, and eval-driven agent development.


Cached at: 05/08/26, 11:31 AM

How we Claude Code by @trq212

Workshops: https://t.co/xdXdJbApIn

Happy Claude Coding! <3 #AI #Claude #ClaudeCode #Workshop #SanFrancisco https://t.co/TGLHFjmk8D


anthropics/cwc-workshops

Source: https://github.com/anthropics/cwc-workshops

cwc-workshops

Materials from Anthropic-run Code with Claude workshops. Not maintained and not accepting contributions.

Workshops

  • rightmodel/Picking the Right Model: use a Claude Code SKILL to audit an LLM eval suite and sweep it across models and inference parameters (extended thinking, effort) to find the best quality-per-dollar and quality-per-second configuration.
  • agent-decomposition/Compose Multi-Agent Systems with Skills and MCP: decompose a 400-line-prompt inventory agent into skills + code execution + callable_agents on Claude Managed Agents, with evals to verify each step.
  • how-we-claude-code/How We Claude Code: a three-phase walkthrough of an AI-assisted product workflow — interview to spec, four divergent design explorations as static HTML, and a Vite + React app whose components emit a machine-readable DOM contract so an agent (or CI) can verify them at runtime.
  • ship-your-first-managed-agent/Ship Your First Managed Agent: a Streamlit incident dashboard with an offline SRE Agent chat panel. You bring it online by implementing seven small functions in agent.py, each a single Claude Managed Agents API call — until it can grep a 70k-line log in its sandbox, call your local tools, and name the bad commit.
  • agent-battle/Agent Battle: a 45-minute competition to configure a Claude Managed Agent — system prompt, skills, MCP servers, model — that drives a local game bot over MCP. Most diamonds wins, fewest tokens breaks ties; a fast --eval decision-probe loop lets you test config changes in ~30s before committing to a 5-minute run.
  • agents-that-remember/Agents That Remember: start with a Managed Agent that’s visibly amnesiac across sessions, then layer in memory primitives one at a time — a memory store for cross-session persistence, then the Dreaming Service to consolidate past transcripts — going “goldfish to colleague” in 45 minutes.
  • eval-driven-agent-development/Eval-Driven Agent Development: iterate a PPTX-generating Managed Agent through six variants (naive → visual → typography → palette → density → QA-loop), scoring each against a 10-task suite with a two-layer grader (programmatic .pptx XML metrics + LLM-as-judge on rendered slides) so every prompt change is measured, not vibed.
  • production-ready-agent/Deal Desk: a chat-first UI over a multi-agent M&A research team on Claude Managed Agents — a coordinator delegates to four parallel research sub-agents, reads prior-deal lessons from a memory store, reaches Linear via MCP, and emits a graded investment thesis while the UI streams every event and gated tool call.
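The first workshop's idea of sweeping an eval suite across models and inference parameters to rank configurations by quality-per-dollar can be sketched in a few lines. This is a minimal illustration, not the workshop's actual harness: the model names, prices, and eval scores below are made up, and `run_eval` is a stub where real code would call the model API against the suite.

```python
# Hypothetical sweep of an eval suite across model configs, ranked by
# quality-per-dollar. All names, prices, and scores are illustrative.
from dataclasses import dataclass

@dataclass(frozen=True)
class Config:
    model: str
    thinking: bool          # extended thinking on/off
    cost_per_task: float    # dollars per eval task, made up

def run_eval(cfg: Config) -> float:
    """Stub: fraction of eval tasks passed. Real code runs the suite."""
    scores = {
        ("haiku-x", False): 0.62,
        ("haiku-x", True): 0.71,
        ("sonnet-x", False): 0.80,
        ("sonnet-x", True): 0.88,
    }
    return scores[(cfg.model, cfg.thinking)]

configs = [
    Config("haiku-x", False, 0.002),
    Config("haiku-x", True, 0.004),
    Config("sonnet-x", False, 0.010),
    Config("sonnet-x", True, 0.018),
]

# Rank by eval score divided by cost per task (quality-per-dollar);
# the same loop could rank by score per second for latency instead.
ranked = sorted(configs, key=lambda c: run_eval(c) / c.cost_per_task,
                reverse=True)
for cfg in ranked:
    print(f"{cfg.model:9s} thinking={cfg.thinking!s:5s} "
          f"score={run_eval(cfg):.2f} "
          f"score/$={run_eval(cfg) / cfg.cost_per_task:.0f}")
```

With these invented numbers the cheapest model without thinking wins on quality-per-dollar even though the larger model scores higher in absolute terms, which is exactly the kind of trade-off the sweep is meant to surface.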
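The "machine-readable DOM contract" in the how-we-claude-code workshop is a pattern where components tag their rendered output so an agent or CI job can verify them without screenshots. A minimal sketch of the verifying side, assuming components declare themselves via a `data-contract` attribute (an invented attribute name, not the workshop's actual schema):

```python
# Hypothetical DOM-contract verifier: collect data-contract attributes
# from rendered HTML and report required contracts that are missing.
from html.parser import HTMLParser

class ContractCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.contracts: set[str] = set()

    def handle_starttag(self, tag, attrs):
        # attrs is a list of (name, value) pairs for this element
        for name, value in attrs:
            if name == "data-contract" and value:
                self.contracts.add(value)

def verify(html: str, required: set[str]) -> set[str]:
    """Return the required contracts absent from the rendered HTML."""
    collector = ContractCollector()
    collector.feed(html)
    return required - collector.contracts

rendered = ('<main><nav data-contract="nav.v1"></nav>'
            '<div data-contract="cart.v2"></div></main>')
missing = verify(rendered, {"nav.v1", "cart.v2", "checkout.v1"})
print(sorted(missing))  # the checkout component never rendered
```

The point of the pattern is that the contract lives in the DOM itself, so the same check works whether a human, an agent, or a CI job is doing the verifying.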
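The two-layer grader in the eval-driven workshop combines cheap deterministic metrics with an LLM-as-judge verdict. A minimal sketch of the combining logic, with both layers stubbed: the programmatic check here is an invented stand-in (slide titles under a length limit, standing in for the real .pptx XML metrics), and `judge_score` returns a placeholder where real code would render slides and query a model.

```python
# Hypothetical two-layer grader: weighted blend of a programmatic
# metric and a (stubbed) LLM-judge score, thresholded to pass/fail.

def programmatic_score(slide_titles: list[str]) -> float:
    """Layer 1: deterministic check, here 'every title fits on a line'."""
    ok = sum(1 for t in slide_titles if t and len(t) <= 60)
    return ok / len(slide_titles)

def judge_score(slide_titles: list[str]) -> float:
    """Layer 2 stub: an LLM judge would rate rendered slides 0..1."""
    return 0.9  # placeholder verdict

def grade(slide_titles: list[str], weight: float = 0.5,
          threshold: float = 0.8) -> tuple[float, bool]:
    """Blend both layers and threshold the result into pass/fail."""
    score = (weight * programmatic_score(slide_titles)
             + (1 - weight) * judge_score(slide_titles))
    return score, score >= threshold

# One oversized title drags the programmatic layer down enough to fail,
# even though the judge layer alone would have passed.
score, passed = grade(["Q3 Revenue", "Roadmap", "x" * 80])
print(f"{score:.2f} {'PASS' if passed else 'FAIL'}")
```

Because the programmatic layer is deterministic, it anchors the grade: a prompt change that regresses a measurable property fails regardless of how generous the judge is, which is what makes each of the six variants comparable.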

License

Apache License 2.0. See LICENSE.

Similar Articles

Dive into Claude Code: The Design Space of Today's and Future AI Agent Systems

Hugging Face Daily Papers

This paper analyzes Claude Code's architecture as an agentic coding tool, identifying five human values and thirteen design principles that inform its implementation, including safety systems, context management, and extensibility mechanisms. The study compares Claude Code with OpenClaw to demonstrate how different deployment contexts lead to different architectural solutions for common AI agent design challenges.

Claude Code: Best practices for agentic coding

Anthropic Engineering

This article outlines best practices for using Claude Code, an agentic coding environment by Anthropic. It emphasizes managing context windows, providing verification criteria for code, and separating exploration from execution to improve performance.

Live blog: Code w/ Claude 2026

Simon Willison's Blog

Live blog of Anthropic's Code w/ Claude 2026 event. Updates include multi-agent orchestration for Claude Managed Agents, increased rate limits, and a partnership with SpaceX's Colossus data center. No new model announced.