#best-practices · Cards List

@shadouyoua: Recently, the ByteDance TRAE team released the '2026 Enterprise AI Programming Practice Manual', which includes a noteworthy section: their summarized 'Top 10 Agent Skills'. This is the first AI programming skill recommendation list I have seen that has been publicly compiled by a major tech company. ...

X AI KOLs Timeline · 15h ago

The ByteDance TRAE team has released the '2026 Enterprise AI Programming Practice Manual' and published an internally compiled list of the Top 10 recommended Agent Skills. This list highlights the importance of frontend design, code review, and automated testing, showcasing best practices from a major tech player in the field of AI-assisted programming.


@zodchiii: Three Anthropic engineers just spent 16 minutes on what makes AI agents actually succeed in production. If the people w…

X AI KOLs Timeline · yesterday

Anthropic engineers share insights on making AI agents succeed in production, highlighting proven patterns from their work on Claude.


Structured Outputs are not as portable as they look

Reddit r/AI_Agents · yesterday

The author shares findings on the lack of portability for JSON Schema structured outputs across AI providers like OpenAI, Gemini, and Anthropic, highlighting inconsistencies in constraint enforcement and offering practical advice for robust integration.
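The inconsistency the post describes can be worked around client-side. Below is a minimal sketch (stdlib only; the schema fragment and field names are illustrative assumptions, not from the post) of re-validating a model's "structured" output yourself instead of trusting any one provider's constraint enforcement:

```python
import json

# Hypothetical schema fragment: required keys and expected Python types.
# Providers differ in which JSON Schema constraints they actually enforce,
# so we re-validate on the client side regardless of provider.
REQUIRED_FIELDS = {"title": str, "priority": int, "tags": list}

def validate_output(raw: str) -> dict:
    """Parse a model's 'structured' output and verify it client-side."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    errors = []
    for field, expected_type in REQUIRED_FIELDS.items():
        if field not in data:
            errors.append(f"missing field: {field}")
        elif not isinstance(data[field], expected_type):
            errors.append(f"{field}: expected {expected_type.__name__}")
    if errors:
        raise ValueError("; ".join(errors))
    return data

# A response that passes provider-side checks on one API may still violate
# the schema on another, e.g. priority returned as a string:
try:
    validate_output('{"title": "fix bug", "priority": "high", "tags": []}')
except ValueError as e:
    print(e)  # priority: expected int
```

The same check then behaves identically no matter which provider produced the output, which is the portability the schema alone does not guarantee.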


After building agent teams for a dozen clients, here's what actually made them trust the system (and stop babysitting it)

Reddit r/AI_Agents · yesterday

The author shares practical insights on building client trust in AI agent systems, emphasizing the importance of narrow scope, robust error handling, and clear communication of system status.


The biggest lie in AI agents right now is that more autonomy automatically means more value

Reddit r/AI_Agents · yesterday

The article argues that high autonomy in AI agents increases the cost of errors, advocating instead for constrained, reliable agents that prioritize safety and predictability over unrestricted capability.


@yoheinakajima: great article, mostly focused on coding agents but applies elsewhere imo. aligns w a lot of my prior thoughts: - agent…

X AI KOLs Following · 2d ago

A tweet highlighting key principles for building agent systems, emphasizing scaffolding, memory, and reusable tools, based on an article by Yohei Nakajima.


@akshay_pachaar: As an AI Engineer. Please learn: - Harness engineering, not just prompt engineering - Prompt caching vs. semantic cachi…

X AI KOLs Following · 2d ago

Akshay Pachaar outlines essential skills for AI engineers beyond prompt engineering, including caching strategies, observability, and cost attribution.


10 things I'd tell anyone starting to build AI agents in production

Reddit r/AI_Agents · 2d ago

A practitioner shares ten critical lessons for deploying AI agents in production, emphasizing code-based constraints, context management, and security over relying solely on prompts.


Some notes and lessons on Agents, RAG and memory

Reddit r/AI_Agents · 3d ago

The author shares notes and lessons learned from building AI agents at scale, focusing on RAG and memory management.


@kettanaito: More and more people are asking me about testing resources so let's put everything I've written in one post. Bookmark, …

X AI KOLs Following · 3d ago

The author consolidates a series of articles on software testing fundamentals, covering topics such as the purpose of testing, assertions, code coverage, and handling flaky tests.


I stopped trying to build one super-agent and split it into 4 narrow agents. Reliability went way up.

Reddit r/AI_Agents · 3d ago

The author describes improving AI agent reliability by replacing a single general-purpose agent with a four-agent workflow specializing in intake, research, action, and review. This shift prioritized system predictability and easier debugging over raw autonomy.


@Vtrivedy10: my fave point from here: the earlier you think about your agent as a system that can be measured & improved, the faster…

X AI KOLs Following · 4d ago

The author emphasizes the importance of treating AI agents as measurable systems early in development, using evaluations as the primary substrate for improvement and production readiness.


@Mnilax: https://x.com/Mnilax/status/2053116311132155938

X AI KOLs Timeline · 4d ago

The article details an expanded 12-rule CLAUDE.md configuration template that builds upon Andrej Karpathy's original 4 rules to further reduce AI coding errors and handle complex agent orchestration issues.


@SaitoWu: https://x.com/SaitoWu/status/2053101671035851216

X AI KOLs Timeline · 4d ago

The article summarizes a talk by Matt Pocock criticizing 'specs-to-code' approaches, arguing that solid software engineering fundamentals like TDD and modular design are more critical than ever for effectively using AI coding assistants like Claude Code.


@shao__meng: The Internal Design, Iteration, and Maintenance of Agent Skills at Perplexity. The public version of Perplexity Agents' internal standards presents a counter-intuitive core argument: writing a Skill is not about writing code, but about building context for the model. Applying the instinct of engineers writing code directly to Skills...

X AI KOLs Timeline · 4d ago

The Perplexity team has published guidelines for the design, iteration, and maintenance of Agent Skills, emphasizing that writing Skills is not traditional coding but constructing context for the model. The article proposes a counter-intuitive methodology built around evaluation-first development, progressive loading, and optimizing agent behavior by handling edge cases (gotchas).


@yaohui12138: I've finished reading it. Here are some key takeaways I've compiled for everyone: In this session, he primarily broke down a core mechanism overlooked by 90% of users: the CLAUDE.md context injection system. This system is divided into three levels: Enterprise-level: Organization-wide mandatory rules that cannot be overridden by individual settings. Project-level: Team-shared code standards and workflows. Loc...

X AI KOLs Timeline · 4d ago

The article shares key insights from a workshop by Boris on using CLAUDE.md for context injection in Claude, highlighting three usage levels, specific commands like /loop, and plan mode to improve developer workflows.


@ghumare64: https://x.com/ghumare64/status/2052825541057626258

X AI KOLs Timeline · 5d ago

An X thread arguing that production AI agents need operational scaffolding (runbooks, permissions, logs, rollback, verification) rather than just better prompts. The author draws parallels to DevOps evolution, stating that prompts provide advice while runbooks provide control, and that agent systems require platform engineering solutions for permissions, state management, verification, observability, and rollback capabilities.


Bjarne Stroustrup: How do I deal with memory leaks?

Hacker News Top · 5d ago

Bjarne Stroustrup answers common questions about memory leaks in C++, providing guidance on modern C++ memory management techniques.


How to build your first Claude agent. The part most tutorials leave out.

Reddit r/AI_Agents · 5d ago

This article explains how to build a Claude agent using Python, emphasizing the importance of handling tool failure cases effectively rather than just relying on happy-path scenarios.
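The failure-path point can be made concrete without a live API client. The sketch below (all tool names and the dispatch shape are hypothetical, not the tutorial's code, and not the real Anthropic SDK) shows the key move: a failing tool returns an error payload the model can recover from, instead of raising and killing the agent loop:

```python
# Plain tool registry standing in for an agent's tool-use step.

def get_weather(city: str) -> str:
    if not city:
        raise ValueError("city must be non-empty")
    return f"72F and sunny in {city}"

TOOLS = {"get_weather": get_weather}

def run_tool(name: str, **kwargs) -> dict:
    """Dispatch a tool call, converting every failure into data."""
    if name not in TOOLS:
        return {"is_error": True, "content": f"unknown tool: {name}"}
    try:
        return {"is_error": False, "content": TOOLS[name](**kwargs)}
    except Exception as exc:
        # Feed the failure back to the model instead of crashing the loop.
        return {"is_error": True, "content": f"{name} failed: {exc}"}

print(run_tool("get_weather", city="Austin")["content"])
print(run_tool("get_weather", city="")["is_error"])  # True
print(run_tool("no_such_tool")["is_error"])          # True
```

Happy-path demos skip the two error branches above, yet in production those branches are what let the agent retry or explain itself rather than dying on the first bad argument.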


AI agents fail in ways nobody writes about. Here's what I've actually seen.

Reddit r/artificial · 5d ago

The article highlights practical system-level failures in AI agent workflows, such as context bleed and hallucinated details, arguing that these are often infrastructure issues rather than model defects.
