I built a semantic mistake memory layer for agents and put it on PyPI

Reddit r/AI_Agents Tools

Summary

DriftGuard is a PyPI package that adds a semantic memory layer for AI agents, allowing them to remember past mistakes and avoid repeating them by comparing proposed actions against a graph of past failures.

So I kept running into the same problem with every agent pipeline I built. The agent would make a mistake, you'd give it feedback, it would fix it, and then three runs later it'd make the exact same mistake again. No memory of what went wrong. Every run starts completely fresh.

I built DriftGuard to fix that. The idea is simple: it sits between intent and execution. Before your agent takes a step, DriftGuard reviews the proposed action against a semantic graph of past failures. If it finds something similar to a past mistake, it surfaces a warning before the action runs. After execution, you record the outcome and the graph grows smarter.

So if your agent once ran a destructive DB migration without a backup and you recorded that, the next time it proposes something semantically similar, it gets flagged before it runs. Not by exact string match. By meaning.

A few things I wanted to get right:

- Guard policies are configurable per step. warn just surfaces the warning and lets the agent decide. block raises an exception and hard-stops. acknowledge requires explicit confirmation. record_only skips the review and just stores memory. You pick based on how much risk you're willing to take on each action.
- The memory graph merges paraphrased variants automatically. If the agent phrases the same mistake five different ways, they collapse into one node. It doesn't keep growing forever; stale, weak memories get pruned on a schedule.
- It runs as a standalone MCP server or drops directly into LangGraph as a review node. I tried to make it fit wherever your pipeline already lives.

pip install driftguard-ai

Still early, but it's in a usable state. Would love feedback, especially from anyone building agents that run autonomously for long periods. That's the use case I built it for.
Original Article

Similar Articles

rohitg00/agentmemory

GitHub Trending (daily)

agentmemory is an open-source persistent memory layer for AI coding agents (Claude Code, Cursor, Gemini CLI, Codex CLI, etc.) that uses knowledge graphs, confidence scoring, and hybrid search to give agents long-term memory across sessions via MCP, hooks, or REST API. Built on the iii engine, it requires no external databases and exposes 51 MCP tools.

Zep: A Temporal Knowledge Graph Architecture for Agent Memory

Papers with Code Trending

This paper introduces Zep, a temporal knowledge graph architecture for agent memory that outperforms MemGPT in benchmarks like DMR and LongMemEval. It highlights Zep's ability to handle dynamic knowledge integration and temporal reasoning for enterprise use cases.