The author reflects on experimenting with custom AI agents, noting that long-term memory and continuity transform them from simple task runners into persistent collaborators with 'stable dispositions'. This raises questions about the value of agent 'personality' versus the need for control, reliability, and auditability in workflows.
The author releases IaI-mcp, an open-source local daemon that provides persistent, tiered memory for Claude sessions by storing verbatim conversations and managing context retrieval without requiring manual prompts.
The article provides a comprehensive technical overview of how AI agent memory works, distinguishing between working and long-term memory mechanisms, and discussing strategies for context management, embedding-based retrieval, and data lifecycle governance.
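The embedding-based retrieval the article describes can be sketched in a few lines: each memory is embedded once at write time, and at read time the query is embedded and the top-k most similar memories are pulled into context. This is a minimal illustration with a toy bag-of-words "embedding" standing in for a learned embedding model; the class and function names are hypothetical, not from the article.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words vector; real systems use a learned embedding
    # model (this stand-in is an assumption for illustration).
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class MemoryStore:
    """Long-term store: embed each memory once, retrieve top-k by similarity."""
    def __init__(self):
        self.memories = []  # list of (text, vector) pairs

    def add(self, text: str) -> None:
        self.memories.append((text, embed(text)))

    def retrieve(self, query: str, k: int = 2) -> list:
        qv = embed(query)
        ranked = sorted(self.memories, key=lambda m: cosine(qv, m[1]), reverse=True)
        return [text for text, _ in ranked[:k]]

store = MemoryStore()
store.add("user prefers dark mode in the editor")
store.add("user's deploy target is AWS us-east-1")
store.add("user dislikes verbose commit messages")
print(store.retrieve("which editor theme does the user like?", k=1))
```

The lifecycle-governance point from the article maps onto this structure too: deletion or expiry policies operate on the same stored (text, vector) records.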
Researchers introduce Memora, a benchmark that evaluates LLMs' ability to retain, update, and forget long-term user memories across conversations spanning weeks to months, revealing that models frequently reuse obsolete memories.
HeLa-Mem is a bio-inspired memory architecture for LLM agents that models memory as a dynamic graph using Hebbian learning dynamics, featuring episodic and semantic memory stores to improve long-term coherence. Experiments on LoCoMo show superior performance across question categories while using fewer context tokens.
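The Hebbian idea ("memories retrieved together wire together") can be sketched as a co-access weight update on a memory graph. This is a hypothetical illustration of the general mechanism, not HeLa-Mem's actual update rule; the class name and parameters are invented for the sketch.

```python
class HebbianMemoryGraph:
    """Toy memory graph: edge weights strengthen when two memories are
    retrieved together and decay otherwise (hypothetical sketch, not
    HeLa-Mem's actual update rule)."""
    def __init__(self, lr: float = 0.5, decay: float = 0.9):
        self.lr = lr          # Hebbian learning rate
        self.decay = decay    # passive forgetting factor
        self.weights = {}     # (node_a, node_b) -> association strength

    def _key(self, a: str, b: str):
        return tuple(sorted((a, b)))

    def co_access(self, accessed: list) -> None:
        # Decay all associations, then strengthen every pair of
        # memories that was retrieved together in this step.
        for k in self.weights:
            self.weights[k] *= self.decay
        for i, a in enumerate(accessed):
            for b in accessed[i + 1:]:
                k = self._key(a, b)
                self.weights[k] = self.weights.get(k, 0.0) + self.lr

g = HebbianMemoryGraph()
g.co_access(["trip to Paris", "prefers window seats"])
g.co_access(["trip to Paris", "prefers window seats"])
print(g.weights[("prefers window seats", "trip to Paris")])
```

Strong edges then let a retriever follow associations from one recalled memory to its neighbors, which is how a graph store can answer with fewer context tokens than dumping raw history.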
PersonaVLM introduces a personalized multimodal LLM framework that enables long-term user adaptation through memory retention, multi-turn reasoning, and response alignment, outperforming GPT-4o by 5.2% on the new Persona-MME benchmark.