Tag
This paper shows that continuously consolidating past experiences into textual memory using LLMs degrades memory utility over time, and that preserving raw episodic trajectories outperforms forced consolidation. Experiments on ARC-AGI show that even GPT-5.4 fails more often after consolidation, with implications for building robust agentic memory systems.