This paper shows that continuously consolidating past experiences into textual memory using LLMs degrades memory utility over time, and that preserving raw episodic trajectories outperforms forced consolidation, with implications for robust agentic memory systems.
Seed IQ achieves a perfect 14/14 score on ARC-AGI-3 games using an active-inference, physics-driven, multi-agent autonomous control engine, as shown in a behind-the-scenes video walkthrough.
A study finds that continuously updating consolidated memories in LLM-based agentic systems degrades performance, and that retaining raw episodic trajectories is more reliable. Experiments on ARC-AGI show that even GPT-5.4 fails more often after consolidation.
This research blog post demonstrates that repeatedly rewriting LLM agent experiences into textual "lessons" often degrades performance rather than improving it. The author finds that retaining episodic memory outperforms abstract consolidation across benchmarks such as ARC-AGI and ALFWorld.
The authors present TOPAS, a recursive AI architecture achieving 11.67% on ARC-AGI-2 using a single RTX 4090, aiming to demonstrate that architectural efficiency can outweigh raw compute power.