Isaac Flath predicts RLM will revive notebooks by enabling agents to drive REPLs with interleaved prose.
A social media post highlights a writeup on applying RLM and DSPy to multi-modal data.
LongCoT introduces two new agent leaderboards (Restricted & Open Harness), with GPT 5.2 RLM topping the Open Harness at 25.12%.
A researcher praises the simplicity and elegance of the RLM paper, comparing it to the influential ReAct paper and appreciating its straightforward approach to solving general problems.
A developer shares their experience with Recursive Language Models (RLMs), reporting that they effectively handle extremely long contexts of tens of millions of tokens, a significant advance in context-handling capability.