rlm · Tag · Cards List

#rlm

@isaac_flath: RLM means notebooks are gonna be back (I hope). Agent driving a REPL with interleaved prose. The exact backend the nb i…

X AI KOLs Following · 2026-04-21

Isaac Flath predicts RLM will revive notebooks by enabling agents to drive REPLs with interleaved prose.

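The "agent driving a REPL with interleaved prose" idea from the card above can be sketched as a loop that executes model-emitted code cells in a persistent namespace and records their output alongside prose cells. This is a minimal illustration only: the scripted `cells` list is a hypothetical stand-in for real model turns, not any particular RLM implementation.

```python
# Minimal sketch of an agent-driven REPL with interleaved prose.
# The scripted `cells` list stands in for model turns; a real system
# would call an LLM API to produce each cell.
import io
from contextlib import redirect_stdout

def run_cell(code: str, namespace: dict) -> str:
    """Execute one code cell in a persistent namespace, capturing stdout."""
    buf = io.StringIO()
    with redirect_stdout(buf):
        exec(code, namespace)
    return buf.getvalue()

def rlm_session(cells):
    """Interleave prose and code cells; feed REPL output into the transcript."""
    namespace: dict = {}
    transcript = []
    for kind, content in cells:
        if kind == "prose":
            transcript.append(content)
        else:  # code cell: execute and record its output
            out = run_cell(content, namespace)
            transcript.append(f">>> {content}\n{out}")
    return transcript

# Hypothetical model turns: prose, then code that builds on earlier state.
cells = [
    ("prose", "Load the data and count the rows."),
    ("code", "rows = list(range(10))\nprint(len(rows))"),
    ("prose", "Now sum them."),
    ("code", "print(sum(rows))"),
]
transcript = rlm_session(cells)
```

The persistent `namespace` dict is what makes this notebook-like: later code cells see the variables earlier cells defined.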
#rlm

@dosco: very cool writeup on applying RLM and DSPy to multi-modal data. this bit really got me thinking...

X AI KOLs Following · 2026-04-20

A social media post highlighting a writeup on applying RLM and DSPy to multi-modal data.

#rlm

@sumeetrm: LongCoT is adding two new leaderboards! Due to the interest in agents (particularly RLMs), we’re adding a “Restricted H…

X AI KOLs Following · 2026-04-19

LongCoT introduces two new agent leaderboards (Restricted & Open Harness), with GPT 5.2 RLM topping the Open Harness at 25.12%.

#rlm

@ekzhu: I read the RLM paper and it’s like, this is the simplest way to solve a general problem, seriously it’s just this simple.

X AI KOLs Timeline · 2026-04-19

A researcher praises the simplicity of the RLM paper, comparing it to the influential ReAct paper and appreciating its straightforward approach to solving a general problem.

#rlm

@samhogan: RLMs pretty much solved context btw You can shove tens of millions of tokens into a good RLM harness and it just works.…

X AI KOLs Following · 2026-04-18

A developer reports that Recursive Language Models (RLMs) effectively handle extremely long contexts, claiming that tens of millions of tokens can be fed into a good RLM harness and processed reliably.

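The claim that a harness can absorb tens of millions of tokens can be illustrated with a recursive divide-and-summarize loop that keeps every model call under a fixed budget. This is a hedged sketch under stated assumptions: `summarize` is a hypothetical stand-in for an actual model call, the word-count budget stands in for a token budget, and nothing here is the RLM paper's actual implementation.

```python
# Sketch of a fixed-budget recursive harness: split an oversized input,
# recurse on each half, and merge the partial summaries. `summarize`
# is a hypothetical stand-in for an LLM call.
def summarize(text: str) -> str:
    # Stand-in for a model call: keep the first and last few words.
    words = text.split()
    if len(words) <= 8:
        return text
    return " ".join(words[:3] + ["..."] + words[-3:])

def recursive_answer(context: str, budget: int = 64) -> str:
    """Recurse until each piece fits the budget, then merge summaries."""
    words = context.split()
    if len(words) <= budget:
        return summarize(context)
    mid = len(words) // 2
    left = recursive_answer(" ".join(words[:mid]), budget)
    right = recursive_answer(" ".join(words[mid:]), budget)
    return summarize(left + " " + right)

huge = " ".join(f"tok{i}" for i in range(1000))  # stand-in for a huge context
result = recursive_answer(huge)
```

Because no single call ever sees more than `budget` words, the total input size is bounded only by recursion depth, which is the rough intuition behind "shove tens of millions of tokens in and it just works."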