sota

Tag

Cards List
#sota

@ash_csx: We’re dropping two open source SLMs this week. 1. One of them matches SOTA accuracy at up to 93x smaller. 2. The other …

X AI KOLs Following · 2d ago

Two new open-source small language models are being released: one matches state-of-the-art accuracy while being up to 93x smaller, and the other reportedly outperforms a recent OpenAI model. The first model drops tomorrow.

#sota

@oliviscusAI: Someone open-sourced a memory layer that beats every RAG system on the planet. It's called Memvid. +35% SOTA on LoCoMo.…

X AI KOLs Timeline · 3d ago

A new open-source memory layer called Memvid claims to outperform all existing RAG systems, reporting +35% over SOTA on LoCoMo and +76% on multi-hop reasoning, packaged as a single .mv2 file.

#sota

Xiaomi released their SOTA model, MiMo-V2.5-Pro.

Reddit r/singularity · 2026-04-22

Xiaomi has launched MiMo-V2.5-Pro, claiming state-of-the-art performance.

#sota

@KKaWSB: Moonshot just open-sourced Kimi K2.6—4,000 tool calls in one 12-hour session, 300 sub-agents in parallel building a full codebase. SOTA on SWE-Bench Pro, BrowseComp, HLE and more, ties Claude Opus 4.6 and G…

X AI KOLs Timeline · 2026-04-20

Moonshot has open-sourced the Kimi K2.6 model, which supports 4,000 tool calls in a single 12-hour session and 300 parallel sub-agents, achieving SOTA on benchmarks such as SWE-Bench Pro, BrowseComp, and HLE, and claiming performance on par with Claude Opus 4.6 and GPT-5.4.

#sota

Kimi K2.6

Product Hunt · 2026-04-20

Kimi K2.6 has been released as an open-source model, claiming state-of-the-art performance on long-horizon coding and agent-swarm benchmarks.

#sota

@sumeetrm: LongCoT is adding two new leaderboards! Due to the interest in agents (particularly RLMs), we’re adding a “Restricted H…

X AI KOLs Following · 2026-04-19

LongCoT is introducing two new agent leaderboards (Restricted Harness and Open Harness), with GPT 5.2 RLM topping the Open Harness at 25.12%.
