Models

@tom_doerr: Fully open sources training data for 30B scale search agents https://github.com/PolarSeeker/OpenSeeker…

X AI KOLs Timeline · 5h ago

OpenSeeker fully open-sources training data and models for 30B-scale ReAct-based search agents, achieving state-of-the-art performance on multiple benchmarks including BrowseComp and Humanity's Last Exam. It is the first purely academic project to reach frontier search benchmark performance while releasing complete training data.

@garrytan: Downloading now... 1M token context window with supposedly usable coding agent capability all on a 128GB Macbook Pro is

X AI KOLs Following · 5h ago

Garry Tan highlights a model with a 1M token context window and coding agent capabilities running locally on a 128GB MacBook Pro, expressing excitement about the milestone.

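Whether a 1M-token window fits on a 128 GB machine alongside the weights is largely a KV-cache budgeting question. A back-of-the-envelope sketch, with the caveat that the layer count, KV-head count, head dimension, and cache precision below are illustrative assumptions, not specs of the model in the tweet:

```python
# Rough KV-cache size estimate for a long-context model.
# All model dimensions here are assumed for illustration.

def kv_cache_bytes(tokens, layers, kv_heads, head_dim, bytes_per_elem):
    # Two tensors (K and V) per layer, each of shape
    # [tokens, kv_heads, head_dim].
    return 2 * tokens * layers * kv_heads * head_dim * bytes_per_elem

gb = kv_cache_bytes(
    tokens=1_000_000,
    layers=48,
    kv_heads=8,        # grouped-query attention keeps this small
    head_dim=128,
    bytes_per_elem=1,  # 8-bit quantized cache
) / 1e9

print(f"{gb:.0f} GB")  # -> 98 GB under these assumed dimensions
```

The arithmetic shows why low KV-head counts and cache quantization matter so much for fitting long contexts into unified memory.
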
@davis7: @0xSero helped me setup local models properly and I uh, had no idea these things had gotten this good Are they frontier…

X AI KOLs Following · 9h ago

The author highlights the impressive capabilities of the open-source Qwen 3.6-27B model running locally on an RTX 5090, noting its strong performance on programming tasks and comparing it favorably to commercial models, despite the complexity of local deployment.

@cyrilXBT: CHINA JUST BUILT AN AI MODEL THAT IS COMPETING WITH OPENAI AND ANTHROPIC AT A FRACTION OF THE COST. And someone just dr…

X AI KOLs Timeline · 9h ago

DeepSeek, a Chinese AI lab spun out of a quant hedge fund, is reportedly delivering GPT-4-level performance at roughly 5% of the training cost, a claim that triggered significant market disruption, including a roughly $600B drop in NVIDIA's market cap. The thread also promotes a free 1-hour-50-minute course on running DeepSeek V4 locally and via the API.

Has anyone messed around with song generation using Google's Lyria 3 Pro? This was 8 cents in API credits, and the first thing I ever generated...

Reddit r/singularity · 10h ago

A community member shares their hands-on experience generating a track using Google's Lyria 3 Pro via its API, noting the minimal cost and initial quality of the output.

Those of you who like Gemma4 models - how are you guys using them?

Reddit r/LocalLLaMA · 10h ago

A developer shares their mixed experience running Gemma4 and Qwen locally for coding tasks, noting issues with tool integration, loop handling, and task completion while asking the community for better usage strategies.

Qwen3.6 35B A3B Uncensored Heretic, Native MTP Preserved, is Out Now With KLD 0.0015, 10/100 Refusals and the Full 19 MTPs Preserved and Retained, Available in Safetensors, GGUF, NVFP4, NVFP4 GGUF and GPTQ-Int4 Formats

Reddit r/LocalLLaMA · 11h ago

Community release of an uncensored Qwen3.6 35B A3B variant with all 19 MTP (multi-token prediction) tensors preserved, available in multiple formats including Safetensors, GGUF, NVFP4 and GPTQ-Int4.

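The headline's KLD figure measures how closely the modified model's next-token distribution tracks the original's; smaller means the edit changed behavior less. A minimal sketch of the metric itself, with made-up toy distributions (the function name is mine, not from the release):

```python
import math

def kl_divergence(p, q):
    """KL(P || Q) in nats for two discrete distributions."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

# Toy next-token distributions: original model vs. modified model.
original = [0.70, 0.20, 0.08, 0.02]
modified = [0.69, 0.21, 0.08, 0.02]

print(kl_divergence(original, modified))  # a small value, on the order of 1e-4
```

In practice the reported number would be averaged over many token positions on a validation set, but the per-position quantity is exactly this.
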
@libapi_: Today, Hermes Agent secured the number one spot globally. This isn't just a ranking—it reflects the combined push from the open-source community, developers, contributors, and every real user. I'm also thrilled to see more AI Agent projects on @OpenRouter gaining visibility. CLI, Personal Agents, automated workflows, …

X AI KOLs Timeline · 11h ago

Hermes Agent tops the global rankings, highlighting the collaborative drive of the open-source community and developers, while signaling that the AI Agent ecosystem is rapidly scaling across platforms like OpenRouter.

@Teknium: We just hit number one globally across all AI apps on OpenRouter. Super grateful to the nearly 1000 contributors who've…

X AI KOLs Following · 11h ago

Hermes Agent has reached the top global ranking across all AI applications on OpenRouter, powered by contributions from nearly 1,000 developers. The creator thanks the community and invites suggestions for future improvements.

@NousResearch: Hermes Agent is now #1 on the Global @OpenRouter token rankings. While our journey together has just begun, we'd like t…

X AI KOLs Following · 12h ago

Hermes Agent from NousResearch has reached #1 position on OpenRouter's global token rankings, marking a significant achievement for the AI agent.

@reach_vb: in the last ~15 days we shipped: - gpt image 2 - privacy filter - gpt 5.5 - gpt 5.5 pro - gpt 5.5 instant - gpt realtim…

X AI KOLs Following · 13h ago

OpenAI shipped multiple GPT models and features in approximately 15 days, including GPT Image 2, various GPT 5.5 variants (pro, instant, cyber), GPT Realtime 2, and related tools.

new MoE from ai2, EMO

Reddit r/LocalLLaMA · 15h ago

AI2 released EMO, a Mixture of Experts language model with 1B active parameters out of 14B total, trained on 1 trillion tokens and featuring document-level routing where experts cluster around domains.

@no_stp_on_snek: mrcr v2 8-needle at 1m, open weights stack, single rented mi300x. longctx directional 0.688 (n=30, mass-val rerun pendi…

X AI KOLs Following · 16h ago

The author shares early benchmark scores and evaluation metrics for an open-weights model stack run on a single AMD MI300X, noting competitive performance against closed-source alternatives.

CyberSecQwen-4B: Why Defensive Cyber Needs Small, Specialized, Locally-Runnable Models

Hugging Face Blog · 18h ago

CyberSecQwen-4B is a small, specialized 4B parameter model fine-tuned for defensive cybersecurity tasks, designed to run locally on a single GPU, addressing privacy, cost, and air-gapped deployment needs.

EMO: Pretraining mixture of experts for emergent modularity

Hugging Face Blog · 20h ago

Allen AI releases EMO, a mixture-of-experts model where modular structure emerges naturally from data, enabling use of just 12.5% of experts for a task while maintaining near full-model performance.

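Both EMO posts describe the same mechanism: a router activates only a small fraction of experts per input (12.5% would be, say, 2 of 16), so most parameters sit idle on any given token. A toy top-k router to make that concrete; the expert count and logits are illustrative, not EMO's:

```python
import math

# Toy top-k MoE routing: keep the k highest-scoring experts for a token
# and softmax-renormalize their gate weights over just that subset.

def route(scores, k):
    """scores: one router logit per expert -> [(expert_index, weight)]."""
    top = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
    exps = {i: math.exp(scores[i]) for i in top}
    z = sum(exps.values())
    return [(i, exps[i] / z) for i in top]

scores = [0.1, 2.0, -1.0, 1.5, 0.0, 0.3, -0.5, 1.0]  # 8 experts
chosen = route(scores, k=2)  # 2 of 8 experts active for this token
print(chosen)  # experts 1 and 3, with weights summing to 1
```

EMO's twist, per the summaries, is that this routing is done at the document level and the experts end up clustering around domains, rather than being an unstructured per-token mixture.
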
Ring 2.6 1T

Reddit r/LocalLLaMA · 20h ago

Ring 2.6 1T, a 1-trillion-parameter model with open weights, has been listed on OpenRouter for free use, with a full public release expected.

@heyrobinai: THE ENTIRE AI INDUSTRY JUST GOT HUMILIATED a tiny model trained in just a few hours on a single graphics card is planni…

X AI KOLs Timeline · 23h ago

Yann LeCun's team releases LeWorldModel, a tiny 15M-parameter physics model trained on a single GPU in hours that outperforms billion-dollar foundation models in planning speed and physical plausibility, challenging the dominant scaling paradigm.

OpenAI's New Voice Models Want to Do More Than Talk Back

Reddit r/ArtificialInteligence · 23h ago

OpenAI has launched three new real-time audio models to enable continuous, multitasking voice interactions that prioritize long-context reasoning, live translation, and seamless tool use.

@paulabartabajo_: Advice for AI engineers If you're building voice agents, stop wiring up 3 separate models, for audio-to-text, text-to-a…

X AI KOLs Timeline · yesterday

The post announces liquid-audio, an open-source repository for Liquid AI's end-to-end speech-to-speech LFM models (LFM2-Audio-1.5B and LFM2.5-Audio-1.5B), with interleaved and sequential generation modes and fine-tuning support.

MemReranker: Reasoning-Aware Reranking for Agent Memory Retrieval

arXiv cs.CL · yesterday

MemReranker is a family of reasoning-aware reranking models (0.6B/4B) for agent memory retrieval; it addresses the limits of plain semantic-similarity ranking by distilling LLM knowledge to better capture temporal and causal relationships.

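The retrieve-then-rerank pipeline MemReranker slots into is simple in outline: a cheap retriever proposes candidate memories, then a heavier model rescores and reorders them. A stand-in sketch using keyword overlap as the scorer; MemReranker itself uses a learned model, and everything here (names, memories) is illustrative:

```python
# Two-stage retrieval sketch: retrieve broadly, then rerank candidates
# with a relevance scorer. A real reranker like MemReranker would
# replace `score` with a learned model's relevance judgment.

def score(query, memory):
    q, m = set(query.lower().split()), set(memory.lower().split())
    return len(q & m) / len(q)

def rerank(query, candidates, top_n=2):
    return sorted(candidates, key=lambda c: score(query, c), reverse=True)[:top_n]

memories = [
    "user asked about flight times yesterday",
    "user prefers an aisle seat on every flight",
    "meeting notes from the budget review",
]
print(rerank("flight seat preference", memories, top_n=2))
```

The paper's point, as summarized, is that this scoring step needs reasoning over time and causality, which plain embedding similarity (and certainly word overlap) cannot provide.
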