Newest

All articles, most recently crawled first.

@ClementDelangue: Paper of the day! https://huggingface.co/papers/2605.13301…

X AI KOLs Following · 4h ago Cached

A paper introduces a unified recipe (SU-01) that combines reverse-perplexity curriculum, two-stage reinforcement learning, and test-time scaling to achieve gold-medal-level performance on IMO and IPhO problems using a 30B-A3B backbone.

@JiaZhihao: Introducing Motus Tracing: open-source observability for AI agents. Without traces, an agent is a black box that burns …

X AI KOLs Timeline · 5h ago Cached

Motus Tracing is a fully open-source observability layer for AI agents that captures every model call, tool call, sandbox interaction, and error, providing a unified span model for local development and cloud deployment with zero setup cost.
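The "unified span model" idea can be illustrated with a minimal sketch. All names below are hypothetical, not Motus Tracing's actual API: each model call, tool call, or sandbox step is recorded as a span with a kind, timing, and a parent, so a run can be reconstructed as a tree afterwards instead of staying a black box.

```python
# Illustrative sketch of a unified span model for agent observability.
# Names are hypothetical, not Motus Tracing's actual API.

import time
import uuid
from contextlib import contextmanager
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Span:
    kind: str                   # e.g. "model_call", "tool_call", "sandbox", "error"
    name: str
    span_id: str
    parent_id: Optional[str]
    started: float
    ended: Optional[float] = None
    attrs: dict = field(default_factory=dict)

class Tracer:
    def __init__(self):
        self.spans = []         # flat log; parent_id links rebuild the tree
        self._stack = []        # currently open spans

    @contextmanager
    def span(self, kind, name, **attrs):
        s = Span(kind, name, uuid.uuid4().hex,
                 self._stack[-1].span_id if self._stack else None,
                 time.monotonic(), attrs=attrs)
        self.spans.append(s)
        self._stack.append(s)
        try:
            yield s
        finally:
            s.ended = time.monotonic()
            self._stack.pop()

tracer = Tracer()
with tracer.span("tool_call", "run_agent"):
    with tracer.span("model_call", "llm.generate", tokens=128):
        pass

for s in tracer.spans:
    print(s.kind, s.name, "root" if s.parent_id is None else "child")
```

The same flat span log serves both local debugging (print the tree) and cloud export (ship the records), which is the usual argument for a single span model across environments.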

@steeve: Initial @Zai_org's DFlash implementation in @zml_ai (and soon in zml/llmd)

X AI KOLs Following · 8h ago Cached

Initial DFlash implementation by Zai_org is integrated into ZML AI, with plans to include it in zml/llmd.

@svpino: For the first time, I feel open-weight models are impossible to ignore. We are at a point where these models are compet…

X AI KOLs Following · 3h ago

Santiago (@svpino) highlights MiniMax-M2.7, a 230B open-weight model that rivals top proprietary models like Opus 4.6 and GPT-5.4, achieving 440+ tokens/s inference on SambaNova at low cost.

@gitlawb: Openclaude v0.11.0 just shipped with a FREE frontier-grade LLM out of the box via OpenGateway No API key. No signup. No…

X AI KOLs Timeline · yesterday Cached

Openclaude v0.11.0 has been released, featuring a free frontier-grade LLM accessible via OpenGateway without requiring an API key or signup.

@dabit3: Agent hooks extend frameworks and CLIs with custom controls, turning repeatable rules into deterministic behavior inste…

X AI KOLs Following · 5h ago

A tutorial on agent hooks that extend frameworks and CLIs with custom controls for deterministic behavior instead of relying on prompt instructions.
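The core idea can be shown with a minimal sketch (hypothetical names, not any specific framework's hook API): a pre-tool-call hook enforces a rule deterministically in code, instead of asking the model to follow it in a prompt.

```python
# Minimal sketch of an agent "hook": a repeatable rule enforced in code
# rather than requested in a prompt. All names are illustrative, not any
# specific framework's API.

from pathlib import Path

class HookedAgent:
    def __init__(self):
        self.pre_tool_hooks = []

    def add_pre_tool_hook(self, hook):
        self.pre_tool_hooks.append(hook)

    def call_tool(self, name, args):
        for hook in self.pre_tool_hooks:
            verdict = hook(name, args)
            if verdict is not None:
                return verdict          # hook short-circuits the tool call
        return f"ran {name}"            # placeholder for real tool dispatch

def block_writes_outside_workspace(name, args):
    """Deterministically reject file writes outside ./workspace."""
    if name == "write_file":
        target = Path(args["path"]).resolve()
        if Path("workspace").resolve() not in target.parents:
            return f"blocked: {args['path']} is outside the workspace"
    return None                         # allow the call to proceed

agent = HookedAgent()
agent.add_pre_tool_hook(block_writes_outside_workspace)
print(agent.call_tool("write_file", {"path": "/etc/passwd"}))
```

Unlike a prompt instruction, the hook fires on every call and cannot be argued out of by the model, which is the "deterministic behavior" the thread is about.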

@ericjang11: For the last few months I've been working on a from-scratch implementation of AlphaGo, a 2016 AI breakthrough that insp…

X AI KOLs Following · 3h ago Cached

Eric Jang releases AutoGo, a from-scratch tutorial for implementing AlphaGo, including code and a playable bot, demonstrating that frontier capabilities can now be replicated affordably.

@eigensteve: I Wrote a New Book!!! Optimization: A Bootcamp for Machine Learning, Inverse Problems, and Control Pre-Order Now (July …

X AI KOLs Timeline · 4h ago Cached

Steven Brunton announces his new book 'Optimization: A Bootcamp for Machine Learning, Inverse Problems, and Control', with pre-order available and accompanying free PDF, YouTube videos, and Python code.

@tom_doerr: AI agents for data analysis, plugins, and web browsing https://github.com/xlang-ai/OpenAgents…

X AI KOLs Timeline · 8h ago Cached

OpenAgents is an open platform for using and hosting language agents in everyday life, featuring agents for data analysis, plugins, and web browsing, with open code and a demo.

@HowToAI_: Someone built a tool that lets Claude Code autonomously test your entire iOS app. It navigates your entire app, opens eve…

X AI KOLs Timeline · 3h ago Cached

A new tool built on Claude Code enables autonomous testing of iOS apps by navigating every screen, testing flows, reading debug logs, and producing structured bug reports from a single prompt.

@nicekate8888: For the past twenty days, I've been obsessing over one thing — how to make Qwen3.6-27B run fast and well on my Mac. I started with Unsloth Q5, got 18 tok/s, and the fan was roaring. Then I switched to MLX 6bit + DFlash, hitting 22 tok/s, still not fast enough. Eventually I found MTPLX 4bit: 43 tok/s with good quality.

X AI KOLs Timeline · 6h ago

The user shares their experience optimizing Qwen3.6-27B inference speed on a Mac using different quantization methods (Unsloth Q5, MLX 6bit + DFlash, MTPLX 4bit), ultimately reaching 43 tok/s.
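Taking the reported throughput figures at face value, the relative speedups work out as follows (values as quoted in the post):

```python
# Relative speedups implied by the tok/s figures reported in the post.
runs = {
    "Unsloth Q5": 18.0,
    "MLX 6bit + DFlash": 22.0,
    "MTPLX 4bit": 43.0,
}
baseline = runs["Unsloth Q5"]
for name, toks in runs.items():
    print(f"{name}: {toks:.0f} tok/s ({toks / baseline:.2f}x vs Q5)")
```

The final configuration is roughly a 2.4x improvement over the starting point.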

Andy Jassy Is Rewriting Amazon’s Playbook for the AI Age

Reddit r/ArtificialInteligence · 2h ago Cached

Five years into his tenure as Amazon CEO, Andy Jassy is aggressively investing in AI infrastructure, committing billions to partnerships with OpenAI and Anthropic while cutting costs and pleasing Wall Street, steering the company through what he calls its greatest challenge yet.

Slash's AI Banker Can Now Move Money Without You. What Could Go Wrong?

Reddit r/ArtificialInteligence · 1h ago Cached

Slash Financial launches Twin, an AI agent that autonomously initiates payments from business accounts, raising liability and data control concerns as agentic commerce advances.

I built a new type of AI tool; it generates 3D objects composed of their constituent parts (instead of the monolithic solid blobs all 3D AI generators produce).

Reddit r/ArtificialInteligence · 1h ago

A new AI tool produces 3D objects by generating code, yielding objects with separate, functional parts rather than monolithic blobs. It is free and open-source on GitHub.

Four student-founded AI companies win Cornell Tech Startup Awards

Reddit r/ArtificialInteligence · 2h ago Cached

Four student-founded AI startups won $100,000 investments at the Cornell Tech Startup Awards, addressing AI exam fraud, financial AI safety, medical device regulation, and automated contract reasoning.

Greg Brockman Officially Takes Control of OpenAI’s Products in Latest Shake-Up

Reddit r/artificial · 1h ago Cached

OpenAI reorganizes, making cofounder Greg Brockman permanent head of product strategy and merging ChatGPT, Codex, and its API into a unified product team, as part of a broader leadership shake-up ahead of a potential IPO.

Gemma4 26b MoE running in MLX with turboquant (and custom kernel)

Reddit r/LocalLLaMA · 2h ago

A developer ran Gemma4 26b MoE on an Apple MacBook Air M5 using MLX with turboquant and a custom kernel, achieving faster prompt processing and generation than llama.cpp with lower memory usage. The post includes instructions for local deployment.

Dynamically allocating compute budget to a hard set of problems and evolving the sections with Qwen-35B-A3B gets you near GPT-5.4-xHigh on HLE

Reddit r/LocalLLaMA · 58m ago

A method that dynamically allocates compute budget to hard problems using Qwen-35B-A3B achieves performance near GPT-5.4-xHigh on the HLE benchmark.
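The linked post gives few implementation details, but the general pattern — spend a cheap first pass on every problem, then concentrate the remaining sample budget on problems where the first pass disagrees with itself — can be sketched as follows (illustrative code, not the poster's):

```python
# Hedged sketch of dynamic compute allocation: cheap first pass everywhere,
# then route the leftover budget toward problems with low self-agreement.
# Not the poster's actual method.

from collections import Counter

def allocate_budget(first_pass_answers, total_budget):
    """first_pass_answers: {problem_id: [answers from k cheap samples]}.
    Returns extra samples per problem, favoring low-agreement problems."""
    spent = sum(len(a) for a in first_pass_answers.values())
    remaining = max(total_budget - spent, 0)
    # Disagreement = 1 - share of samples matching the modal answer.
    disagreement = {}
    for pid, answers in first_pass_answers.items():
        top = Counter(answers).most_common(1)[0][1]
        disagreement[pid] = 1.0 - top / len(answers)
    total = sum(disagreement.values())
    if total == 0:
        return {pid: 0 for pid in first_pass_answers}  # everything unanimous
    return {pid: round(remaining * d / total)
            for pid, d in disagreement.items()}

first_pass = {
    "easy":   ["42", "42", "42"],  # unanimous -> no extra budget
    "hard":   ["7", "9", "7"],     # split -> gets extra samples
    "harder": ["1", "2", "3"],     # fully split -> gets the most
}
print(allocate_budget(first_pass, total_budget=30))
```

The benchmark claim in the post is about where this helps: uniform sampling wastes budget on problems that are already solved, while hard problems are exactly where extra samples change the answer.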

Orthrus-Qwen3-8B : up to 7.8×tokens/forward on Qwen3-8B, frozen backbone, provably identical output distribution

Reddit r/LocalLLaMA · 2h ago

Introduces Orthrus, a method that injects a trainable diffusion attention module into a frozen autoregressive transformer to achieve up to 7.8× tokens per forward pass and ~6× wall-clock speedup on MATH-500, with provably identical output distribution to the base Qwen3-8B model. The approach requires minimal additional parameters and training, and avoids the TTFT penalty of external drafters.

Git Is Not Fine

Lobsters Hottest · 2h ago

The article critiques Git, arguing that it is not as fine as commonly perceived, and links to a discussion on Lobste.rs.
