macbook-pro

Tag

Cards List
#macbook-pro

@rohanpaul_ai: Qwen 3.6 27B on a MacBook Pro M5 Max 64GB hitting 34 tokens per sec, locally with atomic[.]chat 90% acceptance rate, i.e…

X AI KOLs Following · yesterday

Qwen 3.6 27B achieves 34 tokens/sec on a MacBook Pro M5 Max 64GB locally with 90% draft acceptance, enabled by TurboQuant, GGUF, and llama.cpp, showcasing a major advancement in laptop-based AI inference.
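The 90% figure refers to the draft acceptance rate in speculative decoding, where a small draft model proposes tokens that the large model verifies in a batch. As a minimal sketch of why a high acceptance rate matters, the standard expected-tokens-per-verification formula can be computed as below; the per-token acceptance rate and draft length are illustrative assumptions, since the tweet states neither.

```python
def expected_tokens_per_step(alpha: float, k: int) -> float:
    """Expected tokens accepted per target-model verification pass
    in speculative decoding with per-token acceptance rate `alpha`
    and draft length `k`: (1 - alpha**(k+1)) / (1 - alpha)."""
    if alpha >= 1.0:
        return float(k + 1)
    return (1 - alpha ** (k + 1)) / (1 - alpha)

# With a 90% acceptance rate and a hypothetical 4-token draft,
# each verification pass yields about 4.1 tokens on average,
# instead of 1 token per pass without speculation:
print(round(expected_tokens_per_step(0.9, 4), 2))
```

The higher the acceptance rate, the closer throughput gets to the draft length plus one per verification pass, which is how a 27B model can sustain tokens/sec well beyond what one full forward pass per token would allow.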

#macbook-pro

@Daniel_Farinax: Qwen3.6-27B on MacBook Pro M5 128GB. Third version of the game. This time a low-poly GTA, built overnight using a custo…

X AI KOLs Timeline · 2d ago

Daniel Farinax demonstrates running Qwen3.6-27B on a MacBook Pro M5 128GB, using a custom Rust CLI (MPTLX) to build a low-poly GTA game overnight, and claims performance comparable to Claude 4.6 while running entirely locally.

#macbook-pro

@antirez: Announcing with gratitude that @audreyt just gifted me an M5 Max 128GB MacBook Pro! It will let me develop DwarfStar4 (…

X AI KOLs Timeline · 2d ago

antirez announces receiving an M5 Max 128GB MacBook Pro from audreyt to develop DwarfStar4 and experiment with distributed inference across M3 Max and M5 Max hardware.

#macbook-pro

Localmaxxing (3 minute read)

TLDR AI · 3d ago

The article analyzes the viability of running AI inference locally on a MacBook Pro, comparing a local Qwen 35B model against the cloud-based Claude Opus 4.5. It concludes that local models are 2x faster for routine tasks, making them a practical choice for half of daily workloads despite a slight capability gap.

#macbook-pro

@remilouf: Following @julien_c’s tweet I bought a MacBook Pro with 128GB unified memory, and started running Qwen3.6 as my daily dr…

X AI KOLs Following · 4d ago

The author shares their experience running the Qwen3.6 model on a MacBook Pro with 128GB of unified memory, praising Apple's hardware efficiency for local AI inference.

#macbook-pro

@PandaTalk8: These test results are stunning. The original poster tested the DS4 inference engine written in C by @antirez, and local deployment seems incredibly fast. The good news is that only 128GB of RAM is needed to run a local model equivalent to GPT-4o. The bad news is that you need a MacBook Pro with 128GB of RAM.

X AI KOLs Timeline · 5d ago

This post reports on tests of the DS4 inference engine, written in C by @antirez, noting its impressive speed when running a GPT-4o-equivalent model on a MacBook Pro with 128GB of RAM.
