@Merocle: M5 Max cluster: 72 CPU and 128 GPU cores, 512GB unified RAM. Each MacBook is connected to all the others with Thunderbolt…
Summary
A user showcases a DIY cluster of M5 Max MacBooks interconnected via Thunderbolt 5 (120 Gbit/s), noting that despite the fast node-to-node links, access to the cluster itself still goes over Wi-Fi.
Cached at: 05/12/26, 08:52 AM
M5 Max cluster: 72 CPU and 128 GPU cores, 512GB unified RAM. Each MacBook is connected to all the others with Thunderbolt 5 (120 Gbit/s). But I’ll have to use Wi-Fi to connect to the cluster https://t.co/Y6JLmceurP
Similar Articles
2x 512GB RAM M3 Ultra Mac Studios
A user shares their $25k hardware setup of two 512GB-RAM M3 Ultra Mac Studios for running large language models locally. They have tested DeepSeek V3 Q8 and GLM 5.1 Q4 via the exo distributed inference backend and are awaiting MLX optimization for Kimi 2.6.
@alexocheema: Running Qwen3.6 35B (vision) on 2 x M5 Max MacBook Pro with RDMA over Thunderbolt 5. It describes the image and identif…
A demo shows the Qwen3.6 35B vision model running across two M5 Max MacBook Pros connected via RDMA over Thunderbolt 5, achieving near-instant responses thanks to prefix caching. The model correctly identifies Apple Park but misidentifies a person in the image.
@antirez: Announcing with gratitude that @audreyt just gifted me an M5 Max 128GB MacBook Pro! It will let me develop DwarfStar4 (…
antirez announces receiving an M5 Max 128GB MacBook Pro as a gift from audreyt, which he will use to develop DwarfStar4 and experiment with distributed inference across M3 Max and M5 Max hardware.
@Prince_Canuma: My home compute for MLX and research: • M3 Ultra — 512GB (sponsored by community + @wai_protocol) • RTX PRO 6000 — 96GB…
A researcher shares their home compute setup for MLX and AI research: an M3 Ultra with 512GB (community-sponsored), an RTX PRO 6000 with 96GB, and an M3 Max with 96GB used for model porting and stress testing.
@zcbenz: MLX's implementation of RDMA (Remote Direct Memory Access) over Thunderbolt on macOS, can now be used as an independent…
MLX's RDMA-over-Thunderbolt implementation for macOS is now available as a standalone library, enabling high-speed Mac clusters for local AI workloads.