The author ran 55 inference benchmark runs across Strix Halo, RTX 3090, and RTX 5070 with multiple backends, revealing that memory bandwidth dominates decode speed, the RTX 5070 beats the 3090 on small models, and reasoning models appear ~5x slower due to hidden reasoning content.
A comparison of the DGX Spark vs. the Mac Studio M5 Max for running local LLMs, covering decode speed, prefill performance, RAM, power consumption, and cost. The Mac wins on decode bandwidth, but the DGX is faster at prefill and supports batching.
A user seeks recommendations on choosing between AMD Strix Halo and Nvidia DGX Spark hardware for a local, network-accessible LLM server.
The author asks about the viability of AMD's ROCm ecosystem for AI training as of mid-2026, comparing it to NVIDIA's CUDA and whether it has reached a "just works" stage for PyTorch.