A user reports that their Asus Ascent with an Nvidia GB10 (DGX) runs LLMs such as Gemma 4 31B slower than their Ryzen AI Max, despite an expected 2-4x speedup, and shares their llama.cpp configuration for debugging.
A user shares a hands-on comparison of running Gemma 4 with LiteRT-LM on mobile devices versus their previous llama.cpp setup, noting significantly lower memory usage (1.5-2 GB vs 4-5 GB) and faster inference (2-4 seconds vs 7-10 seconds) on smartphones such as the Samsung S25 Ultra and iPhone 13 Pro Max.
An AI coding contest compares Claude and Gemini on a weighted knight's tour variant in which each move's cost depends on the load accumulated from previously visited squares.
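The summary above does not spell out the exact cost rule, so here is a minimal sketch of one plausible reading: the knight "picks up" a load value at each square it leaves, and every move costs the total load carried so far. The board layout, the `load` grid, and the `tour_cost` helper are all illustrative assumptions, not the contest's actual specification.

```python
# Illustrative cost model for a load-weighted knight's tour (assumed rule,
# not the contest's official one): each move costs the sum of loads picked
# up at all squares visited so far.

KNIGHT_STEPS = {(1, 2), (2, 1)}  # |dr|, |dc| pairs of a legal knight move


def tour_cost(tour, load):
    """Total cost of `tour` (a list of (row, col) squares) over a `load` grid.

    The knight picks up load[r][c] when leaving square (r, c); each move
    then costs the load carried up to that point.
    """
    total = 0
    carried = 0
    for (r0, c0), (r1, c1) in zip(tour, tour[1:]):
        if (abs(r1 - r0), abs(c1 - c0)) not in KNIGHT_STEPS:
            raise ValueError(f"illegal knight move {(r0, c0)} -> {(r1, c1)}")
        carried += load[r0][c0]  # pick up the departing square's load
        total += carried         # this move costs everything carried so far
    return total


# Tiny example: uniform load of 1 on a 3x3 board, two legal knight moves.
load = [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
tour = [(0, 0), (1, 2), (2, 0)]
print(tour_cost(tour, load))  # first move carries 1, second carries 2 -> 3
```

Under this rule, visit order matters even on a uniform board: squares left early contribute their load to every subsequent move, which is what makes the optimization harder than a plain shortest knight's tour.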