Sipeed's K3 RISC-V SBCs can run 30B-parameter LLMs: 60 TOPS (INT4) NPU with BF16/FP16/INT4 support

Reddit r/LocalLLaMA Products

Summary

Sipeed's new K3 RISC-V single-board computers feature 32GB LPDDR5 and a 60 TOPS NPU, enabling local inference of large language models at up to 15 tokens per second.

[Original Article](https://wccftech.com/sipeed-crams-32gb-lpddr5-60-tops-npu-compact-risc-v-board-hits-15-tokens-s-ai-llms/)
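
The headline rate is easiest to sanity-check with a memory-bandwidth roofline: during decoding, a dense model reads roughly its full weight set once per generated token, so tokens/sec is approximately bandwidth divided by model size. A minimal sketch in Python; the model size, bandwidth values, and target rate below are illustrative assumptions, not published K3 specs:

```python
# Back-of-envelope roofline for memory-bandwidth-bound LLM decoding.
# All numbers below are illustrative assumptions, not board specs.

def decode_tokens_per_sec(model_bytes: float, mem_bw_gbps: float) -> float:
    """Dense decoding reads ~the full weight set per token,
    so decode speed ~= memory bandwidth / model size."""
    return mem_bw_gbps * 1e9 / model_bytes

# Assumed: dense 30B model at INT4 ~= 30e9 params * 0.5 bytes (~15 GB).
model_bytes = 30e9 * 0.5

# Assumed effective LPDDR5 bandwidths (depends on bus width and speed).
for bw in (50, 100, 200):
    print(f"{bw} GB/s -> ~{decode_tokens_per_sec(model_bytes, bw):.1f} tok/s")

# Conversely, bandwidth needed to hit a target decode rate:
target = 15  # tok/s, the figure quoted in the article
print(f"~{target * model_bytes / 1e9:.0f} GB/s needed for {target} tok/s dense")
```

MoE models read only their active experts per token, which is one way boards with modest bandwidth can reach higher decode rates than the dense estimate suggests.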

Similar Articles

Running Qwen3.6 35b a3b on 8gb vram and 32gb ram ~190k context

Reddit r/LocalLLaMA

The author shares a local inference configuration for running Qwen3.6 35B A3B on limited hardware (8GB VRAM, 32GB RAM) using a modified llama.cpp build with TurboQuant support, achieving roughly 37-51 tokens per second at ~190k context.
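
For context, a stock-llama.cpp analogue of this kind of limited-VRAM setup looks like the sketch below, using the llama-cpp-python bindings. The TurboQuant-modified build the post describes is not reproduced here, and the model filename and tuning values are assumptions:

```python
# Minimal sketch of partial GPU offload with llama-cpp-python
# (stock llama.cpp bindings, not the modified build from the post).
from llama_cpp import Llama

llm = Llama(
    model_path="qwen-moe-a3b.Q4_K_M.gguf",  # hypothetical filename
    n_ctx=32768,       # large contexts need KV-cache memory; ~190k needs far more
    n_gpu_layers=20,   # partial offload so weights + KV fit within 8 GB VRAM
    n_threads=8,       # CPU threads for the layers left in system RAM
    flash_attn=True,   # reduces KV-cache memory traffic where supported
)

out = llm("Explain mixture-of-experts inference in one paragraph.",
          max_tokens=128)
print(out["choices"][0]["text"])
```

For MoE models like the A3B variants, recent llama.cpp builds can also keep attention on the GPU while routing expert FFN weights to system RAM (the `--override-tensor` CLI option), which is typically how 30B-class MoE models run alongside an 8GB card.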