llama-server

#llama-server

Llama-Studio, WebUI for llama-server Management

Reddit r/LocalLLaMA · 20h ago

Llama-Studio is a WebUI for managing llama-server sessions, allowing configuration, monitoring, and control of multiple instances for local development and experimentation.


@ggerganov: llama-server -hf ggml-org/Qwen3.6-27B-GGUF --spec-default

X AI KOLs Following · 2026-04-22

Georgi Gerganov shared a one-liner to launch the quantized 27B Qwen3.6 model with llama-server using default speculative-decoding settings.
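Once a one-liner like the one above is running, llama-server exposes an OpenAI-compatible HTTP API (by default on port 8080), so any OpenAI-style client can talk to it. A minimal sketch of building a chat-completion request for such a local server — the endpoint path and payload shape follow the OpenAI convention; the default port and the prompt are assumptions, not part of the post:

```python
import json

# Assumed default endpoint for a locally running llama-server.
SERVER_URL = "http://localhost:8080/v1/chat/completions"

def build_chat_request(prompt: str, max_tokens: int = 256) -> dict:
    """Build a minimal OpenAI-style chat-completion payload for a local llama-server."""
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,  # cap on generated tokens for this request
        "temperature": 0.7,
    }

payload = build_chat_request("Explain speculative decoding in one sentence.")
body = json.dumps(payload)  # ready to POST to SERVER_URL with any HTTP client
```

With the server up, `body` can be POSTed to `SERVER_URL` with `urllib.request` or `requests`; the response follows the same OpenAI chat-completion schema.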


Qwen3.6 35B MoE on 8GB VRAM — working llama-server config + a max_tokens / thinking trap I ran into

Reddit r/LocalLLaMA · 2026-04-21

Author shares a working llama-server config that runs the 35B-MoE Qwen3.6 model on an 8GB RTX 4060, and flags a max_tokens trap: unconstrained internal reasoning can exhaust the token budget before any visible output appears. The fix is to cap reasoning with a per-request thinking_budget_tokens.
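The trap and fix described above can be sketched as a request payload. `thinking_budget_tokens` is the parameter the post names; whether llama-server accepts it at the top level of the request body (rather than nested) is an assumption here, as are the prompt and the specific numbers:

```python
import json

def build_request(prompt: str, max_tokens: int, thinking_budget: int) -> dict:
    """Chat-completion payload that caps internal reasoning separately.

    The trap: on a "thinking" model, internal reasoning tokens count against
    max_tokens, so a modest max_tokens can be consumed entirely by reasoning,
    yielding an empty visible answer. Capping reasoning leaves room for output.
    """
    # Guard against the trap: reasoning budget must leave room for the answer.
    assert thinking_budget < max_tokens, "leave tokens for the visible answer"
    return {
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": max_tokens,
        "thinking_budget_tokens": thinking_budget,  # parameter name from the post
    }

req = build_request("Summarize this log.", max_tokens=512, thinking_budget=256)
print(json.dumps(req, indent=2))
```

Without the cap, a request with `max_tokens=512` could spend all 512 tokens on hidden reasoning; with it, at most 256 go to reasoning and the rest to the answer.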
