Tag: ggml

Cards List
#ggml

ggml-cpu: Optimized x86 and generic cpu q1_0 dot (follow up) by pl752 · Pull Request #21636 · ggml-org/llama.cpp

Reddit r/LocalLLaMA · 2026-04-21

Follow-up pull request adding optimized dot-product kernels for the q1_0 quantization type to ggml-cpu, covering both x86-specific and generic CPU paths, to speed up quantized LLM inference on CPU.
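For context, block-quantized dot products in ggml-style CPU kernels generally multiply low-bit integer weights against activations block by block and apply a per-block scale. The sketch below only illustrates that general pattern; the block size, struct layout, and names are assumptions for illustration and do not reflect the actual q1_0 format or the optimized kernels in the PR.

```c
/* Illustrative sketch of a generic block-quantized dot product in the
 * style of ggml CPU kernels. QK, block_q, and vec_dot_q are assumed
 * names, not the real q1_0 layout or the PR's kernel. */
#include <stdint.h>
#include <stddef.h>

#define QK 32  /* assumed number of elements per quantization block */

typedef struct {
    float  d;       /* per-block scale factor */
    int8_t qs[QK];  /* quantized weight values */
} block_q;

/* Dot product of n quantized weights x with float activations y. */
static float vec_dot_q(size_t n, const block_q *x, const float *y) {
    float sum = 0.0f;
    for (size_t i = 0; i < n / QK; ++i) {
        float partial = 0.0f;
        for (int j = 0; j < QK; ++j) {
            partial += (float)x[i].qs[j] * y[i * QK + j];
        }
        sum += x[i].d * partial;  /* apply the per-block scale once */
    }
    return sum;
}
```

Optimized kernels like the ones in the PR typically replace the inner loop with SIMD intrinsics on x86 while keeping a scalar fallback for generic CPUs.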

#ggml

GGML and llama.cpp join HF to ensure the long-term progress of Local AI

Hugging Face Blog · 2026-02-20

GGML and llama.cpp have joined Hugging Face to ensure long-term sustainability of local AI development. Georgi Gerganov's team will maintain full autonomy over the projects while receiving resources to scale community support and improve integration between llama.cpp inference and transformers model definitions.
