Jiunsong/supergemma4-26b-uncensored-gguf-v2


Summary

SuperGemma4-26B-Uncensored-Fast GGUF v2 is a quantized, locally-runnable variant of Google's Gemma-4-26B model optimized for Apple Silicon, offering faster inference speeds and less-censored chat behavior while maintaining practical performance on general tasks.

Task: text-generation
Tags: gguf, gemma4, uncensored, fast, llama.cpp, apple-silicon, conversational, korean, coding, tool-use, text-generation, en, ko, base_model:google/gemma-4-26B-A4B-it, base_model:quantized:google/gemma-4-26B-A4B-it, license:gemma, endpoints_compatible, region:us

Source: https://huggingface.co/Jiunsong/supergemma4-26b-uncensored-gguf-v2

SuperGemma4-26B-Uncensored-Fast GGUF v2

The fast, uncensored llama.cpp build of the strongest SuperGemma text line.

This release is for people who want three things together:

  • a model that feels less censored than stock chat releases
  • a model that is more capable than the raw base on practical text workloads
  • a compact local GGUF that still serves quickly on Apple Silicon

Why this build

  • Uncensored chat behavior without forcing every prompt into coding mode
  • Tuned from the strongest Fast line instead of the raw base
  • Neutral chat template baked into the GGUF to reduce prompt-routing bugs (see the inspection sketch after this list)
  • Verified on Apple Silicon with clean general-chat and coding responses
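
For reference, the embedded template can be inspected along these lines. This is a minimal sketch, assuming the gguf Python package (llama.cpp's gguf-py) and its reader's parts/data layout for string fields; the file name matches the Included file section below:

```python
# Sketch: read the chat template embedded in the GGUF metadata.
# Assumes `pip install gguf`; the standard metadata key is
# "tokenizer.chat_template".
from gguf import GGUFReader

reader = GGUFReader("supergemma4-26b-uncensored-fast-v2-Q4_K_M.gguf")

field = reader.fields.get("tokenizer.chat_template")
if field is not None:
    # String values are stored as raw bytes; field.data indexes into field.parts.
    template = bytes(field.parts[field.data[0]]).decode("utf-8")
    print(template)
else:
    print("no embedded chat template found")
```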

Headline numbers

  • Base model: google/gemma-4-26B-A4B-it
  • Format: GGUF Q4_K_M
  • General Korean prompt speed: 222.0 tok/s
  • Generation speed: 89.4 tok/s
  • Derived from the verified SuperGemma Fast MLX line

Why this build is appealing

  • Carries the stronger Fast weights instead of the plain stock base
  • Keeps general chat natural instead of routing everything into coding mode
  • Preserves the uncensored release identity while staying useful on normal prompts
  • Gives you a practical llama.cpp deployment target without losing the personality of the tuned line

Why it is better than stock

  • Inherits the Fast line improvements over the original local baseline:
      - Quick bench overall: 95.8 vs 91.4
      - Faster average generation on the MLX reference run: 46.2 tok/s vs 42.5 tok/s
      - Higher scores in code, logic, browser workflows, and Korean
  • Ships with a neutral embedded template to avoid the older routing bug where simple questions drifted into coding/tool-call behavior

Included file

  • supergemma4-26b-uncensored-fast-v2-Q4_K_M.gguf
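
A minimal loading sketch, assuming the huggingface_hub and llama-cpp-python packages; the repo and file names come from this card, while n_ctx and n_gpu_layers are illustrative defaults rather than recommended settings:

```python
# Sketch: fetch the single GGUF and load it with llama-cpp-python.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="Jiunsong/supergemma4-26b-uncensored-gguf-v2",
    filename="supergemma4-26b-uncensored-fast-v2-Q4_K_M.gguf",
)

llm = Llama(
    model_path=model_path,
    n_ctx=4096,       # illustrative context length; adjust to taste
    n_gpu_layers=-1,  # offload all layers to Metal on Apple Silicon
)
```

Running the file directly with llama.cpp's own binaries works the same way; the Python bindings are used here only to keep the examples in one language.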

Quick local checks

Tested on Apple M4 Max with llama.cpp:

  • General Korean prompt: 봄에 먹기 좋은 한식 반찬 5개 추천 ("Recommend 5 Korean side dishes good to eat in spring")
      - Prompt speed: 222.0 tok/s
      - Generation speed: 89.4 tok/s
      - Output stayed in normal Korean assistant mode
  • Code prompt: 파이썬으로 피보나치 함수를 짧게 작성해줘 ("Write a short Fibonacci function in Python")
      - Prompt speed: 704.9 tok/s
      - Generation speed: 89.4 tok/s
      - Output returned concise Python code correctly
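
The general-chat check can be reproduced along these lines. This is a minimal sketch assuming llama-cpp-python, which picks up the GGUF's embedded chat template so no manual prompt formatting is needed; n_ctx and max_tokens are illustrative:

```python
# Sketch: reproduce the general Korean chat check with llama-cpp-python.
from llama_cpp import Llama

llm = Llama(
    model_path="supergemma4-26b-uncensored-fast-v2-Q4_K_M.gguf",
    n_ctx=4096,
    n_gpu_layers=-1,  # full Metal offload on Apple Silicon
)

response = llm.create_chat_completion(
    messages=[
        # The Korean test prompt from this section:
        # "Recommend 5 Korean side dishes good to eat in spring"
        {"role": "user", "content": "봄에 먹기 좋은 한식 반찬 5개 추천"},
    ],
    max_tokens=512,
)
print(response["choices"][0]["message"]["content"])
```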

Notes

  • This GGUF is exported from the supergemma4-26b-uncensored-fast-v2 MLX line.
  • Gemma 4 MoE expert tensors were converted with a patched local converter so GGUF export works correctly.
  • A neutral template is embedded to avoid the old issue where general prompts were pushed into coding/tool-call behavior.

Similar Articles

Jiunsong/supergemma4-26b-uncensored-mlx-4bit-v2

Hugging Face Models Trending

SuperGemma4-26B-Uncensored-MLX-4bit-v2 is a fine-tuned and quantized variant of Google's Gemma 4 26B optimized for Apple Silicon, offering improved performance on code, reasoning, and tool-use tasks while maintaining faster inference speeds compared to the stock baseline.

unsloth/gemma-4-26B-A4B-it-GGUF

Hugging Face Models Trending

Unsloth releases GGUF-quantized versions of Google DeepMind's Gemma 4 26B A4B instruction-tuned model, enabling efficient local inference with support for tool-calling and fine-tuning via Unsloth Studio. Gemma 4 is a multimodal MoE model with a 256K context window, supporting text, image, video, and audio inputs.

HauhauCS/Gemma-4-E4B-Uncensored-HauhauCS-Aggressive

Hugging Face Models Trending

HauhauCS releases an uncensored variant of Google's Gemma-4-E4B model with aggressive safety removal, featuring custom K_P quantizations optimized for quality preservation and broader hardware compatibility.

Gemma 4 26B-A4B GGUF Benchmarks

Reddit r/LocalLLaMA

Unsloth has released KL Divergence benchmarks for Gemma 4 26B-A4B GGUF quantizations, showing Unsloth GGUFs topping 21 of 22 sizes on the Pareto frontier. They also introduced a new UD-IQ4_NL_XL quant fitting in 16GB VRAM and updated Q6_K and MLX quants for both Gemma 4 and Qwen3.6.

4GB "Gemini Nano" model GGUF anyone?

Reddit r/LocalLLaMA

A user inquires about the specific identity of a ~4GB AI model (likely Gemini Nano) silently downloaded by Chrome for on-device features, and requests a GGUF version for local execution via llama.cpp.