Jackrong/Qwen3.5-9B-DeepSeek-V4-Flash-GGUF

Summary

This entry describes Qwen3.5-9B-DeepSeek-V4-Flash, a distilled AI model that transfers reasoning capabilities from DeepSeek-V4 into a smaller 9B parameter space for efficient inference.

Task: image-text-to-text Tags: transformers, gguf, text-generation-inference, unsloth, qwen3_5, reasoning, distillation, deepseek, deepseek-v4, sft, long-cot, chain-of-thought, efficient-inference, agent, multilingual, image-text-to-text, en, zh, ko, ja, es, ru, dataset:Jackrong/DeepSeek-V4-Distill-8000x, arxiv:2604.06628, base_model:unsloth/Qwen3.5-9B, base_model:quantized:unsloth/Qwen3.5-9B, license:apache-2.0, endpoints_compatible, region:us, conversational

Source: https://huggingface.co/Jackrong/Qwen3.5-9B-DeepSeek-V4-Flash-GGUF

🌟 Qwen3.5-9B-DeepSeek-V4-Flash

💡 Model Overview & Design


Qwen3.5-9B-DeepSeek-V4-Flash is an efficient reasoning model distilled using high-quality data from DeepSeek-V4.

  • By leveraging the dataset Jackrong/DeepSeek-V4-Distill-8000x, this model transfers the advanced structured reasoning and multi-step problem-solving capabilities of the DeepSeek-V4 architecture into the highly efficient Qwen3.5-9B parameter space.
  • The model was trained in an Unsloth environment, prioritizing stable gradient propagation and rigorous data curation so that the distillation process avoids merely learning “hollow chain-of-thought” and instead captures genuine logical generalization.

Designed for:

  • 🧩 Structured Reasoning: Inheriting DeepSeek-V4’s deep logic capabilities.
  • Flash Inference: Maintaining the token efficiency and speed of the 9B parameter size.
  • 🔧 Tool-augmented Workflows: Reliable agentic action generation.

🍎 About the Teacher Model: DeepSeek-V4

(Figure: DeepSeek-V4 performance benchmarks)

DeepSeek-V4 is the latest flagship open-source model series from DeepSeek, engineered for extreme efficiency, million-token (1M) long context, and advanced agentic workflows. As the source for this distillation, DeepSeek-V4 provides the high-fidelity reasoning signals necessary to push a 9B model beyond its architectural limits.

Key Technical Strengths of the Teacher Model:

  • 🏆 World-Class Reasoning & Coding: DeepSeek-V4 demonstrates elite performance in mathematics (MATH-500), STEM subjects, and real-world software engineering (SWE-bench). Its “Think” modes provide the sophisticated Long-CoT (Chain-of-Thought) traces that define this model’s logic.
  • 🧠 Architectural Innovation:
      - Hybrid Attention & DSA: Token-level compression and DeepSeek Sparse Attention reduce KV cache memory overhead by up to 90%, allowing highly efficient long-context processing (see the toy sketch after this list).
      - Engram Memory & mHC: Manifold-constrained Hyper-connections decouple factual knowledge retrieval from dynamic logical reasoning, ensuring exceptional stability and generalization.
  • 🤖 Agent-Centric Design: Specifically optimized for multi-step tool calling and complex environment interaction, ensuring that the distilled knowledge includes reliable “how-to-act” procedures, not just “how-to-talk.”
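
To make the sparsity claim concrete, here is a minimal NumPy sketch of the general top-k idea behind KV-sparse attention. It is an illustration only: the function name, shapes, and k_keep value are invented for this example, and DeepSeek’s actual DSA kernel is considerably more sophisticated.

import numpy as np

def sparse_attention(q, K, V, k_keep=4):
    # Toy token-level sparse attention: score all cached keys, keep only
    # the top-k, and attend over that subset. An illustration of the
    # general idea behind KV sparsity, not DeepSeek's actual DSA kernel.
    scores = K @ q / np.sqrt(q.shape[-1])   # similarity of query to each key
    top = np.argsort(scores)[-k_keep:]      # indices of the strongest keys
    w = np.exp(scores[top] - scores[top].max())
    w /= w.sum()                            # softmax over the kept subset only
    return w @ V[top]                       # weighted sum of the kept values

# 64 cached tokens, but only 4 participate in attention: most of the
# KV entries are never read for this query.
rng = np.random.default_rng(0)
K, V = rng.normal(size=(64, 32)), rng.normal(size=(64, 32))
out = sparse_attention(rng.normal(size=32), K, V)
print(out.shape)  # (32,)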

By distilling from DeepSeek-V4-Flash, we have successfully mapped the high-density logic of a trillion-parameter class model onto the agile, high-speed Qwen3.5-9B framework.


🤝 Collaboration & Training Details

This model is the result of a close collaboration with hardware engineer Kyle Hessling. He generously provided the crucial compute equipment and managed both the rigorous post-training testing and continuous server maintenance. I want to express my gratitude to Kyle for his invaluable support! You can find him on X/Twitter here: @KyleHessling1

Training Infrastructure & Configuration:

  • 🖥️ Hardware: NVIDIA DGX
  • 💾 Training Data: DeepSeek-V4-Distill-8000x
  • 🧪 Training Method: Distillation (a minimal sketch of such a run follows this list)
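
As a rough illustration of what a distillation-style SFT run looks like in Unsloth, here is a minimal sketch. The LoRA rank, all hyperparameters, and the assumed "text" dataset column are illustrative guesses, not the exact recipe used for this model.

# Minimal Unsloth SFT sketch for reasoning distillation. Hyperparameters
# and the "text" column name are illustrative, not the actual recipe.
from unsloth import FastLanguageModel
from datasets import load_dataset
from trl import SFTTrainer
from transformers import TrainingArguments

model, tokenizer = FastLanguageModel.from_pretrained(
    model_name="unsloth/Qwen3.5-9B",  # base model listed on this card
    max_seq_length=8192,              # long-CoT traces need generous context
    load_in_4bit=True,
)
model = FastLanguageModel.get_peft_model(
    model,
    r=16, lora_alpha=16,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

dataset = load_dataset("Jackrong/DeepSeek-V4-Distill-8000x", split="train")

trainer = SFTTrainer(
    model=model,
    tokenizer=tokenizer,
    train_dataset=dataset,
    dataset_text_field="text",        # assumed column with chat-formatted traces
    max_seq_length=8192,
    args=TrainingArguments(
        per_device_train_batch_size=2,
        gradient_accumulation_steps=8,
        learning_rate=2e-5,           # conservative LR for stable gradients
        num_train_epochs=2,
        output_dir="outputs",
    ),
)
trainer.train()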

🎯 Motivation & Distillation Insights

  • 🧠 Latent Knowledge Activation: DeepSeek-V4’s reasoning traces help the Qwen3.5-9B model activate its existing latent knowledge more effectively.
  • 🏗️ Learning Procedures: The model learns actual problem-solving procedures, not just the output format.
  • 🚀 Efficiency: The 8000x dataset provides a dense signal, allowing the 9B model to converge on reasoning tasks much faster than traditional large-scale SFT.

📊 Evaluation

This is an early controlled Q5_K_M comparison between Jackrong/Qwen3.5-9B-DeepSeek-V4-Flash and the official Qwen3.5-9B base model. This evaluation was completed by Kyle Hessling, who ran the same evaluation suite twice under the same local inference conditions: once on the DeepSeek-V4 distill model and once on the official Qwen3.5-9B base model. A sketch of this A/B setup appears after the report contents below.

Evaluation Report (figures): Comparison Method · Agentic Reasoning Results · Front-end Design Results · Tool Calling Results · Evaluation Setup
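
For readers who want to reproduce a comparison like this locally, here is a minimal llama-cpp-python harness that runs an identical prompt set, with identical sampling settings and seed, against both GGUF files. The file names and prompts are placeholders, not the actual evaluation suite used above.

# Minimal A/B harness: same prompts, same sampling settings, two GGUF models.
# File names and prompts are placeholders, not the actual suite used above.
from llama_cpp import Llama

MODELS = {
    "distill": "Qwen3.5-9B-DeepSeek-V4-Flash.Q5_K_M.gguf",  # assumed file name
    "base":    "Qwen3.5-9B.Q5_K_M.gguf",                    # assumed file name
}
PROMPTS = [
    "Plan the steps to refactor a circular import in a Python package.",
    "Write a tool call to fetch the weather for Tokyo, then summarize it.",
]

for name, path in MODELS.items():
    # Fixed seed and identical settings keep the comparison controlled.
    llm = Llama(model_path=path, n_ctx=8192, seed=0, verbose=False)
    for prompt in PROMPTS:
        out = llm.create_chat_completion(
            messages=[{"role": "user", "content": prompt}],
            temperature=0.7, top_p=0.95, max_tokens=512,
        )
        print(f"[{name}] {prompt[:40]}...")
        print(out["choices"][0]["message"]["content"][:200], "\n")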


🔬 Supporting Evidence

Recent work and empirical tests support this distillation approach:

Ren et al., 2026 — Rethinking Generalization in Reasoning SFT (arXiv:2604.06628)

The paper suggests that generalization in reasoning SFT is conditional. Key takeaways:

  • High-quality long-CoT data from DeepSeek-V4 enables cross-domain transfer.
  • Optimization Discipline: Short, highly curated distillation (8000 examples) prevents the model from overfitting to the teacher’s stylistic quirks while preserving the core reasoning engine.

🛠️ Best Practices

For optimal performance, we recommend the following generation parameters:

  • temperature = 0.7 to 1.0 (use a lower temperature for strict coding tasks, a higher one for creative reasoning)
  • top_p = 0.95

When interacting with the model, using a structured prompt template or standard ChatML format will yield the best reasoning results.
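
As a quick illustration, here is how these settings might be applied with llama-cpp-python, which applies the chat template embedded in the GGUF (ChatML-style for Qwen models) automatically. The GGUF file name is a placeholder.

# Apply the recommended sampling settings via llama-cpp-python.
# The chat template embedded in the GGUF handles ChatML formatting.
from llama_cpp import Llama

llm = Llama(
    model_path="Qwen3.5-9B-DeepSeek-V4-Flash.Q5_K_M.gguf",  # assumed file name
    n_ctx=8192,
)
response = llm.create_chat_completion(
    messages=[
        {"role": "system", "content": "You are a careful step-by-step reasoner."},
        {"role": "user", "content": "If a train travels 180 km in 1.5 hours, "
                                    "what is its average speed?"},
    ],
    temperature=0.7,   # lower end of the recommended range for a math task
    top_p=0.95,
    max_tokens=1024,
)
print(response["choices"][0]["message"]["content"])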


📚 Resources & Guides

👉 GitHub Repository: Jackrong-llm-finetuning-guide. Visit the repository to dive into the codebase and reproduce the results locally or on Colab.

📥 Core Technical Document

🔗 Complete Fine-Tuning Guide (PDF)

A Note: My goal isn’t just to detail a workflow, but to demystify LLM training. Beyond the social media hype, fine-tuning isn’t an unattainable ritual—often, all you need is a Google account, a standard laptop, and relentless curiosity. All training and testing for this project were self-funded. If you find this model or guide helpful, a Star ⭐️ on GitHub would be the greatest encouragement. Thank you! 🙏


⚠️ Limitations

  • Parameter Constraints: While enhanced by DeepSeek-V4 distillation, the model is still bound by the 9B parameter limits and may struggle with extremely obscure knowledge.
  • Over-reasoning: On very simple queries, the model might still attempt to produce a lengthy reasoning chain due to the SFT bias.
  • Safety Trade-offs: Asymmetric gains mean that while reasoning improves, certain alignment-sensitive behaviors might regress.

🙏 Acknowledgements

Special thanks to:

  • DeepSeek Team for the foundational advancements in the V4 architecture.
  • Unsloth for efficient fine-tuning frameworks.
  • Open-source datasets and community contributors.
  • Researchers exploring reasoning SFT and distillation.

📖 Citation

@misc{jackrong_qwen35_9b_deepseek_v4_flash,
  title        = {Qwen3.5-9B-DeepSeek-V4-Flash},
  author       = {Jackrong},
  year         = {2026},
  publisher    = {Hugging Face}
}
