@AdinaYakup: Ovis2.6-80B-A3B > new MoE multimodal LLM from Alibaba's AIDC team 80B/3B active Apache2.0 64K context / 2880×2880 image…
Summary
Alibaba's AIDC team has released Ovis2.6-80B-A3B, an Apache 2.0 licensed Mixture of Experts multimodal LLM featuring 80B total parameters with 3B active, 64K context length, and native support for 2880×2880 images with Chain-of-Thought visual reasoning.
Cached at: 05/12/26, 12:53 PM
Ovis2.6-80B-A3B > new MoE multimodal LLM from Alibaba’s AIDC team
✨ 80B/3B active ✨ Apache2.0 ✨ 64K context / 2880×2880 image resolution ✨ “Think with Image” : active visual reasoning in CoT https://t.co/08FpVf0aDd
Similar Articles
AIDC-AI/Ovis2.6-80B-A3B · Hugging Face
Ovis2.6-80B-A3B is a new Multimodal Large Language Model released by AIDC-AI, featuring a Mixture-of-Experts architecture with 80B total parameters but only 3B active during inference. It offers enhanced long-context processing, high-resolution understanding, and active visual reasoning capabilities.
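The 80B-total / 3B-active split is the hallmark of Mixture-of-Experts routing: each token is dispatched to only a few experts, so most parameters sit idle on any given forward pass. A minimal sketch of top-k gating follows (toy sizes and a generic router; Ovis2.6's actual expert counts, gating function, and layer design are not described here and everything below is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions -- illustrative only, not Ovis2.6's real configuration.
d_model, n_experts, top_k = 16, 8, 2

# Each "expert" is a small two-layer MLP.
experts = [(rng.standard_normal((d_model, 4 * d_model)) * 0.02,
            rng.standard_normal((4 * d_model, d_model)) * 0.02)
           for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts)) * 0.02

def moe_forward(x):
    """Route one token vector through only its top-k experts."""
    logits = x @ router
    top = np.argsort(logits)[-top_k:]            # indices of the k best experts
    weights = np.exp(logits[top])
    weights /= weights.sum()                     # softmax over selected experts only
    out = np.zeros_like(x)
    for w, i in zip(weights, top):
        w1, w2 = experts[i]
        out += w * (np.maximum(x @ w1, 0) @ w2)  # ReLU MLP expert
    return out, top

token = rng.standard_normal(d_model)
y, used = moe_forward(token)

# Only top_k of n_experts fire per token, so active expert parameters
# scale with top_k / n_experts of the total.
active_frac = top_k / n_experts
```

With 2 of 8 experts active, only a quarter of the expert parameters participate per token; the same mechanism, at much larger scale, is how an 80B-parameter model can run with roughly 3B active.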
@AdinaYakup: MOSS-VL Vision model from @Open_MOSS Model: https://huggingface.co/collections/OpenMOSS-Team/moss-vl… Demo: https://hug…
Open_MOSS released MOSS-VL, an 11B-parameter, Apache 2.0 licensed vision-language model built on cross-attention and XRoPE; it outperforms Qwen3-VL-8B by 8.3 points on VSI-Bench.
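Cross-attention here means text tokens act as queries over image-patch keys and values, rather than concatenating image tokens into the text sequence. A minimal single-head sketch (generic design with toy sizes; MOSS-VL's actual head layout and its XRoPE positional scheme are not detailed in this summary and are not modeled below):

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text, image, wq, wk, wv):
    """Text tokens (queries) attend over image patches (keys/values)."""
    q = text @ wq                          # (n_text, d)
    k = image @ wk                         # (n_patch, d)
    v = image @ wv                         # (n_patch, d)
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores) @ v             # (n_text, d)

d = 32
text = rng.standard_normal((5, d))         # 5 text-token embeddings
image = rng.standard_normal((49, d))       # 7x7 grid of image-patch embeddings
wq, wk, wv = (rng.standard_normal((d, d)) * 0.05 for _ in range(3))
out = cross_attention(text, image, wq, wk, wv)
```

The design choice matters for cost: with cross-attention, sequence length (and hence self-attention cost) stays at the text length, and image patches are consulted only through these attention layers.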
@techNmak: A lightweight VLM that beats the giants at OCR. (1.7B parameters, SOTA on OmniDocBench) dots.ocr is a new multilingual…
dots.ocr is a new lightweight 1.7B parameter multilingual vision-language model that achieves state-of-the-art performance on OmniDocBench, outperforming much larger models (72B+) at document parsing and OCR tasks.
@DivyanshT91162: Local LLMs just hit a whole new level. This Hugging Face release is actually insane: "gpt-oss-20b-tq3" An official 20B+ …
A new 20B+ parameter MoE model from OpenAI, quantized to 3-bit via TurboQuant and optimized with MLX, allows for high-performance local LLM inference on standard 16GB MacBooks.
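The reason 3-bit quantization makes a 20B model fit on a 16GB machine is simple arithmetic: weight storage shrinks from 16 bits to roughly 3 bits per parameter. The sketch below shows a naive symmetric per-block 3-bit scheme and the memory math; this is not TurboQuant itself (its actual algorithm is not described here), just an illustration of the principle:

```python
import numpy as np

rng = np.random.default_rng(2)

def quantize_3bit(w, block=64):
    """Naive symmetric per-block 3-bit quantization (illustrative, not TurboQuant)."""
    w = w.reshape(-1, block)
    scale = np.abs(w).max(axis=1, keepdims=True) / 3.0   # map each block to ints in [-3, 3]
    scale[scale == 0] = 1.0
    q = np.clip(np.round(w / scale), -3, 3).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from integers and per-block scales."""
    return (q * scale).reshape(-1)

w = rng.standard_normal(1024).astype(np.float32)
q, s = quantize_3bit(w)
w_hat = dequantize(q, s)
err = np.abs(w - w_hat).max()   # bounded by half a quantization step per block

# Rough memory math for a 20B-parameter model at 3 bits per weight:
gb = 20e9 * 3 / 8 / 1e9         # 7.5 GB of weights, before scales and runtime overhead
```

At roughly 7.5 GB for the weights (plus per-block scales, the KV cache, and activations), a 20B+ model lands comfortably under a 16GB memory budget, which is what makes MacBook-class local inference plausible.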
@AdinaYakup: Intern S2 preview A scientific multimodal model from Shanghai AI Lab @intern_lm 35B matches their own 1T model on scien…
Shanghai AI Lab has released Intern S2, a 35B scientific multimodal model that matches their own 1T-parameter model on science benchmarks and introduces Task Scaling as a new scaling dimension. Licensed under Apache 2.0.