@AdinaYakup: Intern S2 preview A scientific multimodal model from Shanghai AI Lab @intern_lm 35B matches their own 1T model on scien…
Summary
Shanghai AI Lab releases Intern S2, a 35B scientific multimodal model that matches the lab's own 1T-parameter model on science benchmarks and introduces Task Scaling as a new scaling dimension. Licensed under Apache 2.0.
Similar Articles
internlm/Intern-S2-Preview · Hugging Face
InternLM releases Intern-S2-Preview, a 35B scientific multimodal foundation model that achieves performance comparable to trillion-scale models on professional scientific tasks through task scaling and a full-chain training pipeline.
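For readers who want to try the checkpoint, here is a minimal loading sketch. The repository name comes from the Hugging Face link above, but the exact interface (AutoProcessor plus trust_remote_code, as with prior InternLM multimodal releases) is an assumption; check the model card for the canonical usage.

```python
# Minimal sketch: load the Intern-S2-Preview checkpoint from the Hugging Face Hub.
# Assumption: the repo exposes a standard transformers interface via
# trust_remote_code, like earlier InternLM releases; consult the model card
# for the exact processor and chat API before relying on this.
from transformers import AutoProcessor, AutoModelForCausalLM

model_id = "internlm/Intern-S2-Preview"

processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    trust_remote_code=True,
    torch_dtype="auto",   # use the precision stored in the checkpoint
    device_map="auto",    # spread the 35B weights across available GPUs (requires accelerate)
)
```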
@AnandButani: ml-intern by @huggingface is wild You drop a high-level prompt (“build the best scientific reasoning model” or “crush h…
Hugging Face’s open-source "ml-intern" agent automates the full post-training pipeline—from literature review and data cleaning to model tuning—given only a high-level prompt.
ml-intern
Hugging Face launches ml-intern, an AI agent that automates post-training tasks in machine-learning workflows.
@AdinaYakup: Ovis2.6-80B-A3B > new MoE multimodal LLM from Alibaba's AIDC team 80B/3B active Apache2.0 64K context / 2880×2880 image…
Alibaba's AIDC team has released Ovis2.6-80B-A3B, an Apache 2.0-licensed Mixture-of-Experts multimodal LLM with 80B total parameters (3B active), a 64K context length, native support for 2880×2880 images, and Chain-of-Thought visual reasoning.
@cmpatino_: I’ve been using ml-intern for a while, and it genuinely changed my workflow. It's super good at: - Model/Dataset discov…
A developer praises the ml-intern tool for streamlining model/dataset discovery, post-training iteration, and data workflows.