nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16


Summary

NVIDIA releases Nemotron 3 Nano Omni, a 30B parameter multimodal model capable of processing video, audio, image, and text with integrated reasoning capabilities for enterprise workflows.

Task: any-to-any Tags: transformers, safetensors, NemotronH_Nano_Omni_Reasoning_V3, feature-extraction, nvidia, pytorch, multimodal, any-to-any, custom_code, dataset:nvidia/Nemotron-Image-Training-v3, arxiv:2604.24954, license:other, deploy:azure, region:us

Source: https://huggingface.co/nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16

At a Glance

| | |
|---|---|
| Total parameters | 31B (Mamba2-Transformer hybrid MoE) |
| Active parameters | ~3B per token |
| Max context | 256k tokens |
| Modalities (in) | Video, Audio, Image, Text |
| Modality (out) | Text |
| Reasoning mode | On by default; toggle via `enable_thinking` |
| Best for | Video+speech analysis, document intelligence (OCR/charts/long docs), GUI/agentic workflows, ASR |
| Minimum GPU (BF16) | 1× H100 80GB (single-GPU); 1× B200 / 1× H200 recommended |
| Minimum GPU (FP8) | 1× L40S 48GB; 1× RTX Pro 6000 / 1× B200 recommended |
| Minimum GPU (NVFP4) | 1× RTX 5090 32GB; 1× DGX Spark / 1× Jetson Thor also supported |
| Precisions | BF16 (62 GB) · FP8 (33 GB) · NVFP4 (21 GB) |

Quick Start Guide

Model Parameters

| Mode | temperature | top_p | top_k | max_tokens | reasoning_budget | grace_period |
|---|---|---|---|---|---|---|
| Thinking mode | 0.6 | 0.95 | — | 20480 | 16384 | 1024 |
| Instruct mode | 0.2 | — | 1 | 1024 | — | — |

Model Overview

Description:

NVIDIA Nemotron 3 Nano Omni is a multimodal large language model that unifies video, audio, image, and text understanding to support enterprise-grade Q&A, summarization, transcription, and document intelligence workflows. It extends the Nemotron Nano family with integrated video+speech comprehension, Graphical User Interface (GUI), Optical Character Recognition (OCR), and speech transcription capabilities, enabling end-to-end processing of rich enterprise content such as meeting recordings, M&E assets, training videos, and complex business documents. NVIDIA Nemotron 3 Nano Omni was developed by NVIDIA as part of the Nemotron model family.

This model is available for commercial use.

This model was improved using Qwen3-VL-30B-A3B-Instruct, Qwen3.5-122B-A10B, Qwen3.5-397B-A17B, Qwen2.5-VL-72B-Instruct, and gpt-oss-120b. For more information, please see the Training Dataset section below.

License/Terms of Use

Governing Terms: Use of this model is governed by the NVIDIA Open Model Agreement.

Deployment Geography:

Global

Use Case:

This model is designed for enterprise customers requiring multimodal understanding capabilities. Expected users include:

  • Customer service applications (e.g., Doordash video of drop-off at a given address via OCR, drive-thru order verification)
  • Media and Entertainment (M&E) — video and speech analysis, dense captions, video search and summarization
  • Document intelligence for AI assistants (contracts, SOW/MSA, scientific discovery, financial documents)
  • GUI automation for AI agentic applications (incident management, agentic search, browser agents, email agents)

Release Date:

  • Build.Nvidia.com: 04/28/2026
  • Hugging Face: 04/28/2026
  • NGC: 04/28/2026

Model Architecture:

**Architecture Type:** Mamba2-Transformer Hybrid Mixture of Experts (MoE)

Network Architecture:

**Number of model parameters:** 3.1 x 10^10 (31B A3B)

Input(s):

**Input Type(s):** Video, Audio, Image, Text

Input Format(s):

  • Video: mp4, up to 2 minutes. For 1080p videos, sample up to 1 FPS / 128 frames. For lower-resolution videos such as 720p, higher temporal sampling such as 2 FPS / 256 frames may be used.
  • Audio: wav, mp3 files (up to 1 hour), 8kHz and higher sampling rates
  • Image: Red, Green, Blue (RGB) (jpeg, png)
  • Text: String

Input Parameters:

  • Video: Three-Dimensional (3D)
  • Audio: One-Dimensional (1D)
  • Image: Two-Dimensional (2D)
  • Text: One-Dimensional (1D)

Other Properties Related to Input:

  • Maximum context length up to 256k tokens
  • Language support: English only

Output(s)

**Output Type(s):** Text

Output Format(s):

  • Text: String

Output Parameters:

  • Text: One-Dimensional (1D)

Other Properties Related to Output:

  • Maximum context length up to 256k tokens.
  • Supports JSON output format
  • Supports reasoning output with chain-of-thought
  • Supports tool calling
  • Supports word-level timestamps for transcription

Our AI models are designed and/or optimized to run on NVIDIA GPU-accelerated systems. By leveraging NVIDIA’s hardware (e.g. GPU cores) and software frameworks (e.g., CUDA libraries), the model achieves faster training and inference times compared to CPU-only solutions.

Software Integration:

Runtime Engine(s):

  • vLLM
  • NeMo
  • Megatron
  • NeMo-RL

Supported Hardware Microarchitecture Compatibility:

  • NVIDIA Ampere (A100 80GB SXM/NVLink)
  • NVIDIA Blackwell (B200 SXM/NVLink, RTX Pro 6000 SE, DGX Spark, Jetson Thor, RTX 5090)
  • NVIDIA Hopper (H100 SXM/NVLink, H200 SXM/NVLink)
  • NVIDIA Lovelace (L40S)

Preferred/Supported Operating System(s):

  • Linux

Inference Runtimes:

  • vLLM
  • TensorRT LLM
  • TensorRT Edge-LLM
  • llama.cpp
  • Ollama
  • SGLang

The integration of foundation and fine-tuned models into AI systems requires additional testing using use-case-specific data to ensure safe and effective deployment. Following the V-model methodology, iterative testing and validation at both unit and system levels are essential to mitigate risks, meet technical and functional requirements, and ensure compliance with safety and ethical standards before deployment.

This AI model can be embedded as an Application Programming Interface (API) call into the software environment described above.

Model Version(s):

Nemotron-3-Nano-Omni-30B-A3B-Reasoning

Download Model Weights

Install the HuggingFace CLI

pip install -U "huggingface_hub[hf_xet]"
 
# Log in once; the token is cached at ~/.cache/huggingface/token
hf auth login
 
# Sanity check: should print your username and orgs
hf auth whoami
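
With the CLI installed, the weights can be fetched to a local directory. A minimal sketch, assuming a recent `huggingface_hub` that provides the `hf download` command; the target directory name is illustrative:

```bash
# Download the BF16 weights (swap the repo suffix for -FP8 or -NVFP4 as needed)
hf download nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 \
  --local-dir nemotron-3-nano-omni-bf16
```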

vLLM

Required version: vLLM 0.20.0. This means one of these containers:

  • CUDA 13.0: `vllm/vllm-openai:v0.20.0`
  • CUDA 12.9: `vllm/vllm-openai:v0.20.0-cu129`

Container

docker pull vllm/vllm-openai:v0.20.0

**Audio support:** Within the vLLM container, before running `vllm serve`, if any audio will be used (including passing `use_audio_in_video: true`), install the audio extras: `python3 -m pip install "vllm[audio]"`

General Invocation (1×GPU, e.g. 1×B200)

# vllm serve nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 \
# vllm serve nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-FP8 \
vllm serve nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4 \
  --host 0.0.0.0 \
  --max-model-len 131072 \
  --tensor-parallel-size 1 \
  --trust-remote-code \
  --video-pruning-rate 0.5 \
  --max-num-seqs 384 \
  --allowed-local-media-path / \
  --media-io-kwargs '{"video": {"fps": 2, "num_frames": 256}}' \
  --reasoning-parser nemotron_v3 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder \
  --kv-cache-dtype fp8 # Omit this for BF16

Efficient Video Sampling: `--video-pruning-rate 0.5` drops 50% of redundant video tokens and halves video-prefill VRAM/TTFT.

Platform-Specific Notes

**RTX Pro:** Due to a current bug with FlashInfer + RTX Pro, append: `--moe-backend triton`

vLLM on DGX Spark (aarch64 / ARM64)

For everything not covered here (API examples, reasoning mode, video tuning), follow the general instructions.

1. Pull the container image

Use the upstream multi-arch vLLM v0.20.0 docker image. Docker will automatically pull the arm64 variant.

docker pull vllm/vllm-openai:v0.20.0

2. Launch the vLLM server on Spark
WEIGHTS=/path/to/nemotron-3-nano-omni-weights

# The image does not include audio packages so we need to install them with "pip install vllm[audio]" as done in the command below
docker run --rm -it \
  --gpus all \
  --ipc=host -p 8000:8000 \
  --shm-size=16g \
  --name vllm-nemotron-omni \
  -v "${WEIGHTS}:/model:ro" \
  --entrypoint /bin/bash \
  vllm/vllm-openai:v0.20.0 -c  \
  "pip install vllm[audio] && vllm serve /model \
  --served-model-name=nemotron_3_nano_omni \
  --max-num-seqs 8 \
  --max-model-len 131072 \
  --port 8000 \
  --trust-remote-code \
  --gpu-memory-utilization 0.8 \
  --limit-mm-per-prompt '{\"video\": 1, \"image\": 1, \"audio\": 1}' \
  --media-io-kwargs '{\"video\": {\"fps\": 2,  \"num_frames\": 256}}' \
  --allowed-local-media-path=/ \
  --enable-prefix-caching \
  --max-num-batched-tokens 32768 \
  --reasoning-parser nemotron_v3 \
  --enable-auto-tool-choice \
  --tool-call-parser qwen3_coder"

In another terminal, verify the server is ready:

curl -sS http://localhost:8000/v1/models | python3 -m json.tool

Key Spark-Specific Flags

| Flag | Purpose | Spark Guidance |
|---|---|---|
| `--gpus all` | Select GPU | Spark has one GB10 GPU; `all` is equivalent to `device=0` |
| `--max-model-len` | Max context window | Start at 131072; reduce if you hit OOM (see Memory Tuning below) |

Memory Tuning on Spark

Spark uses unified LPDDR5X memory (~128 GB shared between CPU and GPU), not separate system + VRAM pools. Two levers, in order of impact:

  1. **Lower `--gpu-memory-utilization`** from 0.85 → 0.70 to free ~19 GB back to the OS and re-enable weight prefetch. Cost: smaller KV cache budget.
  2. **Lower `--max-model-len`** to reduce KV cache allocation (e.g. halving the context window halves the KV cache at `--max-num-seqs=1`). Combined override:

--gpu-memory-utilization=0.70 \
  --max-model-len=32768 \

TensorRT-LLM

This model can also be deployed with TensorRT-LLM - see the relevant instructions here.

Platform-Specific Notes

TensorRT Edge-LLM

This model can also be deployed with TensorRT Edge-LLM on NVIDIA Jetson Thor - see the Jetson AI Lab model page and the TensorRT Edge-LLM Quick Start Guide.


SGLang

The BF16 variant of this model is supported on SGLang, with the following images:

`librosa` must be installed first: `pip install librosa --break-system-packages`

To serve: `sglang serve --model-path nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16 --trust-remote-code`

NVFP4 and FP8 support to come.

Platform-Specific Notes

SGLang on DGX Spark (aarch64 / ARM64)

For everything not covered here (API examples, reasoning mode, video tuning), follow the general instructions.

1. Pull the container image

Use the upstream multi-arch CUDA 13.0 docker image linked above. Docker will automatically pull the arm64 variant.

docker pull lmsysorg/sglang:dev-cu13-nemotronh-nano-omni-reasoning-v3

2. Launch the SGLang server on Spark
WEIGHTS=/path/to/nemotron-3-nano-omni-weights

# The image does not include audio packages so we need to install them with "pip install librosa" as done in the command below
docker run --gpus all -it --rm \
  -p 30000:30000 \
  -v "${WEIGHTS}:/model:ro" \
  --shm-size 16g \
  lmsysorg/sglang:dev-cu13-nemotronh-nano-omni-reasoning-v3 \
  bash -c "pip install librosa && python3 -m sglang.launch_server --model-path /model \
  --host 0.0.0.0 \
  --port 30000 \
  --trust-remote-code \
  --mem-fraction-static 0.8 \
  --max-running-requests 8 \
  --tool-call-parser qwen3_coder \
  --reasoning-parser nemotron_3"

In another terminal, verify the server is ready:

curl -sS http://localhost:30000/v1/models | python3 -m json.tool

Key Spark-Specific Flags

| Flag | Purpose | Spark Guidance |
|---|---|---|
| `--gpus all` | Select GPU | Spark has one GB10 GPU; `all` is equivalent to `device=0` |
| `--context-length` | Max context window | Start with default; reduce if you hit OOM (see Memory Tuning below) |

Memory Tuning on Spark

Spark uses unified LPDDR5X memory (~128 GB shared between CPU and GPU), not separate system + VRAM pools. Two levers, in order of impact:

  1. **Lower `--mem-fraction-static`** from 0.80 → 0.70 to free ~13 GB back to the OS and re-enable weight prefetch. Cost: smaller KV cache budget.
  2. **Lower `--context-length`** to reduce KV cache allocation (e.g. halving the context window halves the KV cache at `--max-running-requests=1`). Combined override:

--mem-fraction-static=0.70 \
  --context-length=32768 \

API Client (OpenAI-compatible)

from openai import OpenAI
client = OpenAI(base_url="http://localhost:8000/v1", api_key="")
MODEL = "nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4"

Image Example

import base64
 
def image_to_data_url(path: str) -> str:
    with open(path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode("utf-8")
    return f"data:image/jpeg;base64,{b64}"
 
image_url = image_to_data_url("media/example1a.jpeg")
 
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe this image in detail."},
                {"type": "image_url", "image_url": {"url": image_url}},
            ],
        }
    ],
    max_tokens=1024,
    temperature=0.2,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)

Audio Example

from pathlib import Path
 
audio_url = Path("media/2414-165385-0000.wav").resolve().as_uri()
 
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "audio_url", "audio_url": {"url": audio_url}},
                {"type": "text", "text": "Transcribe this audio."},
            ],
        }
    ],
    max_tokens=1024,
    temperature=0.2,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)

Video Example

from pathlib import Path
 
video_url = Path("media/demo.mp4").resolve().as_uri()
reasoning_budget = 16384
grace_period = 1024
 
response = client.chat.completions.create(
    model=MODEL,
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "video_url", "video_url": {"url": video_url}},
                {"type": "text", "text": "Describe this video."},
            ],
        }
    ],
    max_tokens=20480,
    temperature=0.6,
    top_p=0.95,
    extra_body={
        "thinking_token_budget": reasoning_budget + grace_period,
        "chat_template_kwargs": {
            "enable_thinking": True,
            "reasoning_budget": reasoning_budget,
        },
        "mm_processor_kwargs": {"use_audio_in_video": False},
    },
)
print(response.choices[0].message.content)

Text Example (curl)

curl -sS http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"model":"nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4","messages":[{"role":"user","content":"Hello, what can you do?"}],"temperature":0.2,"top_k":1}' \
  | python3 -c "import sys,json; print(json.load(sys.stdin)['choices'][0]['message']['content'])"
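
Tool Calling Example

The server invocation above enables automatic tool choice (`--enable-auto-tool-choice --tool-call-parser qwen3_coder`). Below is a minimal sketch of exercising tool calling through the OpenAI-compatible client defined earlier; the `get_weather` tool and its schema are hypothetical placeholders, not part of the model card:

```python
# Hypothetical tool schema for illustration; replace with your own functions.
tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city.",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

response = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "What's the weather in Santa Clara right now?"}],
    tools=tools,
    max_tokens=1024,
    temperature=0.2,
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
# When the model decides to call a tool, the structured call appears here
# instead of (or alongside) plain text content.
print(response.choices[0].message.tool_calls)
```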

PDF Example (page-by-page via Python)

The API accepts images, not raw PDF files. The script below renders each page to PNG and sends it as base64. Save it as **pdf_vlm_chat.py** and install dependencies: `pip install pymupdf pillow requests`.

```python
#!/usr/bin/env python3
"""Send PDF page(s) as images to a vLLM /v1/chat/completions endpoint."""
from __future__ import annotations

import argparse, base64, sys
from io import BytesIO
from pathlib import Path

import requests

try:
    import fitz  # PyMuPDF
    from PIL import Image
except ImportError:
    print("Install: pip install pymupdf pillow requests", file=sys.stderr)
    sys.exit(1)

USER_PROMPT = (
    "Summarize this PDF page: main topic, section headings, important facts "
    "or bullets, and a brief note on each figure or table. "
    "Do not invent text you cannot read."
)
API_URL = "http://localhost:8000/v1/chat/completions"
MODEL = "nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4"
MAX_TOKENS = 32000
DPI = 150

def page_to_b64(pdf_path: str, idx: int) -> str:
    """Render one PDF page to PNG and return it base64-encoded."""
    doc = fitz.open(pdf_path)
    z = DPI / 72.0
    pix = doc.load_page(idx).get_pixmap(matrix=fitz.Matrix(z, z))
    img = Image.frombytes("RGB", [pix.width, pix.height], pix.samples)
    doc.close()
    buf = BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("ascii")

def chat(url, model, b64, text, max_tokens):
    """Send one page image plus the prompt to the OpenAI-compatible endpoint."""
    r = requests.post(url, json={
        "model": model,
        "messages": [{"role": "user", "content": [
            {"type": "text", "text": text},
            {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{b64}"}},
        ]}],
        "max_tokens": max_tokens,
        "stream": False,
        "temperature": 0.2,
        "chat_template_kwargs": {"enable_thinking": False},
    }, timeout=120)
    r.raise_for_status()
    return r.json()["choices"][0]["message"]["content"]

def main():
    p = argparse.ArgumentParser()
    p.add_argument("pdf")
    p.add_argument("--page", type=int, default=0)
    p.add_argument("--all-pages", action="store_true")
    p.add_argument("-o", "--output")
    p.add_argument("--url", default=API_URL)
    p.add_argument("--model", default=MODEL)
    p.add_argument("--max-tokens", type=int, default=MAX_TOKENS)
    a = p.parse_args()

    doc = fitz.open(a.pdf); n = len(doc); doc.close()
    pages = range(n) if a.all_pages else [a.page]
    parts = [f"# Extracted: {Path(a.pdf).name}\n\n*Pages: {n}*\n"] if a.all_pages else []

    for i in pages:
        print(f"Page {i+1}/{n} ...", file=sys.stderr)
        b64 = page_to_b64(a.pdf, i)
        text = chat(a.url, a.model, b64, f"Page {i+1}.\n\n{USER_PROMPT}", a.max_tokens)
        parts.append(f"\n---\n\n## Page {i+1}\n\n{text.strip()}\n" if a.all_pages else text.strip())

    out = "\n".join(parts)
    if a.output:
        Path(a.output).write_text(out + "\n", encoding="utf-8")
    else:
        print(out)

if __name__ == "__main__":
    main()
```


**Single page:**

python3 pdf_vlm_chat.py /path/to/your_document.pdf --page 0


**All pages to markdown:**

python3 pdf_vlm_chat.py /path/to/your_document.pdf --all-pages -o extracted.md


Edit `USER_PROMPT` in the script for different tasks (detailed extraction, table parsing, etc.).

---

Reasoning Mode (`enable_thinking`)

| Setting | Behavior |
|---|---|
| Default (omitted) | Reasoning is **on**. The model emits chain-of-thought before the final answer, visible in `content`. |
| `"chat_template_kwargs": {"enable_thinking": false}` | Reasoning is **off**. Only the final answer appears in `content`. |

To disable reasoning on a request, add to the JSON body:

"chat_template_kwargs": {"enable_thinking": false}
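
For example, a complete request with reasoning disabled could look like the following sketch, which mirrors the Text Example above (the prompt is illustrative):

```bash
curl -sS http://localhost:8000/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "model": "nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4",
        "messages": [{"role": "user", "content": "Summarize what this model can do in two sentences."}],
        "temperature": 0.2,
        "top_k": 1,
        "chat_template_kwargs": {"enable_thinking": false}
      }'
```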


In the Python heredoc pattern, use `False` (Python boolean), not `false` (invalid Python).

We recommend thinking mode for tasks that involve reasoning and complex understanding. For video, audio, and omni use cases, try both enabling and disabling thinking for best results.

---

**Advanced: Budget-Controlled Reasoning**
from typing import Any, Dict, List

from openai import OpenAI
from transformers import AutoTokenizer

class ThinkingBudgetClient:
    def __init__(self, base_url: str, api_key: str, tokenizer_name_or_path: str):
        self.tokenizer = AutoTokenizer.from_pretrained(
            tokenizer_name_or_path, trust_remote_code=True
        )
        self.client = OpenAI(base_url=base_url, api_key=api_key)

    def chat_completion(
        self,
        model: str,
        messages: List[Dict[str, Any]],
        reasoning_budget: int = 512,
        max_tokens: int = 1024,
        **kwargs,
    ) -> Dict[str, Any]:
        assert max_tokens > reasoning_budget, (
            f"reasoning_budget must be less than max_tokens. "
            f"Got {max_tokens=} and {reasoning_budget=}"
        )

        # Step 1: generate only the reasoning trace up to the requested budget.
        response = self.client.chat.completions.create(
            model=model,
            messages=messages,
            max_tokens=reasoning_budget,
            extra_body={
                "top_k": 1,
                "chat_template_kwargs": {
                    "enable_thinking": True,
                },
            },
            **kwargs,
        )
        reasoning_content = response.choices[0].message.content or ""
        if "</think>" not in reasoning_content:
            print("No </think> found in reasoning content")
            reasoning_content = f"{reasoning_content}</think>\n\n"

        reasoning_tokens_len = len(
            self.tokenizer.encode(reasoning_content, add_special_tokens=False)
        )
        remaining_tokens = max_tokens - reasoning_tokens_len
        assert remaining_tokens > 0, (
            f"No tokens remaining for response ({remaining_tokens=}). "
            "Increase max_tokens or lower reasoning_budget."
        )

        # Step 2: continue from the closed reasoning trace and ask for the final answer.
        continued_messages = messages + [
            {"role": "assistant", "content": reasoning_content}
        ]
        prompt = self.tokenizer.apply_chat_template(
            continued_messages,
            tokenize=False,
            continue_final_message=True,
        )
        response = self.client.completions.create(
            model=model,
            prompt=prompt,
            max_tokens=remaining_tokens,
            extra_body={"top_k": 1},
            **kwargs,
        )

        return {
            "reasoning_content": reasoning_content.strip(),
            "content": response.choices[0].text,
            "finish_reason": response.choices[0].finish_reason,
        }
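
A minimal usage sketch for the helper above, assuming the vLLM server from the General Invocation is running locally (the question text is illustrative):

```python
client = ThinkingBudgetClient(
    base_url="http://localhost:8000/v1",
    api_key="",
    tokenizer_name_or_path="nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-BF16",
)
result = client.chat_completion(
    model="nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4",
    messages=[{"role": "user", "content": "A train travels 180 km in 2.5 hours. What is its average speed?"}],
    reasoning_budget=512,   # tokens allowed for the thinking trace
    max_tokens=1024,        # hard cap for thinking + final answer
    temperature=0.6,
    top_p=0.95,
)
print(result["reasoning_content"])
print(result["content"])
```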

Video Tuning

Frame sampling (`--media-io-kwargs`)

Without explicit settings, vLLM may default to ~32 frames per video regardless of length. Always set `--media-io-kwargs` at server launch (already included in the General Invocation above):

--media-io-kwargs '{"video": {"fps": 2, "num_frames": 256}}'

Recommended `num_frames` ranges (at `fps=2`):

| GPU memory | Recommended `num_frames` range |
|---|---|
| 80 GB (A100/H100) | 128–512 |
| ≤40 GB | 64–256 |

Higher values improve temporal coverage but increase VRAM and prefill time. Start at the low end of the range and increase as your workload and latency budget allow.


Notes

  1. **Reasoning default:** Reasoning is on by default. If you omit `chat_template_kwargs`, the model will produce chain-of-thought traces in `content`. This is appropriate for text and image inputs.
  2. **Video frame sampling:** The default (~32 frames) is too conservative for most real videos. Set `--media-io-kwargs` at server launch.
  3. **PDF input format:** The API does not accept raw PDF uploads. Render pages to PNG and send as base64 (see the PDF Example above).
  4. **`max_tokens` vs `--max-model-len`:** `max_tokens` in the request caps only the completion (generated output). It cannot exceed the server's `--max-model-len`, which is the hard ceiling for prompt + completion combined. Increase the server flag if you need longer outputs.

Jetson Deployment

For Jetson deployments, vLLM, SGLang, Ollama, llama.cpp, and TensorRT Edge-LLM are supported inference frameworks; see the Jetson AI Lab model page for more details.

TensorRT Edge-LLM support is only for Jetson Thor; TensorRT-LLM is not supported on Jetson.


Training, Testing, and Evaluation Datasets:

Dataset Overview

**Total Size:** 354,587,705 data points (~717.0B tokens)
**Total Number of Datasets:** 1395 dataset entries

**Dataset partition:** Training [100%], Testing [N/A — evaluation benchmarks used separately], Validation [N/A — evaluation benchmarks used separately]
**Time period for training data collection:** 2019–2025
**Time period for testing data collection:** N/A (standard public benchmarks)
**Time period for validation data collection:** N/A (standard public benchmarks)

Dataset Description

Nemotron-Omni extends our commitment from text to multimodal, delivering the same level of openness across text, audio, image, and video.

**Adapter and encoder training scale:** ~127B tokens across mixed modalities spanning text+image, text+video, text+audio, and text+video+audio, reflecting real-world, contextualized interactions versus single-modality data.

**Post-training for real-world tasks:** ~124M curated examples across multimodal combinations (text+audio, text+image, text+video, and text+video+audio), structured to support document reasoning, computer use, and long-horizon workflows.

**RL environments for agent training:** 20 RL datasets across 25 environments covering 5 new multimodal tasks (visual grounding, chart and document understanding, vision-critical STEM problems, video understanding, and automatic speech recognition), extending Nemotron's RL pipeline beyond text into vision and audio.

Modality Breakdown:

| Modality | Dataset Entries | Samples | Est. Tokens (M) |
|---|---|---|---|
| text+audio | 220 | 259,178,821 | 143,533.1 |
| text+image | 750 | 70,143,901 | 180,347.1 |
| text+video | 241 | 15,837,673 | 239,631.5 |
| text+video+audio | 155 | 8,720,044 | 152,499.2 |
| text | 12 | 707,187 | 958.4 |
| **Total** | **1395** | **354,587,705** | **716,969.2** |

Training data for Nemotron-Omni was assembled from a diverse collection of audio, image, video, and text datasets. Raw datasets were first converted into a standardized JSONL format with unified conversation-turn structure. Audio data was resampled to 16 kHz where needed. Image and video datasets were paired with question-answer annotations, often regenerated or refined using large vision-language models to improve quality and consistency. Quality filtering was applied using model-based judges to remove low-quality, unsafe, or off-topic samples. Deduplication and CSAM scanning were performed across all image datasets. Data was then packed into fixed-length sequences (32k, 128k, or 256k tokens) for efficient training.

Multiple safety measures were implemented throughout the data pipeline. All image/text datasets underwent CSAM (Child Sexual Abuse Material) scanning, with results tracked per dataset. Content safety filtering was applied using two independent safety judge models to flag and remove samples containing harmful content including weapons references, criminal planning, sexual content involving minors, harassment, hate speech, profanity, threats, violence, or suicide-related content. Synthetic data generation pipelines included explicit quality and safety filtering stages. Identity-fix processing was applied to correct potential biases in generated responses. The multi-stage pipeline (original → cleaned → clean+safe → clean+safe+holdout) ensured progressive refinement, with each stage removing additional problematic content.

We built on the base model, applying additional training, enhancements, and optimizations on top of it.

Public Datasets

| Dataset | Samples | % of Public | Tokens (M) | Modality |
|---|---|---|---|---|
| MiraData | 28,252,307 | 55.53% | 14,181.3 | text+audio+video |
| laion-disco-12M | 7,507,574 | 14.7% | 22,691.0 | text+audio |
| YouTube Video | 2,057,000 | 4.0% | 15,390 | text+video |
| YouTube Video and Audio | 1,164,000 | 2.2% | 18,730 | text+video+audio |

Private Datasets

| Dataset | Samples | % of Private | Tokens (M) | Modality |
|---|---|---|---|---|
| Granary | 23,370,274 | 8.0% | 1,471.7 | text+audio |
| SIFT-50M | 22,837,500 | 7.8% | 5,241.7 | text+audio |

Self-Sourced Synthetic Data

  • Overall Size: 41,502,625 samples across modalities: text+audio, text+image, text+video
  • Description of synthetic data generation methods:

Synthetic data generation (SDG) was used to improve data quality, generate reasoning traces, re-label annotations, and augment existing datasets. Methods include: re-captioning images and audio using vision-language models, generating question-answer pairs from existing media, producing thinking/reasoning chains for complex tasks, paraphrasing prompts for diversity, and applying model-based quality filtering.

NVIDIA-Sourced Synthetic Datasets

| Dataset | Modality | Count | Models Used |
|---|---|---|---|
| GroundCUA | text+image | 2,797,851 | gpt-oss-120b, Qwen3-VL-30B-A3B-Instruct |
| OpenImages | text+image | 2,556,412 | Qwen3-VL-30B-A3B-Instruct |
| MMTrail | text+audio | 1,620,533 | Qwen3-omni-captioner, gpt-oss-120B |
| Localized Narratives | text+image | 1,511,812 | Qwen3-VL-30B-A3B-Instruct |
| ALLaVA | text+image | 1,414,130 | Qwen3-VL-30B-A3B-Instruct |
| VGG-Sound | text+audio | 1,371,167 | Qwen3-omni-captioner, gpt-oss-120B |
| PIXMO-CAP | text+image | 1,308,838 | Qwen3-VL-30B-A3B-Instruct |
| TTS-Synthesized Nemotron-Nano-3 SFT Data | text+audio | 1,226,784 | NVIDIA Magpie TTS |
| MINT-1T | text+image | 904,035 | Qwen3-VL-32B-Instruct, Gemini 3 Pro for filtering, Scene Text models (RTX) translate |
| ScaleCUA | text+image | 889,010 | Qwen3-VL-30B-A3B-Instruct |
| AgentNet | text+image | 878,986 | Kimi-K2.5 |
| Conceptual Captions 3M-30b | text+image | 867,065 | Qwen3-VL-30B-A3B-Thinking-FP8 |
| MetaMathQA | text+image | 860,656 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Mulberry-SFT COT | text+image | 566,982 | GLM-4.1V-9B-Thinking, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| CC for OCR | text+image | 522,595 | SwinDocSegmenter, DeepSeek OCR, Qwen3.5-122B-A10B, Qwen3-32B, Gemini 3 Flash Preview for filtering, GPT-4o mini for filtering & quality checks, Qwen3-VL-30B-A3B-Thinking-FP8, gpt-oss-120b |
| Charxiv-100K | text+image | 272,104 | Qwen3-VL-235B-A22B-Instruct, Qwen3-VL-235B-A22B-Thinking, GPT-4o for filtering, Qwen3.5-122B-A10B |
| SwinDocSegmenter | text+image | 207,200 | SwinDocSegmenter, DeepSeek OCR |
| CLEVR | text+image, text+video | 197,027 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| InternVL-Data | text+image | 185,395 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Flickr30k Entities | text+image | 154,760 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Metropolis and Lita | text+video | 150,434 | Qwen3.5-122B-A10B |
| TextCaps | text+image | 136,911 | Commercial VILA model, Qwen3-VL-30B-A3B-Instruct |
| Vision R1 Llava CoT | text+image | 126,024 | GLM-4.1V-9B-Thinking |
| HC-STVG | text+video | 124,902 | NVIDIA relabeled using Qwen model (Qwen2.5-VL-72B-Instruct) |
| nvPDFtex | text+image | 118,351 | gpt-oss-120b, Qwen3.5-122B-A10B |
| ChartQA | text+image | 111,602 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering, Qwen2-VL-72B (NV) |
| ECD-10k-Images | text+image | 110,697 | Qwen3.5-122B-A10B |
| SAMA-COCO | text+image | 102,965 | gpt-oss-120B |
| VisualWebInstruct | text+image | 97,746 | Earlier SDG, GLM-4.1V-9B-Thinking |
| Spatial | text+image | 95,532 | Microsoft Florence-2-large |
| DoubtNut | text+image | 94,919 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Cosmos Nemotron SFTv13.9 | text+image | 92,128 | Qwen3-VL-30B-A3B-Instruct, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| CrossTask | text+video | 76,495 | NVIDIA relabeled using Qwen model (Qwen2.5-VL-72B-Instruct) |
| RefCOCO | text+image | 69,850 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Mantis Instruct | text+image | 66,975 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Visual7W | text+image | 62,589 | Qwen3.5-122B-A10B |
| ScreenQA | text+image | 62,186 | Qwen3.5-122B-A10B |
| VQAV2 | text+image | 54,899 | Qwen3.5-122B-A10B |
| TallyQA | text+image | 50,073 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| KeenSight | text+image | 49,849 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| GQA | text+image | 42,182 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| AskFilo | text+image | 41,807 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Raven | text+image | 41,996 | gpt-oss-120b |
| DocVQA | text+image | 35,759 | Qwen3.5-122B-A10B |
| TextVQA | text+image | 34,602 | Commercial VILA model, Qwen3-VL-30B-A3B-Instruct |
| COCO | text+image | 32,111 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| PlotQA | text+image | 30,665 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Llava | text+video | 30,250 | Qwen3-Omni-30B-A3B-Instruct, Qwen3-VL-32B-Instruct |
| NVCLIP | text+image | 29,680 | Qwen2.5-72B-Instruct |
| Tapos | text+video | 29,250 | Qwen2.5-VL-72B-Instruct |
| Vedantu Chemistry | text+audio | 26,338 | NVIDIA Magpie TTS |
| NV-CC-Img-Text-Dataset | text+image | 24,998 | Qwen3-VL-30B-A3B-Instruct |
| DocLayNet | text+image | 22,709 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering, gpt-oss-120b |
| Taloka Grounding | text+image | 22,218 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Wikipedia OCR | text+image | 21,440 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| InternVL2.5 | text+image | 20,770 | Qwen3-VL-235B-A22B-Instruct, Qwen3-VL-235B-A22B-Thinking, GPT-4o for filtering, Qwen3.5-122B-A10B |
| PromptPG | text+image | 20,305 | Qwen2-VL-72B |
| PubTables | text+image | 20,174 | gpt-oss-120b |
| InfoVQA | text+image | 18,679 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Azure Tables | text+image | 18,188 | gpt-oss-120b, Qwen3.5-122B-A10B |
| TabRecSet | text+image | 17,437 | GPT-4o mini, Qwen3-VL-30B-A3B-Thinking-FP8, gpt-oss-120b, Qwen3.5-122B-A10B |
| CD Questions | text+audio, text+image | 16,335 | NVIDIA Magpie TTS, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Linguistic Data Consortium | text+image | 15,499 | Qwen3.5-122B-A10B, GPT-4o mini, Qwen3-VL-30B-A3B-Thinking-FP8, gpt-oss-120b, Ask Kateryna |
| MapQA | text+image | 12,480 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| SlideVQA | text+image | 11,199 | Qwen3.5-122B-A10B |
| OCR Reason Finance | text+image | 9,389 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| GeomVerse | text+image | 9,298 | GLM-4.1V-9B-Thinking |
| NextQA | text+video | 8,903 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| UniGeo | text+image | 8,822 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| Vedantu | text+audio, text+image | 8,750 | NVIDIA Magpie TTS, Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| GPQA | text+audio | 7,657 | NVIDIA Magpie TTS |
| SLAKE | text+image | 7,294 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| OpenGVLab | text+image | 7,269 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering, Qwen3-VL-235B-A22B-Instruct, Qwen3-VL-235B-A22B-Thinking, GPT-4o for filtering |
| PerceptionTest | text+video | 5,192 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| InvoicesQA | text+image | 4,817 | Qwen3.5-122B-A10B |
| EgoProcel | text+video | 4,660 | Qwen2.5-VL-72B-Instruct |
| SynthTabNet | text+image | 4,364 | gpt-oss-120b |
| SerpAPI | text+image | 3,784 | Qwen3.5-122B-A10B, Gemini 3 Flash Preview for filtering |
| FinTabNet | text+image | 3,852 | gpt-oss-120b |
| FastMath | text+image | 3,718 | Qwen3-VL-235B-A22B-Instruct-FP8 |
| ASR Data Derived Speech-to-Text Chat Data | text+audio | 3,608 | GPT-OSS 120B |
| Geometry3k | text+image | 2,078 | Qwen3-VL-235B-A22B-Thinking-FP8 |
| VQA-RAD | text+image | 1,270 | Qwen3.5-122B-A10B |
| RQA | text+audio | 959 | NVIDIA Magpie TTS |
| HierText OCRQA Qwen | text+image | 514 | Qwen2.5-VL-32B-Instruct |

Training Dataset:

Data Modality

  • Audio
  • Image
  • Text
  • Video

Audio Training Data Size

  • 10,000 to 1 Million Hours (267,898,865 audio-containing samples)

Image Training Data Size

  • 1 Million to 1 Billion Images (70,143,901 image-containing samples)

Text Training Data Size

  • 1 Billion to 10 Trillion Tokens (~717.0B tokens total across all modalities)

Video Training Data Size

  • 10,000 to 1 Million Hours (24,557,717 video-containing samples)

Data Collection Method by dataset

  • Hybrid: Human, Automated, Synthetic

Labeling Method by dataset

  • Hybrid: Human, Automated, Synthetic

**Properties (Quantity, Dataset Descriptions, Sensor(s)):** 354,587,705 total data items across 1395 datasets. The training data spans five modality combinations: text+audio (259,178,821 samples), text+image (70,143,901 samples), text+video (15,837,673 samples), text+video+audio (8,720,044 samples), and text-only (707,187 samples). Content includes publicly available academic datasets, licensed third-party data, NVIDIA-internal collections, and synthetically generated annotations. The data is primarily in English. No sensor-derived data was used.

Evaluation Dataset:

Benchmark Scores:

| Task | Multimodal Benchmarks | Nemotron 3 Nano Omni | Nemotron Nano VL V2 | % Improvement |
|---|---|---|---|---|
| Grounding | CVBench2D | 83.95 | 78.3 | 6.73 |
| Document | OCRBenchV2 (EN) | 67.04 | 54.8 | 18.26 |
| Computer Use | OSWorld | 47.4 | 11.1 | 76.58 |
| Chart Reasoning | Charxiv Reasoning | 63.6 | 41.3 | 35.06 |
| Multi-Image Reasoning | MMlongBench Doc | 57.5 | 38 | 33.91 |
| Math Reasoning | MathVista_MINI | 82.8 | 75.5 | 8.82 |
| OCR Reasoning | OCR_Reasoning | 54.14 | 33.9 | 33.87 |
| Video Q/A | Video MME | 72.2 | - | - |
| Video + Audio Q/A | World Sense | 55.4 | - | - |
| Video + Audio Q/A | Daily Omni | 74.52 | - | - |
| Speech Instruction Following | Voice interaction | 89.39 | - | - |

Quantization Benchmark Scores:

We release FP8 and NVFP4 quantized variants alongside the BF16 model. The FP8 variant quantizes every linear layer in the language model to per-tensor E4M3 (with the exception of the MoE router and `lm_head`) and pairs it with an FP8 KV cache, yielding 8.5 effective bits per weight (32.8 GB). The NVFP4 variant uses a mixed-precision recipe inspired by Nemotron 3 Super: routed MoE experts are quantized to NVFP4 (FP4 E2M1 values with per-block FP8 E4M3 scales over groups of 16 elements and an additional per-tensor FP32 global scale), while the Mamba `in_proj`/`out_proj`, shared experts, and attention `o_proj` are quantized to FP8, yielding 4.98 effective bits per weight (20.9 GB). In both variants the vision and audio encoders and their MLP projectors are kept in BF16.

The table below reports FP8 & NVFP4 accuracy against a BF16 baseline using non-reasoning mode. Across 9 multimodal benchmarks, both quantized variants stay within 1 point of BF16 on average.

| Footprint | BF16 | FP8 | NVFP4 |
|---|---|---|---|
| Size (GB) | 61.5 | 32.8 | 20.9 |
| Effective bpw | 16.00 | 8.5 | 4.98 |

| Benchmark | BF16 | FP8 | NVFP4 |
|---|---|---|---|
| MathVista_MINI | 71.90 | 71.05 | 71.30 |
| Charxiv Reasoning | 49.10 | 48.05 | 47.95 |
| MMlongBench Doc | 46.10 | 45.84 | 45.78 |
| OCRBenchV2 (EN) | 65.80 | 65.63 | 65.77 |
| CVBench2D | 84.20 | 85.62 | 85.27 |
| Video MME | 70.80 | 69.40 | 69.60 |
| Daily Omni | 74.50 | 74.06 | 74.23 |
| World Sense | 55.20 | 54.40 | 54.60 |
| MMAU | 74.62 | 74.56 | 74.34 |
| Tedium Long (WER↓) | 3.11 | 3.12 | 3.04 |
| HF-ASR (WER↓) | 5.95 | 5.97 | 5.95 |
| Mean (9 non-ASR) | 65.80 | 65.40 | 65.43 |
| Median (9 non-ASR) | 70.80 | 69.40 | 69.60 |
| Δ vs BF16 (mean) | - | −0.40 | −0.38 |

Data Collection Method by dataset:

  • Hybrid: Human, Automated — Evaluation benchmarks are primarily human-curated public academic datasets with automated scoring.

Labeling Method by dataset:

  • Human

**Properties (Quantity, Dataset Descriptions, Sensor(s)):** 14 evaluation benchmarks spanning image understanding (MathVistaMini, Charxiv Reasoning, MMLongBench-Doc, OCR Reasoning, OCRBenchV2 English, CVBench2D, OSWorld), video understanding (Video MME), audio/speech understanding (VoiceBench, Tedium Long, HF-ASR, MMAU, World Sense), and multimodal omni-understanding (Daily Omni). All benchmarks are publicly available academic datasets in English.

Prior to training this model, NVIDIA implemented measures to respect EU text and data mining opt-outs by (1) respecting robots.txt instructions to the extent such signals reflect valid rights reservations, and (2) filtering datasets on any actionable metadata identifiers provided by rightsholders.

Inference:

**Acceleration Engine:** TensorRT-LLM, vLLM, TensorRT Edge-LLM, llama.cpp, Ollama, SGLang

Test Hardware:

  • NVIDIA H100 SXM
  • NVIDIA H200 SXM
  • NVIDIA B200 SXM
  • NVIDIA A100 80GB SXM
  • NVIDIA GB200 NVL72
  • NVIDIA RTX PRO 6000 SE Blackwell
  • NVIDIA L40S PCIe 48GB
  • NVIDIA DGX Spark
  • NVIDIA Jetson Thor
  • NVIDIA RTX 5090

Best Practices

We recommend the following settings to reach optimal performance.

Sampling Parameters

We suggest the following sampling parameters based on the mode and task.

  • Thinking mode for long document analysis and multimodal reasoning tasks: `temperature=0.6`, `top_p=0.95`, `grace_period=1024`, `reasoning_budget=16384`, `max_tokens=20480`, and `max_model_len=210000`
  • Instruct mode (non-thinking) for general tasks: `temperature=0.2`, `top_k=1`
  • For ASR tasks, we recommend non-thinking mode with `temperature=1.0`, `top_k=1` (see the sketch after this list)
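
As a sketch of the ASR recommendation above, assuming the OpenAI-compatible server from the Quick Start is running and using an illustrative local audio path:

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8000/v1", api_key="")
# Hypothetical local file; substitute your own recording.
audio_url = Path("media/meeting_clip.wav").resolve().as_uri()

response = client.chat.completions.create(
    model="nvidia/Nemotron-3-Nano-Omni-30B-A3B-Reasoning-NVFP4",
    messages=[{
        "role": "user",
        "content": [
            {"type": "audio_url", "audio_url": {"url": audio_url}},
            {"type": "text", "text": "Transcribe this audio."},
        ],
    }],
    max_tokens=1024,
    temperature=1.0,  # ASR recommendation: temperature=1.0 with top_k=1, thinking off
    extra_body={"top_k": 1, "chat_template_kwargs": {"enable_thinking": False}},
)
print(response.choices[0].message.content)
```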

Model output length

For most multimodal reasoning tasks, we recommend an output length of at least 20480 tokens. For complex reasoning questions, especially in math and programming, increasing the maximum output length to 210000 tokens can give the model enough room to produce more detailed and correct answers. We also found the Budget-Controlled Reasoning approach described above effective for answering complex reasoning questions.

Ethical Considerations:

NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

Please make sure you have proper rights and permissions for all input image and video content; if image or video includes people, personal health information, or intellectual property, the image or video generated will not blur or maintain proportions of image subjects included.

For more detailed information on ethical considerations for this model, please see the Model Card++ Bias, Explainability, Safety & Security, and Privacy subcards.

Please report model quality, risk, security vulnerabilities or NVIDIA AI concerns here.

Citation:

@misc{nvidia2026nemotron3nanoomni,
      title={Nemotron 3 Nano Omni: Efficient and Open Multimodal Intelligence}, 
      author={NVIDIA},
      year={2026},
      eprint={2604.24954},
      archivePrefix={arXiv},
      primaryClass={cs.LG},
      url={https://arxiv.org/abs/2604.24954}, 
}

