@GitTrend0x: Holy crap guys — local voice cloning + cinematic video dubbing, with support for 646 languages, fully offline, no API keys, no internet needed. ElevenLabs just got wrecked. https://github.com/debpalash/OmniVoice-Studio… This open-source gem…
Summary
OmniVoice Studio is an open-source desktop app that enables local voice cloning and cinematic video dubbing across 646 languages, fully offline with no API keys, positioning itself as a privacy-focused alternative to ElevenLabs.
Cached: 2026/05/14 02:29
Holy crap guys — local voice cloning + cinematic video dubbing, with support for 646 languages, fully offline, no API keys, no internet needed. ElevenLabs just got wrecked. https://github.com/debpalash/OmniVoice-Studio… This open-source gem OmniVoice Studio is ferocious: zero-shot clone any voice from a 3-second clip and replicate it instantly across 646 languages. One-click dubbing for YouTube links or local videos — auto transcribe + translate + re-voice, with silky-smooth MP4 export. Global hotkey for real-time voice input: speak in any app and the text is transcribed and pasted for you. Vocal-track separation + speaker identification, background music stripped automatically — professional-grade processing. Batch queue: drop in 50 videos at once, it runs in the background with full progress visibility. Desktop app for macOS/Windows/Linux — download and run, the 4 GB model pulls automatically, smart GPU/CPU switching, privacy maxed out: your data never leaves your computer! Forward this to your friends still burning money in the cloud — this is the true ceiling of local AI voice!
debpalash/OmniVoice-Studio
Source: https://github.com/debpalash/OmniVoice-Studio
OmniVoice Studio
The open-source ElevenLabs alternative.
Real-time dictation, zero-shot voice cloning, and cinematic video dubbing — all on your desktop.
Open-source, no API keys, fully local. 646 languages.
Quickstart · Features · Why OmniVoice Studio? · TTS Engines · Contributing · Discord
OmniVoice Studio is in active beta. Things may break between releases. For the latest features and fixes, clone the repo and run from source rather than using pre-built installers. Bug reports and PRs are very welcome — open an issue or join Discord.
Features
- 🎙️ Voice Cloning: 3-second clip → mirror any voice.
- 🎨 Voice Design: Gender, age, accent, pitch, speed, style.
- 🎬 Video Dubbing: YouTube URL or file → transcribe → translate → re-voice → export.
- ⌨️ Dictation Widget: Global hotkey; dictate in any app and the text is pasted automatically.
- 🔊 Vocal Isolation: Demucs-powered. Splits speech from background music.
- 👥 Speaker Diarization: Pyannote + WhisperX.
- 📦 Batch Queue: Drop 50 videos, walk away.
- 🤖 MCP Server: Use OmniVoice from Claude, …
- 🛡️ AI Watermark: AudioSeal (Meta). Invisible watermarking for AI provenance.
- 🔐 100% Local: No keys, no cloud, no accounts.
- ⚡ GPU Auto-Detect: CUDA · MPS · ROCm · CPU.
- 🧩 Extensible: Subclass TTSBackend to add your own TTS engine.
Quickstart
Pick your path — from zero-install to full developer setup:
🖥️ Option 1 — Desktop App
Pre-built installers (~6–8 MB) are on the Releases page. Download, install, launch. The app bootstraps a Python environment and downloads model weights automatically — the splash screen shows progress.
macOS — "app is damaged and can't be opened"
macOS quarantines apps downloaded outside the App Store. After dragging to /Applications:
```shell
xattr -cr /Applications/OmniVoice\ Studio.app
```
Open normally after. One-time fix.
Windows — first launch takes 5–10 minutes
The app bootstraps a Python virtual environment, installs dependencies, and downloads ffmpeg on first run. The splash screen shows each step. Subsequent launches start in seconds.
Linux — AppImage needs FUSE
If FUSE isn’t available, use the .deb package or extract-and-run:
```shell
chmod +x OmniVoice.Studio_*.AppImage
./OmniVoice.Studio_*.AppImage --appimage-extract-and-run
```
🐳 Option 2 — Docker
Pull the pre-built image from GitHub Container Registry:
```shell
docker pull ghcr.io/debpalash/omnivoice-studio:latest
```
Run it:
```shell
# CPU mode
docker run -d --name omnivoice \
  -p 127.0.0.1:3900:3900 \
  -v omnivoice-data:/app/omnivoice_data \
  ghcr.io/debpalash/omnivoice-studio:latest

# NVIDIA GPU mode
docker run -d --name omnivoice --gpus all \
  -p 127.0.0.1:3900:3900 \
  -v omnivoice-data:/app/omnivoice_data \
  ghcr.io/debpalash/omnivoice-studio:latest
```
Or use Docker Compose:
```shell
# CPU
docker compose -f deploy/docker-compose.yml up -d

# GPU
docker compose -f deploy/docker-compose.yml --profile gpu up -d
```
Open localhost:3900 once the health check passes. The first run downloads ~4 GB of model weights; follow progress with `docker compose logs -f`.
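The "wait for the health check" step can be scripted; here is a minimal stdlib-only sketch. The `/health` path is an assumption — check the interactive API docs at localhost:3900/docs for the actual endpoint.

```python
import time
import urllib.error
import urllib.request


def wait_for_health(url: str, timeout: float = 300.0, interval: float = 2.0) -> bool:
    """Poll `url` until it answers HTTP 200 or `timeout` seconds elapse.

    NOTE: the "/health" path used by callers is hypothetical; consult the
    Scalar API reference served by the backend for the real route.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                if resp.status == 200:
                    return True
        except (urllib.error.URLError, OSError):
            pass  # container still starting, or still pulling model weights
        time.sleep(interval)
    return False
```

For example, `wait_for_health("http://127.0.0.1:3900/health")` returns `True` once the container is serving, and `False` if it never comes up within five minutes.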
Build from source instead of pulling
```shell
docker compose -f deploy/docker-compose.yml up --build -d
```
Network access: the container binds to 127.0.0.1 only. To expose it on your LAN, change the port mapping to `"0.0.0.0:3900:3900"`. OmniVoice ships no authentication, so put it behind a reverse proxy with auth (Caddy `basic_auth`, nginx + htpasswd, Tailscale, etc.).
⚡ Option 3 — From Source
```shell
git clone https://github.com/debpalash/OmniVoice-Studio.git && cd OmniVoice-Studio
bun install && bun run dev
```
Open localhost:3901 and start cloning voices. Hot-reload enabled for both frontend and backend.
```shell
bun run desktop   # Build the native desktop app from source
```
| Service | URL | Stack |
|---|---|---|
| Backend | localhost:3900 | FastAPI · 97 endpoints · WhisperX · Demucs · OmniVoice |
| Frontend | localhost:3901 | React · Vite · Waveform timeline · Glassmorphism UI |
| API Docs | localhost:3900/docs | Scalar — interactive API reference |
First run downloads model weights (~2.4 GB). No account needed. For faster downloads, optionally set `HF_TOKEN=hf_...` in your environment (get a free token here). Having issues? Join our Discord for setup help and troubleshooting.
Screenshots
- Voice Clone: Drop a 3-second clip → mirror any voice. 646 languages, zero-shot.
- Voice Design: Build new voices from scratch — gender, age, accent, pitch, style.
- Video Dubbing: Upload or paste a YouTube URL. Transcribe, translate, re-voice, export.
- Voice Gallery: Search YouTube, browse categories, download clips, build your library.
- Settings → Models: 15 models. One-click install. Auto-detects your platform (CUDA / MPS / CPU).
- Projects: Dub projects, voice profiles, generation history, exports — all searchable.
- Settings → Logs: Live backend, frontend, and Tauri runtime logs. Filter, refresh, clear.
Why OmniVoice Studio?
ElevenLabs charges $5–$330/mo and processes your audio on its servers. OmniVoice Studio runs on your hardware, with no usage limits.
| | ElevenLabs | OmniVoice Studio |
|---|---|---|
| Pricing | $5–$330/mo, per-character billing | Free for personal use · Commercial license for business |
| Voice Cloning | ✅ 3s clip | ✅ 3s clip, zero-shot |
| Voice Design | ✅ Gender, age | ✅ Gender, age, accent, pitch, style, dialect |
| Languages | 32 | 646 |
| Video Dubbing | ✅ Cloud-only | ✅ Fully local |
| Data Privacy | Audio sent to cloud | Nothing leaves your machine |
| API Keys | Required | Not needed |
| GPU Support | N/A (cloud) | CUDA · Apple Silicon · ROCm · CPU |
| Desktop App | ❌ | ✅ macOS · Windows · Linux |
| Customizable | ❌ Closed | ✅ Fork it, extend it, ship it |
OmniVoice Studio gives you professional-grade AI tools without the subscription or the cloud.
System Requirements
| | Minimum | Recommended |
|---|---|---|
| OS | Windows 10, macOS 12+, Ubuntu 20.04+ | Any modern 64-bit OS |
| RAM | 8 GB | 16 GB+ |
| VRAM (GPU) | 4 GB (auto-offloads TTS to CPU) | 8 GB+ (NVIDIA RTX 3060+) |
| Disk | 10 GB free (models + cache) | 20 GB+ SSD |
| Python | 3.10+ (managed by uv) | 3.11–3.12 |
| GPU | Optional — CPU works | NVIDIA CUDA · Apple Silicon MPS · AMD ROCm |
On GPUs with ≤8 GB VRAM, OmniVoice automatically offloads TTS to CPU during transcription — no config needed. A dedicated GPU is not required; the entire pipeline runs on CPU (just slower).
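The offload rule above boils down to a simple placement decision. The sketch below is a hedged illustration of that rule as stated in this README — the function name, return shape, and exact thresholds are not taken from the repository:

```python
from typing import Optional


def plan_placement(vram_gb: Optional[float]) -> dict:
    """Decide where ASR and TTS run, following the documented rule:
    no GPU -> everything on CPU; <=8 GB VRAM -> TTS is offloaded to CPU
    while transcription runs; more than 8 GB -> both stay on GPU.
    """
    if vram_gb is None:
        # No dedicated GPU detected: the whole pipeline runs on CPU, just slower.
        return {"asr": "cpu", "tts": "cpu"}
    if vram_gb <= 8:
        # ASR fits on the GPU, but not ASR + TTS at the same time.
        return {"asr": "gpu", "tts": "cpu_during_transcription"}
    return {"asr": "gpu", "tts": "gpu"}
```

With 4 GB of VRAM this yields the documented behavior — transcription on GPU, TTS temporarily on CPU — with no configuration needed from the user.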
TTS Engines
OmniVoice ships a multi-engine TTS backend. The default engine (OmniVoice) is always available; additional engines are opt-in and auto-detected. Switch engines in Settings → TTS Engine or via the `OMNIVOICE_TTS_BACKEND` env var.
| Engine | Languages | Clone | Instruct | Linux | macOS ARM | Windows | License |
|---|---|---|---|---|---|---|---|
| OmniVoice (default) | 600+ | ✅ | ✅ | ✅ CUDA/CPU | ✅ MPS | ✅ CUDA/CPU | Built-in |
| CosyVoice 3 | 9 + 18 dialects | ✅ | ✅ | ✅ CUDA/CPU | ✅ MPS | ✅ CUDA/CPU | Apache-2.0 |
| MLX-Audio (Kokoro, Qwen3-TTS, CSM, Dia, …) | Multi | Varies | Varies | ❌ | ✅ Native | ❌ | Varies |
| VoxCPM2 | 30 | ✅ | ✅ | ✅ CUDA/CPU | ✅ MPS | ✅ CUDA/CPU | Apache-2.0 |
| MOSS-TTS-Nano | 20 | ✅ | ❌ | ✅ CUDA/CPU | ✅ CPU | ✅ CUDA/CPU | Apache-2.0 |
| KittenTTS | English | ❌ | ❌ | ✅ CPU | ✅ CPU | ✅ CPU | MIT |
CUDA = GPU-accelerated · MPS = Apple Silicon Metal · CPU = runs everywhere, slower for large models · KittenTTS and MOSS-TTS-Nano run realtime on CPU · MLX-Audio is Apple Silicon only.
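The selection rule — env var wins if it names an installed engine, otherwise fall back to the always-available default — can be sketched as follows. This is an assumption-laden illustration; `select_backend` is not the project's actual function:

```python
import os
from typing import Optional

DEFAULT_BACKEND = "omnivoice"  # per the README, always available


def select_backend(available: set, env: Optional[dict] = None) -> str:
    """Honor OMNIVOICE_TTS_BACKEND when it names an installed engine;
    otherwise fall back to the default engine."""
    if env is None:
        env = dict(os.environ)
    requested = env.get("OMNIVOICE_TTS_BACKEND", "").lower()
    if requested and requested in available:
        return requested
    return DEFAULT_BACKEND
```

For instance, with `OMNIVOICE_TTS_BACKEND=kittentts` set and KittenTTS installed, the English-only CPU engine is used; if the requested engine is missing, the app would fall back to the default rather than fail.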
Architecture
```
┌─────────────────────────────────────────────────┐
│                Frontend (React)                 │
│  DubTab · VoicePreview · BatchQueue · Gallery   │
├─────────────────────────────────────────────────┤
│                Backend (FastAPI)                │
│    97 API endpoints · SSE streaming · SQLite    │
├──────────┬──────────┬──────────┬────────────────┤
│ WhisperX │  Demucs  │OmniVoice │    Pyannote    │
│   ASR    │  Source  │   TTS    │  Diarization   │
│          │   Sep.   │          │                │
└──────────┴──────────┴──────────┴────────────────┘
       CUDA / MPS / ROCm / CPU (auto-detected)
```
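The dubbing flow through these services is sequential: each stage consumes the previous stage's output. A toy sketch of that composition — every stage function here is a stand-in for illustration, not the project's real API:

```python
from typing import Callable, List


def run_pipeline(artifact: str, stages: List[Callable[[str], str]]) -> str:
    """Thread an artifact through the stages in order, as the documented
    dub pipeline does: transcribe -> translate -> synthesize -> mux."""
    for stage in stages:
        artifact = stage(artifact)
    return artifact


# Hypothetical stand-ins for the real services named in the diagram.
stages = [
    lambda a: a + " | transcript",   # WhisperX ASR
    lambda a: a + " | translated",   # translation
    lambda a: a + " | tts-audio",    # OmniVoice TTS
    lambda a: a + " | mp4",          # mux back into the video
]
```

Structuring the pipeline as a list of stage callables is also what makes the batch queue straightforward: each queued video is just another input threaded through the same stages.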
Roadmap
✅ Shipped
| Category | Features |
|---|---|
| Dubbing | Full pipeline (transcribe→translate→synthesize→mux), scene-aware splitting, lip-sync scoring, streaming TTS |
| Voice | Zero-shot cloning, voice design, A/B comparison, voice preview widget, gallery with favorites/tags |
| Audio | Demucs vocal isolation, per-segment gain, selective track export, stem/SRT/VTT/MP3 export |
| Multi-Lang | Multi-language batch picker, batch dubbing queue with sequential GPU execution |
| Diarization | Pyannote ML diarization, auto speaker clone extraction, per-speaker voice assignment |
| Infra | Docker deployment, CUDA/MPS/ROCm auto-detect, cuDNN 8 compat, VRAM-aware model offloading |
| AI Provenance | AudioSeal invisible watermarking (SynthID-like), video logo overlay, watermark detection API |
| UX | Undo/redo, keyboard shortcuts, drag-and-drop, session persistence, glassmorphism design system |
| Real-time Events | WebSocket event bus — instant sidebar refresh on data mutations, exponential backoff reconnect |
| State Management | Zustand store migration — uiSlice, pillSlice, dubSlice, generateSlice, prefsSlice, glossarySlice |
| Desktop | Cross-platform Tauri installers (macOS DMG, Windows MSI, Linux deb/AppImage), auto-update infrastructure |
| Windows Hardening | Cross-platform log paths, Triton workaround, HF symlink bypass, 300s health check timeout |
| Dictation | Global system-wide hotkey (⌘+⇧+Space), frameless floating widget, streaming ASR via WebSocket, auto-paste |
| Batch Pipeline | Full batch TTS: extract → transcribe → translate → generate → mix → export, with live progress tracking |
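The "exponential backoff reconnect" item in the Real-time Events row follows the usual capped-doubling schedule. A minimal sketch — the base, cap, and attempt count are illustrative, not the app's actual values:

```python
from typing import List


def backoff_delays(base: float = 0.5, cap: float = 30.0, attempts: int = 8) -> List[float]:
    """Delay (seconds) before each WebSocket reconnect attempt:
    base * 2^n, clamped to `cap` so retries never grow unbounded."""
    return [min(base * (2 ** n), cap) for n in range(attempts)]
```

So the client would retry after 0.5 s, 1 s, 2 s, 4 s, … and then settle at the 30-second cap; real implementations often add random jitter on top to avoid thundering-herd reconnects.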
🔜 Up Next
- 🎬 Lip-sync v2 — visual speech timing with wav2lip
- 📖 Audiobook Editor — chapter-aware long-form narration
- 🌐 Hosted Demo — try OmniVoice without installing anything
- 🔌 Plugin Marketplace — community-contributed TTS engines and effects
Contributing
We welcome contributions of all kinds — bug fixes, new TTS engine adapters, UI improvements, docs, and translations.
- 📖 Read the Contributing Guide for setup, code style, and PR workflow
- 🐛 Browse good first issues
- 💬 Join our Discord to discuss ideas or ask for help
FAQ
Is this really as good as ElevenLabs?
For voice cloning and dubbing, yes — OmniVoice uses a state-of-the-art diffusion TTS model with 646 languages (ElevenLabs supports 32). Quality is comparable for most use cases. Where ElevenLabs wins is in their polished cloud API and pre-made voice library. OmniVoice wins on privacy, cost, language coverage, and customizability.
Does it work on Apple Silicon (M1/M2/M3/M4)?
Yes. MPS acceleration is auto-detected. MLX-optimized Whisper models are available for faster transcription on Apple hardware.
How much VRAM do I need?
4 GB minimum. With ≤8 GB, the TTS model is automatically offloaded to CPU during transcription. With 8+ GB, everything runs on GPU simultaneously. No GPU at all? CPU mode works — just slower (~3× for TTS).
Can I use this commercially?
Personal, educational, internal-team, and non-commercial use is free under FSL-1.1-ALv2. Building a competing product or service on top of OmniVoice Studio requires a commercial license — see License. Pricing tiers coming soon. Each release converts to Apache 2.0 two years after publication.
What languages are supported?
646 languages for TTS via the OmniVoice model. Transcription (WhisperX) supports 99 languages. Translation coverage depends on the target language pair.
Can I add my own TTS engine?
Yes. OmniVoice uses a built-in backend registry. To add an engine in ~50 lines, subclass `TTSBackend` in `backend/services/tts_backend.py` and add it to the `_REGISTRY` dictionary at the bottom. Six engines are built in: OmniVoice, CosyVoice, MLX-Audio (14+ sub-engines), VoxCPM2, MOSS-TTS-Nano, and KittenTTS. See the TTS Engines section for details.
License
OmniVoice Studio is source-available under the Functional Source License (FSL-1.1-ALv2).
Free for personal, educational, research, internal team, and non-commercial use. Each release converts to Apache 2.0 automatically two years after publication.
Business / enterprise users building a competing product or service on top of OmniVoice Studio need a commercial license. Pricing tiers coming soon. For inquiries in the meantime, reach out at [email protected].
See LICENSE for the full terms.
Acknowledgments
OmniVoice Studio is built on the shoulders of exceptional open-source work:
| Project | Role |
|---|---|
| OmniVoice (k2-fsa) | Zero-shot diffusion TTS engine — the core voice synthesis model |
| WhisperX | Word-level speech recognition and alignment |
| Demucs (Meta) | Music source separation for vocal isolation |
| Pyannote | Speaker diarization — who said what |
| CTranslate2 | Optimized Transformer inference on CPU and GPU |
| AudioSeal (Meta) | Invisible neural audio watermarking for AI provenance |
| Tauri | Native desktop app framework |