@GitTrend0x: Holy crap, guys — local voice cloning + cinematic video dubbing, with support for 646 languages, fully offline, no API keys, no internet needed. ElevenLabs just got demolished. https://github.com/debpalash/OmniVoice-Studio… This open-source gem…

X AI KOLs Timeline · Product

Summary

OmniVoice Studio is an open-source desktop app that enables local voice cloning and cinematic video dubbing across 646 languages, fully offline with no API keys, positioning itself as a privacy-focused alternative to ElevenLabs.

Holy crap, guys — local voice cloning + cinematic video dubbing, with support for 646 languages, fully offline, no API keys, no internet connection needed. ElevenLabs just got demolished. https://github.com/debpalash/OmniVoice-Studio… This open-source gem, OmniVoice Studio, is seriously impressive:

- Zero-shot clone any voice from a 3-second clip, instantly replicated across 646 languages
- One-click dubbing for YouTube links or local videos: auto-transcribe + translate + re-voice, with silky-smooth MP4 export
- Global hotkey for real-time voice input — speak in any app and have it transcribed and pasted as text
- Vocal track separation + speaker identification, background music stripped automatically — professional-grade processing
- Batch queue: drop in 50 videos at once, they run in the background with full progress visibility
- Desktop app for macOS/Windows/Linux, ready to use on download; 4 GB of models pulled automatically, smart GPU/CPU switching, privacy maxed out — your data never leaves your computer!

Forward this to your friends still burning money in the cloud — this is the true ceiling of local AI voice!

Cached: 2026/05/14 02:29



debpalash/OmniVoice-Studio

Source: https://github.com/debpalash/OmniVoice-Studio

OmniVoice Logo

OmniVoice Studio

The open-source ElevenLabs alternative.

Real-time dictation, zero-shot voice cloning, and cinematic video dubbing — all on your desktop.
Open-source, no API keys, fully local. 646 languages.

Stars Release License Issues Discord

Quickstart · Features · Why OmniVoice Studio? · TTS Engines · Contributing · Discord

Download macOS DMG Download Windows MSI Download Linux AppImage Download Debian .deb


OmniVoice Studio — The open-source ElevenLabs alternative

OmniVoice Studio is in active beta. Things may break between releases. For the latest features and fixes, clone the repo and run from source rather than using pre-built installers. Bug reports and PRs are very welcome — open an issue or join Discord.


Features

🎙️ Voice Cloning

3-second clip → mirror any voice.
646 languages, zero-shot.

🎨 Voice Design

Gender, age, accent, pitch, speed,
emotion, dialect — dial it in.

🎬 Video Dubbing

YouTube URL or file → transcribe →
translate → re-voice → MP4.

⌨️ Dictation Widget

⌘+⇧+Space from any app.
Transcribes, auto-pastes, disappears.

🔊 Vocal Isolation

Demucs-powered. Splits speech
from music, keeps the background.

👥 Speaker Diarization

Pyannote + WhisperX.
Auto-identifies who said what.

📦 Batch Queue

Drop 50 videos, walk away.
Progress bars per job.

🤖 MCP Server

Use OmniVoice from Claude,
Cursor, or any MCP client.

🛡️ AI Watermark

AudioSeal (Meta). Invisible,
survives compression.

🔐 100% Local

No keys, no cloud, no accounts.
Your machine only.

⚡ GPU Auto-Detect

CUDA · MPS · ROCm · CPU.
≤8 GB? Auto-offloads.

🧩 Extensible

Subclass TTSBackend,
add any engine in ~50 lines.
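The video-dubbing flow listed above (transcribe → translate → re-voice → export) can be sketched as a chain of stages. Everything here — the `Segment` shape and the stage callables — is illustrative, not OmniVoice's actual API:

```python
from dataclasses import dataclass

@dataclass
class Segment:
    start: float   # seconds
    end: float
    text: str

def dub(audio_segments, transcribe, translate, synthesize, mux):
    """Run each segment through the dubbing stages, then mux the clips.

    transcribe: audio segment -> Segment with source-language text (ASR)
    translate:  source text -> target-language text
    synthesize: Segment -> audio clip in the cloned voice (TTS)
    mux:        list of clips -> final output track
    """
    clips = []
    for seg in audio_segments:
        src = transcribe(seg)
        tgt = Segment(src.start, src.end, translate(src.text))
        clips.append(synthesize(tgt))
    return mux(clips)
```

The point of the shape is that each stage is swappable — the same skeleton works whether ASR is WhisperX, translation is local, or the TTS engine is any of the backends listed later.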


Quickstart

Pick your path — from zero-install to full developer setup:

🖥️ Desktop App

Easiest · ~2 min · No dependencies

Download

macOS DMG · Windows MSI · Linux AppImage/deb
Auto-bootstraps Python + models on first launch.

🐳 Docker

One command · ~3 min · Needs Docker

docker pull ghcr.io/debpalash/omnivoice-studio

Pre-built image from GHCR.
CPU + NVIDIA GPU supported.

⚡ From Source

Full control · ~5 min · Needs Bun + Python

git clone → bun install → bun run dev

Hot reload, full codebase access.
Best for contributors.

🖥️ Option 1 — Desktop App

Pre-built installers (~6–8 MB) are on the Releases page. Download, install, launch. The app bootstraps a Python environment and downloads model weights automatically — the splash screen shows progress.

macOS — "app is damaged and can't be opened"

macOS quarantines apps downloaded outside the App Store. After dragging to /Applications:

```bash
xattr -cr /Applications/OmniVoice\ Studio.app
```

Open normally after. One-time fix.

Windows — first launch takes 5–10 minutes

The app bootstraps a Python virtual environment, installs dependencies, and downloads ffmpeg on first run. The splash screen shows each step. Subsequent launches start in seconds.

Linux — AppImage needs FUSE

If FUSE isn’t available, use the .deb package or extract-and-run:

```bash
chmod +x OmniVoice.Studio_*.AppImage
./OmniVoice.Studio_*.AppImage --appimage-extract-and-run
```

🐳 Option 2 — Docker

Pull the pre-built image from GitHub Container Registry:

```bash
docker pull ghcr.io/debpalash/omnivoice-studio:latest
```

Run it:

```bash
# CPU mode
docker run -d --name omnivoice \
  -p 127.0.0.1:3900:3900 \
  -v omnivoice-data:/app/omnivoice_data \
  ghcr.io/debpalash/omnivoice-studio:latest

# NVIDIA GPU mode
docker run -d --name omnivoice --gpus all \
  -p 127.0.0.1:3900:3900 \
  -v omnivoice-data:/app/omnivoice_data \
  ghcr.io/debpalash/omnivoice-studio:latest
```

Or use Docker Compose:

```bash
# CPU
docker compose -f deploy/docker-compose.yml up -d

# GPU
docker compose -f deploy/docker-compose.yml --profile gpu up -d
```

Open localhost:3900 once the health check passes. The first run downloads ~4 GB of model weights — follow progress with `docker compose logs -f`.

Build from source instead of pulling:

```bash
docker compose -f deploy/docker-compose.yml up --build -d
```

Network access: the container binds to 127.0.0.1 only. To expose on your LAN, change the port mapping to "0.0.0.0:3900:3900". OmniVoice ships no authentication — put it behind a reverse proxy with auth (Caddy basic_auth, nginx + htpasswd, Tailscale, etc.).
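As an example of the reverse-proxy suggestion above, a minimal Caddyfile with basic auth might look like the following. The domain is a placeholder, and the password hash must be generated yourself with `caddy hash-password` — the value below is not a real hash:

```caddyfile
voice.example.com {
    basic_auth {
        # replace with the output of: caddy hash-password
        alice $2a$14$REPLACE-WITH-REAL-BCRYPT-HASH
    }
    reverse_proxy 127.0.0.1:3900
}
```

With this in front, the container can stay bound to 127.0.0.1 and only the proxy is exposed to the network.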


⚡ Option 3 — From Source

```bash
git clone https://github.com/debpalash/OmniVoice-Studio.git && cd OmniVoice-Studio
bun install && bun run dev
```

Open localhost:3901 and start cloning voices. Hot-reload enabled for both frontend and backend.

```bash
bun run desktop    # Build the native desktop app from source
```

| Service | URL | Stack |
|---|---|---|
| Backend | localhost:3900 | FastAPI · 97 endpoints · WhisperX · Demucs · OmniVoice |
| Frontend | localhost:3901 | React · Vite · Waveform timeline · Glassmorphism UI |
| API Docs | localhost:3900/docs | Scalar — interactive API reference |

First run downloads model weights (~2.4 GB). No account needed. For faster downloads, optionally set HF_TOKEN=hf_... in your environment (get a free token here).

Having issues? Join our Discord for setup help and troubleshooting.


Screenshots

Voice Clone
Voice Clone
Drop a 3-second clip → mirror any voice. 646 languages, zero-shot.
Voice Design
Voice Design
Build new voices from scratch — gender, age, accent, pitch, style.
Video Dubbing
Video Dubbing
Upload or paste a YouTube URL. Transcribe, translate, re-voice, export.
Voice Gallery
Voice Gallery
Search YouTube, browse categories, download clips, build your library.
Settings — Models
Settings → Models
15 models. One-click install. Auto-detects your platform (CUDA / MPS / CPU).
Projects
Projects
Dub projects, voice profiles, generation history, exports — all searchable.
Settings — Logs
Settings → Logs
Live backend, frontend, and Tauri runtime logs. Filter, refresh, clear.

Why OmniVoice Studio?

ElevenLabs charges $5–$330/mo and processes your audio on their servers. OmniVoice Studio runs on your hardware, with no usage limits.

| | ElevenLabs | OmniVoice Studio |
|---|---|---|
| Pricing | $5–$330/mo, per-character billing | Free for personal use · Commercial license for business |
| Voice Cloning | ✅ 3s clip | ✅ 3s clip, zero-shot |
| Voice Design | ✅ Gender, age | ✅ Gender, age, accent, pitch, style, dialect |
| Languages | 32 | 646 |
| Video Dubbing | ✅ Cloud-only | ✅ Fully local |
| Data Privacy | Audio sent to cloud | Nothing leaves your machine |
| API Keys | Required | Not needed |
| GPU Support | N/A (cloud) | CUDA · Apple Silicon · ROCm · CPU |
| Desktop App | ❌ | ✅ macOS · Windows · Linux |
| Customizable | ❌ Closed | ✅ Fork it, extend it, ship it |

OmniVoice Studio gives you professional-grade AI tools without the subscription or the cloud.


System Requirements

| | Minimum | Recommended |
|---|---|---|
| OS | Windows 10, macOS 12+, Ubuntu 20.04+ | Any modern 64-bit OS |
| RAM | 8 GB | 16 GB+ |
| VRAM (GPU) | 4 GB (auto-offloads TTS to CPU) | 8 GB+ (NVIDIA RTX 3060+) |
| Disk | 10 GB free (models + cache) | 20 GB+ SSD |
| Python | 3.10+ (managed by uv) | 3.11–3.12 |
| GPU | Optional — CPU works | NVIDIA CUDA · Apple Silicon MPS · AMD ROCm |

On GPUs with ≤8 GB VRAM, OmniVoice automatically offloads TTS to CPU during transcription — no config needed. A dedicated GPU is not required; the entire pipeline runs on CPU (just slower).
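The offload rule just described can be expressed as a small decision function. This is a sketch of the behavior, not the project's actual logic; the 8 GB threshold comes from the text above, the function name and signature are invented for illustration:

```python
def pick_tts_device(accelerator, vram_gb, threshold_gb=8.0):
    """Decide where the TTS model lives while ASR holds the accelerator.

    accelerator: detected backend ("cuda", "mps", "rocm") or None for CPU-only.
    vram_gb: total VRAM of the device in GB, or None if unknown.
    """
    if accelerator is None:
        return "cpu"        # no GPU at all: the whole pipeline runs on CPU
    if accelerator == "cuda" and vram_gb is not None and vram_gb <= threshold_gb:
        return "cpu"        # <=8 GB VRAM: offload TTS, keep ASR on the GPU
    return accelerator      # enough headroom: everything stays on the GPU
```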

TTS Engines

OmniVoice ships a multi-engine TTS backend. The default engine (OmniVoice) is always available; additional engines are opt-in and auto-detected. Switch engines in Settings → TTS Engine or via the OMNIVOICE_TTS_BACKEND env var.

| Engine | Languages | Linux | macOS ARM | Windows | License |
|---|---|---|---|---|---|
| OmniVoice (default) | 600+ | ✅ CUDA/CPU | ✅ MPS | ✅ CUDA/CPU | Built-in |
| CosyVoice 3 | 9 + 18 dialects | ✅ CUDA/CPU | ✅ MPS | ✅ CUDA/CPU | Apache-2.0 |
| MLX-Audio (Kokoro, Qwen3-TTS, CSM, Dia, …) | Multi | ❌ | ✅ Native | ❌ | Varies |
| VoxCPM2 | 30 | ✅ CUDA/CPU | ✅ MPS | ✅ CUDA/CPU | Apache-2.0 |
| MOSS-TTS-Nano | 20 | ✅ CUDA/CPU | ✅ CPU | ✅ CUDA/CPU | Apache-2.0 |
| KittenTTS | English | ✅ CPU | ✅ CPU | ✅ CPU | MIT |

CUDA = GPU-accelerated · MPS = Apple Silicon Metal · CPU = runs everywhere, slower for large models · KittenTTS and MOSS-TTS-Nano run realtime on CPU · MLX-Audio is Apple Silicon only.


Architecture

┌─────────────────────────────────────────────────┐
│                  Frontend (React)                │
│  DubTab · VoicePreview · BatchQueue · Gallery    │
├─────────────────────────────────────────────────┤
│                Backend (FastAPI)                  │
│  97 API endpoints · SSE streaming · SQLite       │
├──────────┬──────────┬──────────┬────────────────┤
│ WhisperX │  Demucs  │OmniVoice │   Pyannote     │
│   ASR    │  Source  │   TTS    │  Diarization   │
│          │  Sep.    │          │                │
└──────────┴──────────┴──────────┴────────────────┘
        CUDA / MPS / ROCm / CPU (auto-detected)
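To illustrate what the Pyannote + WhisperX pairing in the diagram produces, here is the generic merge step: label each ASR segment with the diarization turn it overlaps most. This is a standard technique sketched from scratch, not OmniVoice's implementation:

```python
def assign_speakers(asr_segments, turns):
    """Attach a speaker label to each ASR segment by maximal time overlap.

    asr_segments: list of (start, end, text) tuples from ASR.
    turns: list of (speaker, start, end) tuples from diarization.
    """
    labeled = []
    for start, end, text in asr_segments:
        best, best_overlap = "UNKNOWN", 0.0
        for speaker, t0, t1 in turns:
            overlap = max(0.0, min(end, t1) - max(start, t0))
            if overlap > best_overlap:
                best, best_overlap = speaker, overlap
        labeled.append((best, text))
    return labeled
```

Per-speaker voice assignment for dubbing then becomes a dictionary lookup from these labels to cloned voices.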

Roadmap

✅ Shipped

| Category | Features |
|---|---|
| Dubbing | Full pipeline (transcribe → translate → synthesize → mux), scene-aware splitting, lip-sync scoring, streaming TTS |
| Voice | Zero-shot cloning, voice design, A/B comparison, voice preview widget, gallery with favorites/tags |
| Audio | Demucs vocal isolation, per-segment gain, selective track export, stem/SRT/VTT/MP3 export |
| Multi-Lang | Multi-language batch picker, batch dubbing queue with sequential GPU execution |
| Diarization | Pyannote ML diarization, auto speaker clone extraction, per-speaker voice assignment |
| Infra | Docker deployment, CUDA/MPS/ROCm auto-detect, cuDNN 8 compat, VRAM-aware model offloading |
| AI Provenance | AudioSeal invisible watermarking (SynthID-like), video logo overlay, watermark detection API |
| UX | Undo/redo, keyboard shortcuts, drag-and-drop, session persistence, glassmorphism design system |
| Real-time Events | WebSocket event bus — instant sidebar refresh on data mutations, exponential backoff reconnect |
| State Management | Zustand store migration — uiSlice, pillSlice, dubSlice, generateSlice, prefsSlice, glossarySlice |
| Desktop | Cross-platform Tauri installers (macOS DMG, Windows MSI, Linux deb/AppImage), auto-update infrastructure |
| Windows Hardening | Cross-platform log paths, Triton workaround, HF symlink bypass, 300s health check timeout |
| Dictation | Global system-wide hotkey (⌘+⇧+Space), frameless floating widget, streaming ASR via WebSocket, auto-paste |
| Batch Pipeline | Full batch TTS: extract → transcribe → translate → generate → mix → export, with live progress tracking |
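The batch pipeline (extract → transcribe → translate → generate → mix → export, with live progress) can be modeled as a sequential queue. The stage names match the pipeline; the job shape and runner are illustrative, not the app's real queue:

```python
from dataclasses import dataclass, field

STAGES = ["extract", "transcribe", "translate", "generate", "mix", "export"]

@dataclass
class Job:
    name: str
    done: list = field(default_factory=list)

    @property
    def progress(self):
        """Fraction of stages completed, for a per-job progress bar."""
        return len(self.done) / len(STAGES)

def run_queue(jobs, run_stage):
    """Run jobs one at a time so a single job owns the GPU at any moment."""
    for job in jobs:
        for stage in STAGES:
            run_stage(job, stage)   # do the actual work for this stage
            job.done.append(stage)  # progress becomes visible immediately
    return jobs
```

Sequential execution is the design choice the roadmap calls out ("sequential GPU execution"): it trades throughput for predictable VRAM usage.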

🔜 Up Next

  • 🎬 Lip-sync v2 — visual speech timing with wav2lip
  • 📖 Audiobook Editor — chapter-aware long-form narration
  • 🌐 Hosted Demo — try OmniVoice without installing anything
  • 🔌 Plugin Marketplace — community-contributed TTS engines and effects

Contributing

We welcome contributions of all kinds — bug fixes, new TTS engine adapters, UI improvements, docs, and translations.


FAQ

Is this really as good as ElevenLabs?
For voice cloning and dubbing, yes — OmniVoice uses a state-of-the-art diffusion TTS model with 646 languages (ElevenLabs supports 32). Quality is comparable for most use cases. Where ElevenLabs wins is in their polished cloud API and pre-made voice library. OmniVoice wins on privacy, cost, language coverage, and customizability.
Does it work on Apple Silicon (M1/M2/M3/M4)?
Yes. MPS acceleration is auto-detected. MLX-optimized Whisper models are available for faster transcription on Apple hardware.
How much VRAM do I need?
4 GB minimum. With ≤8 GB, the TTS model is automatically offloaded to CPU during transcription. With 8+ GB, everything runs on GPU simultaneously. No GPU at all? CPU mode works — just slower (~3× for TTS).
Can I use this commercially?
Personal, educational, internal-team, and non-commercial use is free under FSL-1.1-ALv2. Building a competing product or service on top of OmniVoice Studio requires a commercial license — see License. Pricing tiers coming soon. Each release converts to Apache 2.0 two years after publication.
What languages are supported?
646 languages for TTS via the OmniVoice model. Transcription (WhisperX) supports 99 languages. Translation coverage depends on the target language pair.
Can I add my own TTS engine?
Yes. OmniVoice uses a built-in backend registry. To add an engine in ~50 lines, subclass TTSBackend in backend/services/tts_backend.py and add it to the _REGISTRY dictionary at the bottom. Six engines are built in: OmniVoice, CosyVoice, MLX-Audio (14+ sub-engines), VoxCPM2, MOSS-TTS-Nano, and KittenTTS. See the TTS Engines section for details.
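A minimal sketch of what such a subclass might look like. The base-class interface, method name, and registry shape here are assumptions modeled on the description above — consult backend/services/tts_backend.py for the real contract:

```python
from abc import ABC, abstractmethod

class TTSBackend(ABC):
    """Stand-in for the project's backend base class (interface assumed)."""
    name: str = "base"

    @abstractmethod
    def synthesize(self, text: str, voice: str, language: str = "en") -> bytes:
        """Return raw audio bytes for `text` in the given voice/language."""

class EchoTTSBackend(TTSBackend):
    """Toy engine: 'synthesizes' by returning the UTF-8 bytes of the text."""
    name = "echo"

    def synthesize(self, text, voice, language="en"):
        return f"[{voice}/{language}] {text}".encode("utf-8")

# The README says engines live in a _REGISTRY dict at the bottom of
# tts_backend.py; a minimal equivalent:
_REGISTRY = {"echo": EchoTTSBackend}

def get_backend(name: str) -> TTSBackend:
    return _REGISTRY[name]()
```

A real adapter would load model weights in its constructor and return synthesized PCM/WAV bytes, but the registration pattern is the same.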

License

OmniVoice Studio is source-available under the Functional Source License (FSL-1.1-ALv2).

Free for personal, educational, research, internal team, and non-commercial use. Each release converts to Apache 2.0 automatically two years after publication.

Business / enterprise users building a competing product or service on top of OmniVoice Studio need a commercial license. Pricing tiers coming soon. For inquiries in the meantime, reach out at [email protected].

See LICENSE for the full terms.


Acknowledgments

OmniVoice Studio is built on the shoulders of exceptional open-source work:

| Project | Role |
|---|---|
| OmniVoice (k2-fsa) | Zero-shot diffusion TTS engine — the core voice synthesis model |
| WhisperX | Word-level speech recognition and alignment |
| Demucs (Meta) | Music source separation for vocal isolation |
| Pyannote | Speaker diarization — who said what |
| CTranslate2 | Optimized Transformer inference on CPU and GPU |
| AudioSeal (Meta) | Invisible neural audio watermarking for AI provenance |
| Tauri | Native desktop app framework |


If you read this far, you’re our kind of person.
⭐ Star this repo so others can find it too.


Star History
