Cards tagged #on-device

AI Rep Counter On-Device - Workout Tracker & Form Coach

Reddit r/AI_Agents · 3h ago

AI Rep Counter is an on-device iOS app that counts reps and analyzes workout form via the iPhone camera, offering privacy modes, workout metrics, and widgets.

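The app's internals aren't published, but camera-based rep counters are commonly built as a hysteresis state machine over a joint angle extracted from a pose model. A minimal sketch of that idea (the angle thresholds and sample values below are illustrative, not from the app):

```python
def count_reps(elbow_angles, down_thresh=160.0, up_thresh=60.0):
    """Count reps with a two-state hysteresis machine over a joint angle.

    A rep is counted each time the angle drops below `up_thresh`
    (curl completed) after having been above `down_thresh` (arm
    extended). Hysteresis keeps jitter around a single threshold
    from producing phantom reps. Thresholds are illustrative.
    """
    reps = 0
    extended = False  # have we seen the "down" (extended) position?
    for angle in elbow_angles:
        if angle > down_thresh:
            extended = True
        elif angle < up_thresh and extended:
            reps += 1
            extended = False
    return reps

# Two simulated bicep curls: extend (170) -> curl (50) -> extend -> curl
angles = [170, 120, 50, 170, 110, 45]
print(count_reps(angles))  # 2
```

In a real app the angle stream would come from per-frame pose keypoints; the counting logic itself stays this simple.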

@akshay_pachaar: this TTS model generates speech 167x faster than you can hear it. Supertonic is an on-device TTS engine that runs via O…

X AI KOLs Following · 7h ago

Supertonic is a new open-source TTS engine that runs on-device via ONNX, supporting 31 languages and outperforming ElevenLabs in speed, even on a Raspberry Pi without a GPU.


Codex on your phone

Reddit r/singularity · 23h ago

An implementation or adaptation of OpenAI's Codex model for mobile devices, enabling code generation and assistance on smartphones.


Got local Qwen 3.5/3.6 generating meeting summaries entirely offline on an M4 Max. Demo with Wi-Fi off. This is the future.

Reddit r/LocalLLaMA · yesterday

The Hedy meeting app now supports fully offline AI summaries using local models like Qwen and Gemma via llama.cpp, with options for bring-your-own-model and hardware-aware model selection. The update enables Wi-Fi-free operation on Apple Silicon and Windows GPUs, though cloud still offers higher speed and quality.

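Hedy's pipeline isn't shown, but fully offline summarization with a small local model typically means splitting the transcript into overlapping chunks that fit the model's context window, summarizing each, then merging. A minimal chunker sketch (window and overlap sizes are hypothetical):

```python
def chunk_transcript(words, max_words=512, overlap=64):
    """Split a transcript into overlapping word chunks sized to fit a
    small local model's context window. The overlap preserves context
    across chunk boundaries. Sizes are illustrative, not from Hedy.
    """
    chunks, start = [], 0
    step = max_words - overlap
    while start < len(words):
        chunks.append(words[start:start + max_words])
        start += step
    return chunks

transcript = [f"w{i}" for i in range(1000)]  # stand-in for a transcript
chunks = chunk_transcript(transcript)
print(len(chunks))  # 3
```

Each chunk would then be fed to the local model (e.g. via llama.cpp) with a summarization prompt, and the partial summaries combined in a final pass.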

@GoJun315: Open-source TTS that runs locally and beats ElevenLabs. Supertonic, a speech synthesis model that runs entirely on-device, no internet required, zero API costs. - Only 99M parameters, 167x faster than real-time on M4 Pro, runs on Raspberry Pi - Supports 31 languages, covering…

X AI KOLs Timeline · yesterday

Supertonic is a lightning-fast, on-device TTS model with 99M parameters, supporting 31 languages. It runs locally with no API costs, outperforms cloud TTS on accuracy for numbers, phone numbers, and technical terms, and can be installed via Python, Node.js, Rust, Go, and more.

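The "167x faster than real-time" figure is a real-time factor: seconds of audio produced per second of synthesis time. A quick sketch of the arithmetic (the timing numbers are illustrative):

```python
def real_time_factor(audio_seconds, synthesis_seconds):
    """Real-time factor for a TTS engine.

    RTF > 1 means the engine synthesizes speech faster than it can be
    played back; 167x means ~10 s of audio in ~0.06 s of compute.
    """
    return audio_seconds / synthesis_seconds

# Hypothetical timing: 10 s of speech synthesized in 10/167 s.
print(round(real_time_factor(10.0, 10.0 / 167)))  # 167
```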

@googlegemma: Gemma 4 up to 3x faster, directly in your phone! Check out the difference Speculative Decoding makes! Multi-Token Predi…

X AI KOLs Timeline · 2026-05-07

Google's Gemma 4 achieves up to 3x faster inference speeds through speculative decoding and multi-token prediction, enabling efficient on-device deployment.

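The post doesn't show Gemma's exact pipeline, but the core of greedy speculative decoding is simple: a cheap draft model proposes several tokens, the expensive target model verifies them in one pass, and accepted tokens come out at a fraction of the cost while the output stays identical to plain greedy decoding from the target. A toy sketch with models as plain functions (all names and the token scheme are hypothetical):

```python
def speculative_decode(target, draft, prompt, n_tokens, k=3):
    """Toy greedy speculative decoding.

    `target` and `draft` are stand-ins for models: functions mapping a
    context tuple to the next token. The draft proposes k tokens; the
    target verifies them, keeping the longest agreeing prefix and
    emitting its own token at the first mismatch. Output always equals
    what greedy decoding with `target` alone would produce.
    """
    out = list(prompt)
    while len(out) - len(prompt) < n_tokens:
        # 1) Cheap draft model proposes k tokens autoregressively.
        ctx, proposal = list(out), []
        for _ in range(k):
            t = draft(tuple(ctx))
            proposal.append(t)
            ctx.append(t)
        # 2) Target verifies; on mismatch, discard the rest of the draft.
        for t in proposal:
            if len(out) - len(prompt) >= n_tokens:
                break
            want = target(tuple(out))
            out.append(want)  # accepted token == target's greedy token
            if want != t:
                break
    return out[len(prompt):]

count_up = lambda ctx: ctx[-1] + 1  # toy "model": next integer
print(speculative_decode(count_up, count_up, (0,), 5))  # [1, 2, 3, 4, 5]
```

With a perfect draft the loop emits up to k tokens per target pass; with a useless draft it degrades to ordinary one-token-at-a-time decoding, never to a wrong output.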

BlankOut

Product Hunt · 2026-04-21

BlankOut is a tool that redacts sensitive content in documents on-device before sharing with AI services.

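BlankOut's detectors aren't documented here; the general on-device pattern is to run redaction in-process, with no network calls, before any text leaves the machine. A minimal regex sketch (the patterns are hypothetical and far from exhaustive; a real redactor would use many more detectors, possibly ML-based):

```python
import re

# Illustrative detectors only, not BlankOut's actual rules.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def redact(text):
    """Replace each match with a [TYPE] placeholder, entirely locally."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

print(redact("Mail jane.doe@example.com or call +1 (555) 123-4567."))
# Mail [EMAIL] or call [PHONE].
```

The redacted text, not the original, is what would then be sent to a cloud AI service.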

@rohanpaul_ai: Gemma 4 (specifically its edge-optimized E2B and E4B variants) running fully offline on an iPhone via apps like Locally…

X AI KOLs Following · 2026-04-19

Google’s Gemma 4 E2B/E4B quantized variants now run fully offline on iPhone via apps like Locally AI, leveraging the Apple Neural Engine for on-device inference.


Announcing Gemma 3n preview: Powerful, efficient, mobile-first AI

Google DeepMind Blog · 2025-05-20

Google announces Gemma 3n preview, a mobile-first open AI model optimized for on-device inference on phones, tablets, and laptops. Built on a new architecture developed with hardware partners like Qualcomm and MediaTek, Gemma 3n uses innovations like Per-Layer Embeddings to achieve fast performance with minimal memory footprint (2-3GB), while supporting multimodal capabilities.

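The 2-3 GB figure is roughly what weight storage alone predicts for a model of this scale at low precision; innovations like Per-Layer Embeddings then reduce how much of that must be resident at once. A back-of-the-envelope weight-memory estimate (parameter count and precision are illustrative):

```python
def model_memory_gb(n_params, bits_per_param):
    """Rough weight-memory estimate: parameters x precision, in GiB.

    Ignores activations, KV cache, and runtime overhead, so it is a
    lower bound on real usage.
    """
    return n_params * bits_per_param / 8 / 2**30

# A 2B-parameter model at 8-bit quantization: ~1.9 GiB of weights,
# in the ballpark of the 2-3 GB footprint quoted for Gemma 3n.
print(round(model_memory_gb(2e9, 8), 1))  # 1.9
```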