local-ai

#local-ai

@mronge: https://x.com/mronge/status/2052846432969720202

X AI KOLs Timeline · 18h ago

A practical guide on setting up an always-on AI agent on a Mac mini, covering hardware selection, cloud vs. local AI model tradeoffs, and agent system choices for automating tasks like sales reporting and social media suggestions.

#local-ai

CyberSecQwen-4B: Why Defensive Cyber Needs Small, Specialized, Locally-Runnable Models

Hugging Face Blog · 20h ago

CyberSecQwen-4B is a small, specialized 4B parameter model fine-tuned for defensive cybersecurity tasks, designed to run locally on a single GPU, addressing privacy, cost, and air-gapped deployment needs.
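
The card doesn't include code; as a rough sketch, a 4B model of this kind can usually be loaded for single-GPU inference with the standard transformers API (the repo id below is a placeholder, not the model's published path):

from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "example-org/CyberSecQwen-4B"  # placeholder repo id, not confirmed
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, device_map="auto", torch_dtype="auto"  # fits on one GPU at 4B scale
)

prompt = "List the indicators of compromise in this log excerpt: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
out = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(out[0], skip_special_tokens=True))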

#local-ai

@oliviscusAI: Someone just open-sourced a desktop app that generates 3D models from images and runs 100% locally. It's called Modly. …

X AI KOLs Timeline · 21h ago

Modly is an open-source desktop app that generates fully textured 3D meshes from images, running 100% locally on your GPU with pluggable AI model extensions.

#local-ai

Built a JARVIS-style assistant with wake word, vision mode, local voice cloning, and LLM-generated system commands

Reddit r/ArtificialInteligence · 22h ago

A developer built a JARVIS-style personal assistant called CYBER with wake word activation, local voice cloning via XTTS v2, vision mode, and LLM-generated system commands, all running locally without cloud dependencies.
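
The post names XTTS v2 for the local voice-cloning piece; with the Coqui TTS package, that step typically looks like the sketch below (the audio file paths are assumptions):

from TTS.api import TTS

# Local voice cloning with XTTS v2 via Coqui TTS; the .wav paths are assumed.
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2")
tts.tts_to_file(
    text="Systems online. How can I help?",
    speaker_wav="my_voice_sample.wav",  # short reference clip of the voice to clone
    language="en",
    file_path="reply.wav",
)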

#local-ai

4GB "Gemini Nano" model GGUF anyone?

Reddit r/LocalLLaMA · yesterday

A user inquires about the specific identity of a ~4GB AI model (likely Gemini Nano) silently downloaded by Chrome for on-device features, and requests a GGUF version for local execution via llama.cpp.
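
For context, once such a GGUF exists, running it locally takes only a few lines with the llama-cpp-python bindings (the filename below is hypothetical; no Gemini Nano GGUF is confirmed in the thread):

from llama_cpp import Llama

# Minimal local inference over a GGUF file; the filename is hypothetical.
llm = Llama(
    model_path="gemini-nano-4b.Q4_K_M.gguf",  # assumed name, no official release
    n_ctx=4096,       # context window
    n_gpu_layers=-1,  # offload all layers to the GPU if one is available
)
out = llm("Q: What does Chrome run on-device?\nA:", max_tokens=64)
print(out["choices"][0]["text"])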

#local-ai

@rumgewieselt: Now it's getting crazy ... 3x 1080 Ti (Pascal, 33GB VRAM) Qwen 3.6 27B MTP with 196K TurboQuant ~28-30 t/s consistently

X AI KOLs Timeline · yesterday

A user demonstrates successful local inference of a 27B parameter Qwen model across three GTX 1080 Ti GPUs, achieving approximately 28-30 tokens per second using TurboQuant optimization.
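
The tweet doesn't include a config; whatever stack the poster used, the llama.cpp equivalent of spreading one model across three cards is a tensor split, e.g. via the Python bindings (path and ratios below are illustrative):

from llama_cpp import Llama

# Illustrative three-GPU split; model path and split ratios are assumptions.
llm = Llama(
    model_path="qwen-27b.Q4_K_M.gguf",  # hypothetical quantized model file
    n_gpu_layers=-1,                    # offload every layer
    tensor_split=[1.0, 1.0, 1.0],       # weight the three GPUs evenly
    n_ctx=8192,
)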

#local-ai

Kuku: open source

Product Hunt · yesterday

Kuku is an open-source tool that acts as a local second brain for managing AI interactions.

#local-ai

I built a local AI companion with GWT, IIT proxy, ChromaDB hybrid retrieval, and Ollama fallback — here's every architectural decision I made and why

Reddit r/artificial · yesterday

The author shares a locally runnable AI companion built with Python, Gemini, and Ollama, featuring a custom cognitive architecture based on Global Workspace Theory and an Integrated Information Theory proxy for personality modeling.
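
The post's full architecture is longer, but the hybrid-retrieval-plus-fallback shape it describes reduces to something like the sketch below (the collection name, model names, and the stubbed cloud call are all assumptions):

import chromadb
import ollama

client = chromadb.Client()
memories = client.get_or_create_collection("companion_memories")  # assumed name
memories.add(ids=["m1"], documents=["User prefers concise answers."])

def recall(query: str, k: int = 1) -> list[str]:
    # Vector retrieval over stored memories; Chroma embeds texts by default.
    res = memories.query(query_texts=[query], n_results=k)
    return res["documents"][0]

def call_primary_model(prompt: str, context: str) -> str:
    # Stub standing in for the post's cloud (Gemini) call.
    raise RuntimeError("cloud model unavailable")

def respond(prompt: str) -> str:
    context = "\n".join(recall(prompt))
    try:
        return call_primary_model(prompt, context)
    except Exception:
        # Fall back to a local model served by Ollama, as the post describes.
        r = ollama.chat(model="llama3", messages=[  # model name assumed
            {"role": "system", "content": f"Relevant memories:\n{context}"},
            {"role": "user", "content": prompt},
        ])
        return r["message"]["content"]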

#local-ai

I've created the fastest local AI engine for Apple Silicon. Optimised for agentic use.

Reddit r/LocalLLaMA · yesterday

The author announces the release of 'lightning-mlx', a local AI engine optimized for Apple Silicon that achieves high token throughput for coding agents and tool-calling workflows.
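
The engine's own API isn't shown in the card; for a sense of the baseline it competes with, stock Apple Silicon inference via the mlx-lm package looks like this (the model repo is assumed):

from mlx_lm import load, generate

# Baseline MLX inference for comparison; this is mlx-lm, not 'lightning-mlx'.
model, tokenizer = load("mlx-community/Qwen2.5-7B-Instruct-4bit")  # assumed repo
text = generate(model, tokenizer,
                prompt="Write a one-line shell command to count files.",
                max_tokens=128)
print(text)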

#local-ai

Multi-Token Prediction (MTP) for LLaMA.cpp - Gemma 4 speedup by 40%

Reddit r/LocalLLaMA · yesterday

A new implementation of Multi-Token Prediction (MTP) in llama.cpp achieves a 40% speedup for Gemma 4 models, tested on a MacBook Pro M5 Max. The post links quantized GGUF models and the patched source code.
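
The patch itself lives in the linked source; a simple way to sanity-check a claimed speedup is to time decoded tokens per second before and after the change, e.g. with the llama-cpp-python bindings (the model path is an assumption):

import time
from llama_cpp import Llama

# Rough throughput check; the GGUF path is an assumption.
llm = Llama(model_path="gemma-4.Q4_K_M.gguf", n_gpu_layers=-1, verbose=False)

t0 = time.perf_counter()
out = llm("Explain multi-token prediction in one paragraph.", max_tokens=256)
elapsed = time.perf_counter() - t0

n_tokens = out["usage"]["completion_tokens"]
print(f"{n_tokens / elapsed:.1f} tokens/s")  # run once per build and compare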

#local-ai

eTPS Site Plan – Simple Leaderboard + What You’ll Actually See

Reddit r/artificial · yesterday

The author introduces the site plan for effectiveTPS, a tool designed to compare local AI models using a new 'effective TPS' metric alongside raw speed and latency. It aims to provide a simple leaderboard that highlights useful output quality over raw marketing numbers.
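
The post doesn't pin the formula down; one plausible reading of 'effective TPS' discounts raw throughput by the fraction of output that is actually useful (the definition below is an assumption, not the project's published metric):

# Hypothetical 'effective TPS': raw tokens/s scaled by the useful-output ratio.
def effective_tps(total_tokens: int, useful_tokens: int, wall_seconds: float) -> float:
    raw_tps = total_tokens / wall_seconds
    quality_ratio = useful_tokens / total_tokens
    return raw_tps * quality_ratio  # equivalently: useful_tokens / wall_seconds

# 512 tokens in 10 s, of which 384 were useful -> 38.4 effective vs 51.2 raw.
print(effective_tps(total_tokens=512, useful_tokens=384, wall_seconds=10.0))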

#local-ai

Reefy

Product Hunt · yesterday

Reefy turns any PC into a private AI machine, running models entirely on the user's own hardware.

#local-ai

Nemotron Labs: What OpenClaw Agents Mean for Every Organization

NVIDIA Blog · 2026-04-30

OpenClaw, an open-source persistent AI assistant, has become the most-starred GitHub project, sparking debate over security and autonomy. NVIDIA is collaborating to enhance security and releasing NemoClaw as a secure reference implementation.

#local-ai

poolside/Laguna-XS.2

Hugging Face Models Trending · 2026-04-23

Poolside releases Laguna XS.2, a 33B parameter MoE model with 3B activated parameters designed for agentic coding and local deployment on Macs with 36GB RAM.

#local-ai

Running Qwen3.6-35B-A3B Locally for Coding Agent: My Setup & Working Config

Reddit r/LocalLLaMA · 2026-04-22

A detailed guide for running the 35B-parameter Qwen3.6 model locally on Apple Silicon with llama.cpp to power the pi coding agent, including optimized configuration flags and sampling parameters.
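
The post's exact flags are in the thread; the general shape of such a config carries over to the llama.cpp Python bindings, where the same knobs are constructor and sampling arguments (every value below is a placeholder, not the post's tuned setting):

from llama_cpp import Llama

# Shape of a tuned local-coding-agent config; all values are placeholders.
llm = Llama(
    model_path="qwen3.6-35b-a3b.Q4_K_M.gguf",  # assumed filename
    n_ctx=32768,       # large context for agentic coding sessions
    n_gpu_layers=-1,   # full Metal offload on Apple Silicon
    flash_attn=True,
)
resp = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Refactor this function to be pure: ..."}],
    temperature=0.7, top_p=0.8, top_k=20,  # placeholder sampling parameters
)
print(resp["choices"][0]["message"]["content"])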

#local-ai

@svpino: Hermes with Gemma 4 or Qwen 3.5 is literally the best combo you can run locally on your computer. You've got to give th…

X AI KOLs Following · 2026-04-22

Developer claims Hermes fine-tunes of Gemma 4 and Qwen 3.5 deliver the best local LLM performance, suggesting they rival paid BigAI models.

#local-ai

Doing real coding work locally for the first time

Reddit r/LocalLLaMA · 2026-04-21

Developer achieves productive local agentic coding with Qwen3.6-35B 4-bit MLX and pi.dev tool, completing real tickets efficiently on current hardware.

#local-ai

My 7900XTX is autonomous with qwen 3.6 👀 wow 😍

Reddit r/LocalLLaMA · 2026-04-20

A user demonstrates Qwen 3.6 running autonomously on an AMD 7900 XTX GPU, building an Android app entirely locally, a feat the poster describes as science fiction made real.

#local-ai

From RTX to Spark: NVIDIA Accelerates Gemma 4 for Local Agentic AI

NVIDIA Blog · 2026-04-02

NVIDIA and Google collaborate to optimize Gemma 4 models for local deployment across RTX GPUs, DGX Spark, and Jetson devices, enabling efficient on-device agentic AI with support for reasoning, coding, multimodal capabilities, and 35+ languages.

#local-ai

GGML and llama.cpp join HF to ensure the long-term progress of Local AI

Hugging Face Blog · 2026-02-20

GGML and llama.cpp have joined Hugging Face to ensure long-term sustainability of local AI development. Georgi Gerganov's team will maintain full autonomy over the projects while receiving resources to scale community support and improve integration between llama.cpp inference and transformers model definitions.
