local-llm
Tag · Cards List

#local-llm

I was tired of "babysitting" my AI. So I spent 6 months building a C++20 Autonomous Software House that ships while I sleep

Reddit r/AI_Agents · 6h ago

Neon Sovereign is a native C++20/Vulkan autonomous software development workstation that uses a multi-agent swarm to execute software briefs end-to-end, running local LLM weights via Ollama/GGUF with no cloud dependency. The creator is seeking systems engineers and early testers as it enters Active Alpha.
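
Ollama's local HTTP API makes the basic shape of such a swarm easy to picture. A minimal sketch of one planner-to-coder handoff against a local Ollama server (the /api/generate endpoint is real; the role prompts, model choice, and Neon Sovereign's actual orchestration are assumptions):

```python
# Minimal sketch of a two-role "swarm" step against a local Ollama
# server. POST /api/generate is Ollama's real endpoint; the role
# prompts and model name below are hypothetical placeholders.
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "qwen2.5-coder"  # placeholder: any locally pulled model

def ask(prompt: str) -> str:
    """Send one non-streaming generation request to the local server."""
    body = json.dumps({"model": MODEL, "prompt": prompt, "stream": False})
    req = urllib.request.Request(
        OLLAMA_URL, data=body.encode(),
        headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]

brief = "Add a --verbose flag to the CLI and log each request."
plan = ask(f"You are the planner agent. Break this brief into steps:\n{brief}")
patch = ask(f"You are the coder agent. Implement step 1 as a diff:\n{plan}")
print(patch)
```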

#local-llm

@garrytan: Downloading now... 1M token context window with supposedly usable coding agent capability all on a 128GB Macbook Pro is…

X AI KOLs Following · 8h ago

Garry Tan highlights a model with a 1M token context window and coding agent capabilities running locally on a 128GB MacBook Pro, expressing excitement about the milestone.

#local-llm

Those of you who like Gemma4 models - how are you guys using them?

Reddit r/LocalLLaMA · 12h ago

A developer shares their mixed experience running Gemma4 and Qwen locally for coding tasks, noting issues with tool integration, loop handling, and task completion while asking the community for better usage strategies.
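
The loop-handling complaint has a blunt mitigation: cap the tool-call loop. A sketch against Ollama's /api/chat tool-calling interface (the response shape is hedged from memory; the model name and the read_file tool are placeholders):

```python
# Sketch of a capped tool-call loop against Ollama's /api/chat.
# The hard MAX_STEPS cap addresses the "model loops forever" issue.
import json, urllib.request

URL = "http://localhost:11434/api/chat"
TOOLS = [{  # one toy tool, in the OpenAI-style schema Ollama accepts
    "type": "function",
    "function": {
        "name": "read_file",
        "description": "Return the contents of a file",
        "parameters": {"type": "object",
                       "properties": {"path": {"type": "string"}},
                       "required": ["path"]},
    },
}]

def chat(messages):
    body = json.dumps({"model": "gemma", "messages": messages,  # placeholder model
                       "tools": TOOLS, "stream": False}).encode()
    req = urllib.request.Request(URL, data=body,
                                 headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as r:
        return json.loads(r.read())["message"]

messages = [{"role": "user", "content": "Summarize ./notes.txt"}]
MAX_STEPS = 5  # hard cap so a confused model cannot loop indefinitely
for _ in range(MAX_STEPS):
    msg = chat(messages)
    messages.append(msg)
    calls = msg.get("tool_calls") or []
    if not calls:                 # plain answer: we are done
        print(msg["content"])
        break
    for call in calls:            # execute each requested tool
        args = call["function"]["arguments"]
        result = open(args["path"]).read()
        messages.append({"role": "tool", "content": result})
```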

#local-llm

@AIExplorerTim: Someone just released a tool that converts PDFs into clean, structured Markdown at speeds up to 100 pages/second. No GPU required. No API costs. No messy parsing. Just raw, usable data. It handles with ease: • Tables → Perfectly ex…

X AI KOLs Timeline · 12h ago

OpenDataLoader is an open-source tool that converts PDFs into structured Markdown and JSON, supporting local processing speeds of up to 100 pages/second without requiring a GPU or incurring API costs, designed specifically for RAG pipelines and PDF accessibility automation.
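
OpenDataLoader's own API isn't shown in the post, so a sketch can only start from its output: Markdown headed for a RAG pipeline. A stdlib-only chunker over that output (the file name and size cap are illustrative):

```python
# Split converted Markdown on headings into RAG-sized chunks with a
# hard character cap. Pure stdlib; names here are illustrative.
import re

def chunk_markdown(md: str, max_chars: int = 2000):
    """Split on headings, then enforce a size cap per chunk."""
    sections = re.split(r"(?m)^(?=#{1,6} )", md)  # keep heading with its body
    for sec in sections:
        sec = sec.strip()
        while sec:
            yield sec[:max_chars]
            sec = sec[max_chars:]

md = open("report.md").read()     # output of the PDF -> Markdown step
chunks = list(chunk_markdown(md))
print(f"{len(chunks)} chunks ready for embedding")
```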

#local-llm

@leftcurvedev_: Anyone with 8GB or 12GB VRAM setups needs to understand that "-ncmoe" is the key flag to boost performance on llama.cpp…

X AI KOLs Timeline · 20h ago

The thread explains how the -ncmoe flag in llama.cpp improves performance for MoE models like Qwen3.6 35B A3B on limited VRAM (8-12 GB) by offloading some expert layers to CPU+RAM, with benchmarks showing up to a 5x speedup on an RTX 3070 Ti.
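
For concreteness, here is roughly what launching llama-server with that offload looks like, wrapped in Python (the model path and layer count are illustrative; --n-cpu-moe is the long form of the flag the tweet abbreviates, assuming a current llama.cpp build):

```python
# Sketch: launch llama-server with expert-layer offload to CPU+RAM.
# Tune N_CPU_MOE until the dense layers plus the remaining experts
# fit an 8-12 GB card; values here are placeholders.
import subprocess

MODEL = "qwen-moe-a3b.gguf"  # placeholder GGUF path
N_CPU_MOE = 20               # expert tensors of the first N layers stay on CPU

subprocess.run([
    "llama-server",
    "-m", MODEL,
    "-ngl", "99",                    # offload all layers to the GPU...
    "--n-cpu-moe", str(N_CPU_MOE),   # ...except these layers' expert weights
    "-c", "8192",
])
```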

#local-llm

Testing Local LLMs in Practice: Code Generation, Quality vs. Speed

Reddit r/LocalLLaMA · 20h ago

The author built a benchmark harness to evaluate local LLMs for autonomous Go code generation, focusing on log parser generation for SIEM pipelines, and published results comparing quality vs. speed.
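
The post doesn't include the harness itself, but its shape is easy to reconstruct: time the generation, then use the Go toolchain as a cheap correctness gate. A sketch assuming an OpenAI-compatible /v1/completions server and a toy prompt:

```python
# Rough shape of a quality-vs-speed harness: request Go code from a
# local server, time it, then check that the output compiles.
import json, pathlib, subprocess, tempfile, time, urllib.request

URL = "http://localhost:8080/v1/completions"  # llama-server's OpenAI-compatible API
PROMPT = ("Write a complete, compilable Go file with a function "
          "ParseLine(s string) (int, error) that extracts the syslog priority.")

t0 = time.time()
body = json.dumps({"prompt": PROMPT, "max_tokens": 512}).encode()
req = urllib.request.Request(URL, data=body,
                             headers={"Content-Type": "application/json"})
code = json.loads(urllib.request.urlopen(req).read())["choices"][0]["text"]
gen_seconds = time.time() - t0

with tempfile.TemporaryDirectory() as d:
    pathlib.Path(d, "main.go").write_text(code)
    subprocess.run(["go", "mod", "init", "bench"], cwd=d, capture_output=True)
    ok = subprocess.run(["go", "build", "./..."], cwd=d,
                        capture_output=True).returncode == 0

print(f"compiled={ok} gen_time={gen_seconds:.1f}s")
```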

#local-llm

@mitsuhiko: Nice! @antirez merged my tool parameter streaming changes into ds4. Means you can now just install the pi extension and…

X AI KOLs Timeline · yesterday

Developer mitsuhiko released an open-source Pi extension that integrates with ds4 to streamline running DeepSeek V4 Flash locally on macOS. The tool automates model downloads, quantization selection based on RAM, and server lifecycle management for a seamless local LLM experience.
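
The RAM-based quantization selection is the interesting bit. A naive stand-in for that heuristic (the thresholds and quant ladder are made up; the extension's real logic isn't in the post):

```python
# Pick a GGUF quantization level from physical RAM. Thresholds are
# illustrative guesses, not the extension's actual policy.
import os

def pick_quant(ram_gb: float) -> str:
    if ram_gb >= 128: return "Q8_0"   # plenty of headroom
    if ram_gb >= 64:  return "Q6_K"
    if ram_gb >= 32:  return "Q4_K_M"
    return "Q3_K_M"                   # tight: trade quality for fit

# Unix/macOS only: total physical memory in GiB.
ram_gb = os.sysconf("SC_PAGE_SIZE") * os.sysconf("SC_PHYS_PAGES") / 2**30
print(pick_quant(ram_gb))
```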

#local-llm

LaTA: A Drop-in, FERPA-Compliant Local-LLM Autograder for Upper-Division STEM Coursework

arXiv cs.AI · yesterday

The paper introduces LaTA, an open-source, FERPA-compliant local LLM autograder for upper-division STEM courses that runs on-premises hardware. It reports successful deployment at Oregon State University with improved student performance and high grading accuracy.
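
The paper's pipeline is surely more involved, but the core grading call plausibly reduces to rubric plus answer into a local endpoint. A hedged sketch (the endpoint, prompt, and score parsing are all assumptions, not LaTA's published interface):

```python
# Hypothetical on-prem grading call: rubric + student answer into a
# local OpenAI-compatible endpoint, integer score out. The real
# system's prompting and FERPA controls are more involved.
import json, urllib.request

def grade(rubric: str, answer: str) -> int:
    prompt = (f"Rubric:\n{rubric}\n\nStudent answer:\n{answer}\n\n"
              "Reply with only an integer score from 0 to 10.")
    body = json.dumps({"prompt": prompt, "max_tokens": 4}).encode()
    req = urllib.request.Request("http://localhost:8080/v1/completions",
                                 data=body,
                                 headers={"Content-Type": "application/json"})
    text = json.loads(urllib.request.urlopen(req).read())["choices"][0]["text"]
    return int(text.strip())
```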

#local-llm

@rohanpaul_ai: atomic[.]chat just made Gemma 4 26B faster inside LLaMA.cpp. making token generation about 40% faster in its MacBook Pr…

X AI KOLs Following · yesterday

atomic.chat has optimized Gemma 4 26B inference in LLaMA.cpp, achieving ~40% faster token generation on MacBook Pro M5 Max using Multi-Token Prediction (MTP) speculative decoding. This is a notable win for local AI users running desktop apps, coding agents, and private on-device assistants.
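
For readers new to speculative decoding, the accept/reject core is small. A toy greedy-acceptance sketch with stand-in callables (real implementations verify all k draft tokens in one batched forward pass; MTP's extra prediction heads play the draft role here):

```python
# Toy greedy-acceptance core of speculative decoding: a cheap draft
# proposes k tokens; the target keeps the agreeing prefix plus one
# corrected or bonus token. Both "models" are stand-in functions.
from typing import Callable, List

def speculative_step(prefix: List[int],
                     draft: Callable[[List[int], int], List[int]],
                     target_next: Callable[[List[int]], int],
                     k: int = 4) -> List[int]:
    proposal = draft(prefix, k)            # k cheap guesses
    accepted: List[int] = []
    for tok in proposal:
        expected = target_next(prefix + accepted)  # target's choice here
        if tok != expected:
            accepted.append(expected)      # first mismatch: take target's token
            return accepted
        accepted.append(tok)               # agreement: keep the "free" token
    accepted.append(target_next(prefix + accepted))  # bonus token after full accept
    return accepted
```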

#local-llm

Extracted MTP tensor GGUFs - smaller donor models for grafting.

Reddit r/LocalLLaMA · yesterday

The author provides extracted GGUF files containing only MTP tensors for Qwen3.6 models, allowing users to graft tensors with a significantly reduced download size compared to full model files.
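
Finding those tensors in a full model is straightforward with the gguf Python package that ships with llama.cpp; a read-only sketch (the reader API is recalled from memory, and the "mtp" name substring is a guess at the tensor naming scheme):

```python
# Locate MTP tensors in a GGUF file. API shape and the "mtp" name
# filter are assumptions; verify against your model's tensor names.
from gguf import GGUFReader

reader = GGUFReader("qwen-full.gguf")  # placeholder path
mtp = [t for t in reader.tensors if "mtp" in t.name.lower()]
total_mb = sum(int(t.n_bytes) for t in mtp) / 2**20
print(f"{len(mtp)} MTP tensors, {total_mb:.0f} MiB to extract")
```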

#local-llm

Are local models becoming “good enough” faster than expected?

Reddit r/LocalLLaMA · yesterday

The post discusses the growing viability of local AI models for everyday tasks, suggesting a shift toward hybrid architectures that optimize for cost and latency rather than relying solely on frontier cloud models.
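
One concrete reading of "hybrid" is local-first with heuristic escalation. A sketch with stand-in ask_local/ask_cloud callables and made-up thresholds:

```python
# Local-first router: answer cheaply on-device by default, escalate
# to a cloud model on simple heuristics. All names and thresholds
# here are illustrative stand-ins.
def route(prompt: str, ask_local, ask_cloud, max_local_ctx: int = 8000):
    hard = len(prompt) > max_local_ctx or "prove" in prompt.lower()
    if hard:
        return ask_cloud(prompt)    # pay for the frontier model
    answer = ask_local(prompt)      # free, low-latency path
    if answer.strip().lower().startswith("i don't know"):
        return ask_cloud(prompt)    # local model punted: escalate
    return answer
```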

#local-llm

DELIGHT – self-hosted AI engineering autopilot: local LLM + browser farm + repo graph + P2P compute

Reddit r/AI_Agents · yesterday

DELIGHT is a self-hosted AI engineering autopilot that combines local LLMs, a browser farm, and a semantic repo graph to automate development tasks without sending data to the cloud.

#local-llm

AMD to release slottable GPU

Reddit r/LocalLLaMA · yesterday

AMD is set to release new slottable PCIe-based Instinct GPUs aimed at the enterprise AI market, offering a potential new hardware option for local LLM deployment.

#local-llm

Openclaw as sys admin

Reddit r/openclaw · yesterday

The author describes using Openclaw as a system administrator on Linux servers, leveraging a local Qwen 3.6 27B model for security audits, system updates, and kiosk-mode deployments, all without external internet access.

#local-llm

DavidAU/Qwen3.6-27B-Heretic-Uncensored-FINETUNE-NEO-CODE-Di-IMatrix-MAX-GGUF

Hugging Face Models Trending · 2026-04-29

A community-finetuned, uncensored version of the Qwen 3.6 27B model featuring high-precision GGUF quantizations.

#local-llm

froggeric/Qwen-Fixed-Chat-Templates

Hugging Face Models Trending · 2026-04-23

This repository provides fixed Jinja chat templates for Qwen 3.5 and 3.6, addressing rendering errors, token waste, and missing features in the official templates for engines like LM Studio and llama.cpp.
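
A template fix like this is easy to smoke-test outside any engine by rendering it with jinja2, which is the templating that llama.cpp and LM Studio emulate. The template string below is a trivial stand-in, not the repo's fixed Qwen template:

```python
# Render a chat template directly with jinja2 and eyeball the output
# for rendering errors or wasted tokens. Template and special tokens
# here are placeholders.
from jinja2 import Template

template = Template(
    "{% for m in messages %}<|{{ m.role }}|>{{ m.content }}<|end|>\n{% endfor %}"
    "<|assistant|>"
)
print(template.render(messages=[
    {"role": "system", "content": "You are terse."},
    {"role": "user", "content": "ping"},
]))
```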

#local-llm

Qwen3.6 35B + the right coding scaffold got my local setup to 9/10 on real Go tasks

Reddit r/LocalLLaMA · 2026-04-23

A developer achieved a 9/10 pass rate on real Go tasks using a routed local setup built around Qwen3.6 35B and the little-coder scaffold, showing strong local performance when paired with the right tooling.
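
The post credits the scaffold as much as the model, and the essence of such a scaffold is a generate-test-retry loop. A sketch with stand-in generate_patch/apply_patch hooks (little-coder's actual routing logic isn't described in the post):

```python
# Generate-test-retry scaffold: apply a model-written patch, run the
# Go test suite, and feed failures back until green or out of budget.
# generate_patch/apply_patch are hypothetical stand-ins.
import subprocess

def scaffold(task: str, generate_patch, apply_patch, attempts: int = 3) -> bool:
    feedback = ""
    for _ in range(attempts):
        apply_patch(generate_patch(task + feedback))
        result = subprocess.run(["go", "test", "./..."],
                                capture_output=True, text=True)
        if result.returncode == 0:
            return True                 # tests green: task done
        feedback = "\n\nTests failed:\n" + result.stdout + result.stderr
    return False
```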

#local-llm

Qwen 3.6 is actually useful for vibe-coding, and way cheaper than Claude

Reddit r/LocalLLaMA · 2026-04-23

A user demonstrates that Qwen 3.6 27B/35B served locally via llama-server cuts Claude Code API costs from $142 to under $4 for an 8-hour vibe-coding session, putting the payback period on a $4,500 dual-RTX 3090 rig at roughly 30 days.
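
The payback claim checks out on the post's own numbers:

```python
# Sanity-check the payback math: ~$138 saved per 8-hour session
# against a $4,500 rig lands near the claimed 30 days at roughly
# one session per day.
cloud, local, rig = 142.0, 4.0, 4500.0
saved_per_session = cloud - local                 # ~$138
sessions_to_break_even = rig / saved_per_session
print(f"{sessions_to_break_even:.1f} sessions")   # ~32.6
```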

#local-llm

I tested Qwen3.6-27B, Qwen3.6-35B-A3B, Qwen3.5-27B and Gemma 4 on the same real architecture-writing task on an RTX 5090

Reddit r/LocalLLaMA · 2026-04-23

A hands-on benchmark of four local LLMs (Qwen3.6-27B, Qwen3.6-35B-A3B, Qwen3.5-27B, and Gemma 4) on a 20k-token architecture-writing task shows Qwen3.6-27B delivering the best overall balance of clarity, completeness, and usefulness on an RTX 5090.

#local-llm

What impedes apps using AI to make the user’s device the server running a local LLM?

Reddit r/singularity · 2026-04-22

A user reflects on why more apps don't run local LLMs directly on phones, noting that Gemma models in the 2-4B range already work offline and could eliminate server costs while maintaining near-GPT-4o quality.
