I’m building an open-source LLM app for writing/RP and recently added desktop pets + AI agents

Reddit r/ArtificialInteligence Tools

Summary

The author introduces Vellium, an open-source, cross-platform desktop application for interacting with LLMs, featuring new desktop widgets and a visual interface for AI agents, with support for MCP servers and file manipulation.

Hey everyone, I’m building Vellium, an open-source, cross-platform app for working with LLMs in desktop workflows. The goal is to make it easier to use different models for writing, automation, coding help, file-based workflows, and agent experiments from one interface. The latest update adds two larger features.

The first is desktop widgets. You can create a small interactive AI widget and place it on your desktop above other windows. It can react visually, expose a small hover interface, and let you send quick messages without opening the full app window. This part is still experimental, so I’m looking for feedback on whether this kind of lightweight desktop interaction is actually useful in daily LLM workflows.

The second is Agents. The app now has an optional Agents tab, disabled by default, which can be enabled in settings. The idea is to provide a more visual interface for CLI-like agent workflows. Agents can read documents, inspect folders, run terminal commands, help with code, edit files, and use connected tools.

Vellium also supports MCP servers. If you already have MCP servers connected in the app, you can attach them to agents and use them inside the same workflow (see the config sketch below).

Apart from these bigger additions, I also fixed a lot of bugs and added smaller improvements. For example, chat mode now supports custom fields, so users can add or remove fields depending on their workflow.

The project is still evolving, and some parts are experimental, but I’d appreciate feedback from people interested in open-source LLM apps, agent interfaces, MCP workflows, or desktop AI tooling.

GitHub: [https://github.com/tg-prplx/vellium](https://github.com/tg-prplx/vellium)
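For readers unfamiliar with how MCP servers get wired up: the post doesn't show Vellium's exact configuration format, but most MCP clients accept a small JSON block that names a command to launch each server. A minimal sketch, assuming Vellium follows the common `mcpServers` convention; the server name and the workspace path are placeholders, and `@modelcontextprotocol/server-filesystem` is the reference filesystem server from the MCP project:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/workspace"]
    }
  }
}
```

Once a server like this is connected, the tools it exposes (here, file reads and writes under the allowed path) become available to whichever agents it is attached to.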

Similar Articles

vllm-project/vllm v0.19.1

GitHub Releases Watchlist

vLLM v0.19.1 released: a fast and easy-to-use open-source library for LLM inference and serving with state-of-the-art throughput, supporting 200+ model architectures and diverse hardware including NVIDIA/AMD GPUs and CPUs.

Open WebUI Desktop Released!

Reddit r/LocalLLaMA

Open WebUI Desktop launches as a native app letting users run local LLMs or connect to remote servers without Docker or terminal setup, featuring offline operation, system-wide voice input, and floating chat overlay.

vllm-project/vllm v0.20.0

GitHub Releases Watchlist

vLLM v0.20.0 released: an open-source library for high-throughput LLM inference and serving, featuring PagedAttention and support for various hardware architectures.