Frona - self-hosted personal AI assistant

Reddit r/AI_Agents Products

Summary

Frona is a newly released self-hosted personal AI assistant built in Rust, emphasizing security through sandboxed environments, a unified policy engine, and vault-backed credential management.

Hey, Since LLM tool calling became a thing, the dominant pattern has been: ship an AI assistant that can execute code, browse the web, and hit your APIs, and figure out the security story later. Frona started as a pushback against that pattern. Frona is a personal AI assistant. You create autonomous agents that browse the web, run code, build applications, make phone calls, connect to messaging channels, delegate work to each other, and remember context across conversations, all within sandboxed environments with controlled access to your files, network, and credentials. You give them a task and they figure out how to get it done. You deploy it on your own infrastructure. The platform is built from the ground up with security in mind, and the engine is written in Rust, so it's fast, lightweight, and runs everything in a single process. It's out now. Thought this community would appreciate the approach since it's built for self-hosters. It's a finished product, not a kit you spend a weekend assembling. Every tool call, channel message, and sandbox decision goes through one policy engine. Credentials are vault-backed, sandboxes are per-principal, SSO is built in, MCP servers are first-class. You don't write auth glue, harden containers, hand-roll vault integrations, or duct-tape channels onto agents at 2am. It's all there on day one. Think of it as a more user-friendly OpenClaw or Hermes Agent, but built on top of security from day one instead of duct-taping it on later or punting the problem to you to figure out. There's a full comparison vs. OpenClaw and Hermes Agent (see comments for the link) if you want the long version. The short version of what makes it different: **Sandbox without a container per agent** OpenClaw and Hermes both reach for Docker when they sandbox, so each new agent (and sometimes each new MCP server) becomes a piece of container infra you have to manage. Frona runs as a single Rust process that spawns sandboxed child processes for the work, one per CLI tool call, one per MCP server, one per deployed app, with syscall-level filtering applied per principal. With 10 agents and 5 MCP servers, you have one engine and a handful of sandboxed children, not 10 containers. And it's on by default. The engine refuses to start if the sandbox can't initialize. **One policy engine for everything** Tool access, filesystem rules, network destinations, port binds, channel authorization, signal handling, all written in the same policy language. "This MCP server can only reach `api.github.com:443`", "this channel only accepts inbound from these paired numbers", "this agent can use the shell tool only when delegated by the system agent". Those are one-line rules, not custom code. Per-agent network is full / restricted to specific hosts / fully offline, same for filesystem paths, same for resource limits. **Dual-LLM pattern for inbound messages** Inbound channel messages from external senders are untrusted input. That's exactly where prompt injection lands. Frona's dispatcher implements Simon Willison's Dual LLM pattern: a quarantined LLM with a stripped-down tool registry handles untrusted content (it can only tag and end its task, no replies, no general tools), and a privileged LLM only sees content that policy has cleared. So a hostile SMS can't trick the responding agent into leaking data or running tools. **Vault-backed credentials, never in chat** No pasting API keys into prompts and hoping the model forgets them (it won't). Agents request credentials, you get a notification with what they want and why, you approve with a time limit (one-time, hours, days, permanent). Local credentials are AES-256-GCM at rest. Or plug into your existing vault: 1Password, Bitwarden (incl. self-hosted), HashiCorp Vault, KeePass, Keeper. Sandboxed processes get ephemeral tokens scoped to that one process and lifetime. Leak the token, blast radius is bounded. **MCP, but token-efficient** MCP servers are first-class and each runs in its own sandbox with its own policies. The default *bridge mode* exposes all your MCP servers behind a single CLI tool to the LLM instead of advertising every MCP tool's schema individually. On an agent with 5 MCP servers and 60+ tools, that's thousands of tokens saved per turn. Context goes to your task, not to JSON schemas the model doesn't need yet. **Persistent browser sessions** Agents get named browser profiles that keep cookies, local storage, and sessions across conversations. Log in once, stay logged in. Hit a CAPTCHA or 2FA and it pauses, hands you a debugger link, and resumes when you're done. **Other stuff worth mentioning** * BYO LLM: Ollama, Anthropic, OpenAI, Groq, DeepSeek, Gemini, and about a dozen more * Simple deployment: 3 containers via Docker Compose: Frona, Browserless (browser automation), SearXNG (private web search) * Multi-user with SSO: Google, Okta, Keycloak, Authentik, any OIDC * Apps: ask the agent to build you a tool/dashboard/integration, approve, Frona serves it instantly behind the same sandbox + policy machinery * Memory + Skills: facts that survive across conversations, plus reusable instruction packages you can scope per-agent * Signals: agents can pause a conversation and wait for a matching inbound (verification code, reply, class of message), then resume automatically when it arrives * Channels: web UI, Telegram, SMS today; more on the way * Phone calls: outbound voice via Twilio * API access: Personal Access Tokens for your own automations * Written in Rust: low footprint, fast streaming. Obligatory Rust mention :) Things are still being polished. Next up: a plugin framework so you can extend the platform without touching core, and more channel adapters beyond Telegram and SMS. Would love feedback from folks who actually self-host their tools. What would you want hooked up first? If you don't have access to all the frontier models, Haiku 4.5 is a solid pick for most tasks. Cheap and surprisingly capable when you give it proper tool feedback.
Original Article

Similar Articles

Phrony

Product Hunt

Phrony is a new product designed to help developers ship AI agents while reducing operational burden.

Introducing OpenAI Frontier

OpenAI Blog

OpenAI is introducing Frontier, a new enterprise platform designed to help organizations build, deploy, and manage AI agents at scale. The platform aims to bridge the gap between AI model capabilities and real-world enterprise deployment by providing agents with shared context, onboarding, feedback mechanisms, and clear permissions.

How (and why) we rewrote our production C++ frontend infrastructure in Rust

Lobsters Hottest

NearlyFreeSpeech.NET rewrote their production C++ frontend infrastructure (nfsncore) in Rust, a critical system that handles routing, caching, and access control for all incoming requests. The migration was motivated by Rust's safety guarantees, performance, ecosystem strength, and the aging C++ codebase's limitations.

Nemotron Labs: What OpenClaw Agents Mean for Every Organization

NVIDIA Blog

OpenClaw, an open-source persistent AI assistant, has become the most-starred GitHub project, sparking debate over security and autonomy. NVIDIA is collaborating to enhance security and releasing NemoClaw as a secure reference implementation.