@jerryjliu0: Agents + file sandboxes are all the rage in 2026 This is a nifty reference implementation by @itsclelia showing you…
Summary
This reference implementation demonstrates how to run an LLM agent securely within a local sandbox to process and analyze various document types using Rust, LiteParse, and microsandbox. The open-source CLI leverages OpenAI's GPT models and native bash commands to perform file retrieval and analysis in an isolated environment.
Cached at: 05/11/26, 10:47 PM
Agents + file sandboxes are all the rage in 2026. This is a nifty reference implementation by @itsclelia showing you how to run your agent over a collection of docs (PDFs, images, Office) with full access to a secure, local-first sandbox.
- Uses LiteParse for extremely fast parsing of all these docs
- Uses an agent harness + native bash commands available to the sandbox (@microsandbox) to do retrieval

Check it out!
Reference repo: https://github.com/run-llama/sandboxed-lit…
LiteParse: https://github.com/run-llama/liteparse…
run-llama/sandboxed-lit
Source: https://github.com/run-llama/sandboxed-lit
sandboxed-lit
A small Rust CLI that runs an LLM agent inside a microsandbox VM. The agent uses OpenAI's GPT models via `agent-sdk` and has tools to list files, read files (parsing PDFs / images / Office docs through `liteparse`), and run bash commands, all confined to the sandbox.
How it works
- `src/sandbox.rs` — Creates (or reuses) a microsandbox named `lit-sandbox` from the `ghcr.io/run-llama/liteparse:main` image with 2 CPUs and 1 GB of RAM, working dir `/app/`, and a bind mount at `/app/data`. Exposes:
  - `create_or_get_sandbox(volume)` — boots / attaches to the sandbox.
  - `list_files(sandbox, dir)` — recursively lists files under `/app/data`.
  - `read_file(sandbox, path)` — reads a file; routes PDFs, images, and Office docs through `lit parse` for structured extraction.
  - `run_bash_command(sandbox, cmd, args)` — runs an arbitrary command inside the sandbox and returns `{stdout, stderr}`.
- `src/agent.rs` — Wraps those functions as three `agent-sdk` tools (`list_files`, `read_file`, `bash`), registers them, builds an OpenAI-backed agent, streams events to the terminal with colored output, and runs until completion.
- `src/main.rs` — A `clap` CLI that parses the prompt and optional mount path and calls `agent::run_agent`.
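The routing rule inside `read_file` — structured documents go through `lit parse`, everything else is read as plain text — can be sketched in std-only Rust. The helper name `needs_lit_parse` and the exact extension list are assumptions for illustration, not the repo's actual code:

```rust
use std::path::Path;

/// Hypothetical helper mirroring read_file's routing decision: documents that
/// liteparse understands (PDFs, images, Office files) are sent through
/// `lit parse`; anything else is read as plain text. Extension list assumed.
fn needs_lit_parse(path: &str) -> bool {
    const PARSED: &[&str] = &["pdf", "png", "jpg", "jpeg", "docx", "xlsx", "pptx"];
    Path::new(path)
        .extension()
        .and_then(|e| e.to_str())
        .map(|e| PARSED.contains(&e.to_ascii_lowercase().as_str()))
        .unwrap_or(false)
}

fn main() {
    // A PDF is routed through `lit parse`; a plain-text file is not.
    println!("{}", needs_lit_parse("/app/data/report.pdf")); // true
    println!("{}", needs_lit_parse("/app/data/notes.txt")); // false
}
```

Matching case-insensitively on the extension keeps `report.PDF` and `report.pdf` on the same code path.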
Requirements
- Rust (edition 2024)
- A running microsandbox host (see the microsandbox docs)
- An `OPENAI_API_KEY` environment variable
Build
```shell
cargo build --release
```
Usage
```shell
sandboxed-lit --prompt "<your prompt>" [--volume <host-path>]
```
Options:
| Flag | Short | Description |
|---|---|---|
| `--prompt` | `-p` | Prompt to send to the agent (required). |
| `--volume` | `-v` | Host directory to mount at `/app/data` inside the sandbox. Defaults to the current directory. |
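The flag surface above can be sketched with a hand-rolled, std-only parser (the real CLI derives this with `clap`; the `CliArgs` struct and this loop are illustrative assumptions):

```rust
/// Hypothetical mirror of the two CLI options; the real repo uses clap.
#[derive(Debug, PartialEq)]
struct CliArgs {
    prompt: String,
    /// Host path to mount at /app/data; None means "use the current directory".
    volume: Option<String>,
}

fn parse_args(args: &[&str]) -> Result<CliArgs, String> {
    let (mut prompt, mut volume) = (None, None);
    let mut it = args.iter();
    while let Some(&flag) = it.next() {
        // Every flag takes exactly one value.
        let value = it
            .next()
            .ok_or_else(|| format!("{flag} needs a value"))?
            .to_string();
        match flag {
            "--prompt" | "-p" => prompt = Some(value),
            "--volume" | "-v" => volume = Some(value),
            other => return Err(format!("unknown flag: {other}")),
        }
    }
    Ok(CliArgs {
        prompt: prompt.ok_or_else(|| "--prompt is required".to_string())?,
        volume,
    })
}

fn main() {
    let args = parse_args(&["-p", "Summarize every PDF.", "-v", "/tmp/docs"]).unwrap();
    println!("{args:?}");
}
```

As in the real CLI, omitting `--volume` is legal (the current directory is mounted) while omitting `--prompt` is an error.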
Examples
Run with the current directory mounted:
```shell
export OPENAI_API_KEY=sk-...
sandboxed-lit -p "Summarize every PDF in the working directory."
```
Mount a specific folder:
```shell
sandboxed-lit \
  -p "List the files, then read report.pdf and extract the key findings." \
  -v /Users/me/documents
```
Files in the mounted directory are visible to the agent at /app/data/....
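That host-to-sandbox path mapping can be sketched as follows; the helper `sandbox_path` is an assumption for illustration (the actual translation is done by the bind mount, not by code in the repo):

```rust
use std::path::{Path, PathBuf};

/// Hypothetical helper: translate a file inside the mounted host directory
/// to the path the agent sees under the /app/data bind mount.
fn sandbox_path(mount: &Path, host_file: &Path) -> Option<PathBuf> {
    // Files outside the mounted directory have no sandbox-visible path.
    let rel = host_file.strip_prefix(mount).ok()?;
    Some(Path::new("/app/data").join(rel))
}

fn main() {
    let p = sandbox_path(
        Path::new("/Users/me/documents"),
        Path::new("/Users/me/documents/report.pdf"),
    );
    println!("{p:?}"); // Some("/app/data/report.pdf")
}
```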