@jerryjliu0: Agents + file sandboxes are all the rage in 2026. This is a nifty reference implementation by @itsclelia showing you…


Summary

This reference implementation demonstrates how to run an LLM agent securely within a local sandbox to process and analyze various document types using Rust, LiteParse, and microsandbox. The open-source CLI leverages OpenAI's GPT models and native bash commands to perform file retrieval and analysis in an isolated environment.

Agents + file sandboxes are all the rage in 2026. This is a nifty reference implementation by @itsclelia showing you how to run your agent over a collection of docs (PDFs, images, Office) with full access to a secure, local-first sandbox.

  • Uses LiteParse for extremely fast parsing of all these docs
  • Uses an agent harness + native bash commands available to the sandbox (@microsandbox) to do retrieval

Check it out!
Reference repo: https://github.com/run-llama/sandboxed-lit…
LiteParse: https://github.com/run-llama/liteparse…
Original Article

Cached at: 05/11/26, 10:47 PM



run-llama/sandboxed-lit

Source: https://github.com/run-llama/sandboxed-lit

sandboxed-lit

A small Rust CLI that runs an LLM agent inside a microsandbox VM. The agent uses OpenAI’s GPT models via agent-sdk and has tools to list files, read files (parsing PDFs / images / Office docs through liteparse), and run bash commands, all confined to the sandbox.

How it works

  • src/sandbox.rs — Creates (or reuses) a microsandbox named lit-sandbox from the ghcr.io/run-llama/liteparse:main image with 2 CPUs and 1 GB of RAM, working dir /app/, and a bind mount at /app/data. Exposes:
    • create_or_get_sandbox(volume) — boots / attaches to the sandbox.
    • list_files(sandbox, dir) — recursively lists files under /app/data.
    • read_file(sandbox, path) — reads a file; routes PDFs, images, and Office docs through liteparse for structured extraction.
    • run_bash_command(sandbox, cmd, args) — runs an arbitrary command inside the sandbox and returns {stdout, stderr}.
  • src/agent.rs — Wraps those functions as three agent-sdk tools (list_files, read_file, bash), registers them, builds an OpenAI-backed agent, streams events to the terminal with colored output, and runs until completion.
  • src/main.rs — A clap CLI that parses the prompt and optional mount path and calls agent::run_agent.
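As a rough illustration of the {stdout, stderr} result shape that run_bash_command returns, here is a minimal local sketch using std::process::Command. This is a hypothetical stand-in, not the microsandbox API: in sandboxed-lit the command executes inside the VM, whereas here it runs on the host.

```rust
use std::process::Command;

// Hypothetical stand-in for run_bash_command: runs a command, captures
// its output, and returns the (stdout, stderr) pair as strings.
// In sandboxed-lit the equivalent call is confined to the sandbox VM.
fn run_command(cmd: &str, args: &[&str]) -> std::io::Result<(String, String)> {
    let output = Command::new(cmd).args(args).output()?;
    Ok((
        String::from_utf8_lossy(&output.stdout).into_owned(),
        String::from_utf8_lossy(&output.stderr).into_owned(),
    ))
}

fn main() -> std::io::Result<()> {
    let (stdout, stderr) = run_command("echo", &["hello"])?;
    println!("stdout: {stdout}stderr: {stderr}");
    Ok(())
}
```

Returning both streams, rather than only stdout, lets the agent see compiler errors or missing-file messages and self-correct on the next tool call.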

Requirements

  • Rust (edition 2024)
  • A running microsandbox host (see the microsandbox docs)
  • An OPENAI_API_KEY environment variable

Build

cargo build --release

Usage

sandboxed-lit --prompt "<your prompt>" [--volume <host-path>]

Options:

  Flag       Short   Description
  --prompt   -p      Prompt to send to the agent (required).
  --volume   -v      Host directory to mount at /app/data inside the sandbox. Defaults to the current directory.
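The actual CLI is built with clap; as a dependency-free sketch of the same two flags, here is a hypothetical parser using only std::env-style arguments (illustrative only, not the repo's implementation):

```rust
use std::env;

// Hypothetical stand-in for the clap-based CLI: parses --prompt/-p
// (required) and --volume/-v (defaults to the current directory).
fn parse_args(args: &[String]) -> Result<(String, String), String> {
    let (mut prompt, mut volume) = (None, None);
    let mut i = 0;
    while i < args.len() {
        match args[i].as_str() {
            "--prompt" | "-p" => {
                prompt = Some(args.get(i + 1).ok_or("missing value for --prompt")?.clone());
                i += 2;
            }
            "--volume" | "-v" => {
                volume = Some(args.get(i + 1).ok_or("missing value for --volume")?.clone());
                i += 2;
            }
            other => return Err(format!("unknown flag: {other}")),
        }
    }
    let prompt = prompt.ok_or("--prompt is required")?;
    // Fall back to the current directory when no volume is given.
    let volume = volume.unwrap_or_else(|| ".".to_string());
    Ok((prompt, volume))
}

fn main() {
    let args: Vec<String> = env::args().skip(1).collect();
    match parse_args(&args) {
        Ok((prompt, volume)) => println!("prompt={prompt} volume={volume}"),
        Err(e) => eprintln!("error: {e}"),
    }
}
```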

Examples

Run with the current directory mounted:

export OPENAI_API_KEY=sk-...
sandboxed-lit -p "Summarize every PDF in the working directory."

Mount a specific folder:

sandboxed-lit \
  -p "List the files, then read report.pdf and extract the key findings." \
  -v /Users/me/documents

Files in the mounted directory are visible to the agent at /app/data/....
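Concretely, a host file such as /Users/me/documents/report.pdf is seen by the agent as /app/data/report.pdf. A small sketch of that mapping, assuming a hypothetical helper (this function is illustrative and not part of the repo):

```rust
use std::path::{Path, PathBuf};

// Illustrative helper: translate a file under the mounted host directory
// into the path the agent sees inside the sandbox (/app/data/...).
// Returns None if the file is outside the mounted root.
fn sandbox_path(host_root: &Path, host_file: &Path) -> Option<PathBuf> {
    let rel = host_file.strip_prefix(host_root).ok()?;
    Some(Path::new("/app/data").join(rel))
}

fn main() {
    let p = sandbox_path(
        Path::new("/Users/me/documents"),
        Path::new("/Users/me/documents/report.pdf"),
    );
    println!("{p:?}"); // Some("/app/data/report.pdf")
}
```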
