@QingQ77: Open-source self-hosted AI Security Operations Center providing alert fusion, purple team exercises, AI-assisted triage, and MITRE ATT&CK investigation analysis. https://github.com/beenuar/AiSOC
Summary
AiSOC is an open-source self-hosted AI Security Operations Center tool built on LangGraph. It integrates alert fusion, AI-assisted triage, and MITRE ATT&CK investigation analysis, supporting full-chain reasoning log playback and flexible deployment across multiple environments.
Cached at: 05/12/26, 10:55 AM
Open-source self-hosted AI Security Operations Center, offering alert fusion, purple team exercises, AI-assisted triage, and MITRE ATT&CK investigation & analysis. https://github.com/beenuar/AiSOC AiSOC packages security event ingestion, correlation analysis, AI-driven investigation, and SOC console into a single self-hosted stack. The orchestrator consists of only ~600 lines of LangGraph code, with every reasoning step written to the Investigation Ledger for on-demand replay. It features 800+ native rules plus 6,000+ imported rules, with 26 connectors covering mainstream data sources such as EDR, SIEM, cloud, identity, and SaaS. Deployment options range from Fly.io to Kubernetes to AWS Terraform. MIT licensed, zero cost for self-hosting.
beenuar/AiSOC
Source: https://github.com/beenuar/AiSOC
AiSOC
An open-source, self-hostable AI SOC. The agent’s prompts, tool calls, and rationale are logged step-by-step and replayable. MIT-licensed.
License: MIT (https://opensource.org/licenses/MIT)
Public eval harness: CI-gated · PRs welcome
Live demo (https://tryaisoc.com) · How AiSOC compares · Public eval harness · Deploy in 60 seconds · Deployment options · Architecture · Docs
The demo at tryaisoc.com is a self-hosted instance fronted by a Cloudflare Tunnel — when it’s reachable, the stack is running locally on a maintainer’s box. It can therefore go offline at any time. To run your own (in 3.5 min, with seeded data), see One-shot demo; to expose your own instance on your own domain via Cloudflare Tunnel, see Public demo on your own domain.
GitHub topics (https://github.com/beenuar/AiSOC/topics)
What AiSOC is
AiSOC is a single self-hostable stack that ingests security events, correlates them, runs AI-driven investigation, and surfaces the result in a SOC console. The agent and the substrate are MIT-licensed, so you can read, fork, or replace either of them. Three properties distinguish it from closed-source AI SOC vendors:
- Agent decisions are logged. The Investigation Ledger stores the LLM prompt, the response, the evidence cited, and the downstream tool calls for every step of every run, so any investigation can be replayed later.
- The substrate has a public eval harness in CI. Five suites gate every PR targeting main/develop: a 200-incident synthetic dataset drawn from 55 distinct templates drives the MITRE-tactic, investigation-completeness, and response-quality gates (each reporting both a per-case mean and a per-template macro, so a single broken template can't hide behind 199 working duplicates); a separately generated 1,000-alert noisy stream drives the alert-reduction gate; and a schema/coverage gate validates synthetic_telemetry.jsonl, the companion corpus of ~360 backing events across 14 log sources (Sysmon, Windows Security, M365 audit, Azure sign-in, CloudTrail, Linux auditd, journald, EDR, DNS, web access, Kubernetes audit, GitHub audit, VPN, DB audit) that connector and Sigma PRs can wire against. Alert reduction is a real measurement against the fixed alert stream; the three rubric-based suites are substrate self-consistency gates over deterministic templates. The benchmark page explains exactly which is which.
- It runs entirely on your infrastructure. No callbacks to a vendor cloud and no data exfiltration for "model improvement." The orchestrator is a ~600-line LangGraph in services/agents/ — small enough to read end-to-end, swap models in, and patch.
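To make the ledger idea concrete, here is a minimal sketch of the shape such a per-step record could take. The class and field names are illustrative assumptions, not AiSOC's actual schema; the real implementation lives in services/agents/.

```python
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone


@dataclass
class LedgerStep:
    """One replayable step: what the agent was asked, what it answered,
    which evidence it cited, and which tools it then called."""
    step: int
    prompt: str
    response: str
    evidence: list = field(default_factory=list)
    tool_calls: list = field(default_factory=list)
    ts: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())


class InvestigationLedger:
    """Append-only log per case; replay() re-emits every step in order."""

    def __init__(self, case_id: str):
        self.case_id = case_id
        self.steps: list[LedgerStep] = []

    def record(self, **kw) -> LedgerStep:
        entry = LedgerStep(step=len(self.steps) + 1, **kw)
        self.steps.append(entry)
        return entry

    def replay(self) -> str:
        return json.dumps([asdict(s) for s in self.steps], indent=2)


ledger = InvestigationLedger("INC-RT-001")
ledger.record(prompt="Classify alert burst on host WS-042",
              response="Consistent with ransomware staging",
              evidence=["sysmon: vssadmin delete shadows"],
              tool_calls=["aisoc_query_detections"])
print(ledger.replay())
```

The key property is that the record is append-only and captures prompt, response, evidence, and tool calls together, so a replay needs no access to the original model.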
How AiSOC compares
| Capability | AiSOC | Wazuh | Splunk ES | Closed-source AI SOC |
|---|---|---|---|---|
| Open-source license | MIT | GPL-2 | proprietary | proprietary |
| Self-hostable | yes | yes | enterprise-only | cloud-only |
| Autonomous AI investigation | LangGraph | no | partial (Splunk AI) | yes |
| Agent decision audit trail | public Investigation Ledger | n/a | n/a | not published |
| Public substrate eval harness | CI-gated, reproducible, with synthetic telemetry corpus + per-template macros | n/a | n/a | not published |
| Detection content | 800 native + 6,000+ imported (Sigma / Splunk / Chronicle / CAR) | 1,200+ rules | 1,000+ apps | curated |
| Plugin SDK | Python / TypeScript / Go | YAML rules only | apps | proprietary |
| Data residency | your infra | your infra | partial | vendor cloud |
| Pricing | $0 (self-host) | $0 (self-host) | per ingest GB | enterprise |
Closed-source AI SOC vendors ship working products. AiSOC’s contribution is making the agent itself open, the per-step decision trail readable, and the substrate gated by a public eval harness on every PR targeting main / develop.
Deploy in 60 seconds
Four frictionless paths to a running, seeded AiSOC instance with INC-RT-001 (the LockBit 3.0 ransomware showcase) already mid-investigation when you land on it. Each path runs alembic upgrade head and python -m app.scripts.seed_demo as part of its lifecycle, so the seeded data is present without a manual step.
0. One-click installer — zero prerequisites
Don’t have Docker, Node, pnpm, or even git installed? Use the bootstrap installer. It detects your OS, installs everything idempotently, clones the repo, and launches the demo.
```shell
# Linux + macOS (one-liner):
curl -fsSL https://raw.githubusercontent.com/beenuar/AiSOC/main/install.sh | bash

# Windows (PowerShell as Administrator):
iwr -useb https://raw.githubusercontent.com/beenuar/AiSOC/main/install.ps1 | iex
```
The installer covers Ubuntu/Debian (apt), Fedora/RHEL (dnf), Arch (pacman), openSUSE (zypper), Alpine (apk), macOS (brew), and Windows (winget). On Windows it also handles WSL2 enablement for Docker Desktop. Re-running is safe — every step is idempotent. To uninstall later, ./uninstall.sh (Linux/macOS) or .\uninstall.ps1 (Windows). See the Quick install guide for flags, troubleshooting, and what gets installed.
1. Render — one click, hosted
Deploy to Render (https://render.com/deploy?repo=https://github.com/beenuar/AiSOC)
Render reads render.yaml at the repo root, provisions Postgres + Redis, and brings up the demo profile (api, agents, web, realtime). The preDeployCommand migrates and seeds, so the canonical INC-RT-001 is present on first boot. Sleep-on-idle on the hobby tier; flip to standard instances for production. Demo runs deterministic-mode by default — no OpenAI/Anthropic key needed. See infra/render/README.md for cost, scaling, and BYO-LLM details.
2. Docker Compose — one command, local
```shell
git clone https://github.com/beenuar/AiSOC.git && cd AiSOC && pnpm aisoc:demo
```
Pulls prebuilt ghcr.io/beenuar/* images, brings up the slim demo profile (Postgres, Redis, Kafka, api, agents, realtime, web), runs the seeder as a one-shot container, and opens your browser at /cases/INC-RT-001?tab=ledger with the seeded demo user already auto-logged-in. Idempotent: re-running is a no-op against a seeded volume. Target on a clean Mac with a warm Docker daemon: clone-to-investigation in ~3.5 min warm / ~5 min cold. Stop with pnpm aisoc:demo:down. See One-shot demo for the timing breakdown and what you'll see on screen.
3. Fly.io — one script, hosted, persistent
```shell
git clone https://github.com/beenuar/AiSOC.git && cd AiSOC
./infra/fly/fly-demo-deploy.sh --provision   # first run: also creates Postgres + Upstash
./infra/fly/fly-demo-deploy.sh               # subsequent runs: deploys updates
```
Idempotent shell wrapper around flyctl that deploys four apps (api, agents, web, realtime) plus managed Postgres + Upstash Redis, wires the *.internal 6PN DNS between them, runs migrations + seeding as a release_command, and issues TLS certs for your domain. ~$14/mo at idle. Time-to-first-investigation budget: <60s from the click, since the seeder pre-warms a running investigation so the deeplink lands inside the TTFI budget regardless of cold-start. See infra/fly/README.md for DNS prerequisites and per-app sizing.
Production-grade install? Skip the demo paths above and use infra/helm/ (Kubernetes) or infra/terraform/ (AWS). Both bring up the full storage tier — ClickHouse, Kafka, OpenSearch, Neo4j, Qdrant — which the demo paths keep gated behind compose profiles.
Deployment options
Each target ships a tested config in infra/:
| Platform | Status | Config | Notes |
|---|---|---|---|
| Fly.io | first-class | infra/fly/ | 4 apps, ~$14/mo. See infra/fly/README.md. |
| Render | supported | render.yaml + infra/render/ | Sleep-on-idle, hobbyist tier. One-click via blueprint button. |
| Railway | supported | infra/railway/railway.toml | PaaS, pay-as-you-go. |
| Coolify | supported | docker-compose.yml | Self-hosted on your own VPS. See infra/coolify/README.md. |
| Kubernetes / Helm | first-class | infra/helm/ | helm install aisoc ./infra/helm/aisoc |
| AWS / Terraform | first-class | infra/terraform/ | cd infra/terraform && terraform apply |
The Render, Railway, and Coolify configs deploy the lean demo profile: api, agents, web, realtime, Postgres, and Redis. ClickHouse, Kafka, OpenSearch, Neo4j, and Qdrant are gated behind compose profiles. For a production-grade install with the full storage tier, use Helm or Terraform.
Use it from Claude, Cursor, or Cody
AiSOC ships an MCP server (https://modelcontextprotocol.io) so analysts can query alerts, run agent investigations, and replay every step the agent took without leaving the IDE or chat.
```shell
# Claude Desktop / Cursor / Continue / Cody
npx -y @aisoc/mcp install --host claude \
  --aisoc-url https://aisoc.your-company.com \
  --api-key aisoc_pat_xxxxxxxxxxxx
```
The server exposes 11 tools — discovery (aisoc_list_alerts, aisoc_list_cases, aisoc_query_detections), deep-dive (aisoc_get_case, aisoc_get_investigation), and the action/replay set (aisoc_run_investigation, aisoc_replay_decision, aisoc_explain_step) for walking the agent decision ledger step-by-step. Full guide: docs/integrations/mcp. Source: services/mcp/. npm: @aisoc/mcp.
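Under the hood, MCP clients invoke server tools over JSON-RPC 2.0 using the `tools/call` method. The sketch below builds such a request for one of the tools listed above; the argument name (`case_id`) is an illustrative assumption, and the real parameter schema is documented in docs/integrations/mcp.

```python
import json


def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build a JSON-RPC 2.0 `tools/call` request — the wire format MCP
    clients (Claude Desktop, Cursor, ...) use to invoke a server tool."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })


# Hypothetical argument shape — see docs/integrations/mcp for the real one.
req = mcp_tool_call(1, "aisoc_get_case", {"case_id": "INC-RT-001"})
print(req)
```

The same envelope carries every one of the 11 tools; only `params.name` and `params.arguments` change between a discovery call and a replay call.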
What’s in the box
AiSOC bundles the components a SOC normally pieces together from separate vendors:
- Connect data sources in three clicks — a 50-connector click-and-connect catalog spans EDR/XDR (CrowdStrike Falcon, SentinelOne, Microsoft Defender XDR, Palo Alto Cortex XDR, Cortex XSIAM, VMware Carbon Black, Trellix Helix, Trend Vision One), SIEM (Splunk, Microsoft Sentinel, Elastic, Sumo Logic, Datadog Cloud SIEM, Google Chronicle, Rapid7 InsightIDR), cloud + CNAPP (AWS Security Hub, AWS GuardDuty, AWS CloudTrail, AWS VPC Flow Logs, Azure Activity, Azure Defender, GCP Cloud Audit, GCP SCC, Wiz, Lacework, Tenable, Prisma Cloud, Orca), identity (Okta, Microsoft Entra, Auth0, Duo Security, 1Password), SaaS (Microsoft 365 audit, Google Workspace, Cloudflare, Proofpoint, Mimecast, ServiceNow, Jira, Slack audit, Salesforce, Email inbox), VCS (GitHub, Snyk), endpoint fleet (osctrl, FleetDM for fleet-wide osquery), container + orchestration (Kubernetes audit logs via apiserver webhook or audit.log tail), and network (Tailscale, Zscaler, Cisco Umbrella). Each connector renders a schema-driven form, runs a live Test connection round-trip before save, encrypts every secret with the application-layer CredentialVault (Fernet AES-128-CBC + HMAC-SHA256), and starts polling on a per-instance schedule. Walkthrough: docs/connectors. Threat model + key rotation: docs/operations/credentials.
- Own your endpoint telemetry — the first-party aisoc-osquery-tls FastAPI service (services/osquery-tls/) and aisoc-direct lightweight agent connector ship a self-hosted osquery TLS plugin, FleetDM-compatible config/log endpoints, and a direct-from-agent ingest path that bypasses third-party SaaS. A built-in file integrity monitoring (FIM) endpoint (services/osquery-tls/app/api/v1/endpoints/fim.py) ingests file_events and synthesizes alerts on writes to /etc/passwd, /etc/shadow, sshd configs, sudoers, and Windows registry hives; bundled osquery packs cover incident response, OSquery-ATT&CK, and FIM out of the box. 16 native osquery detections (detections/endpoint/osquery-*.yaml, IDs det-endpoint-281..296) cover credential access, persistence, lateral movement, defense evasion, and discovery — paired with positive/negative test fixtures (detections/fixtures/osquery_*.json) and CI-gated against the Detection Validation workflow. A live-query playbook step (osquery_live_query) lets responders push allowlisted distributed queries to single hosts or fleet-wide via osctrl/FleetDM with HMAC-signed ChatOps approval. 5 custom Go-based virtual tables (services/osquery-extensions/) extend the agent with aisoc_browser_extensions, aisoc_kernel_modules, aisoc_attck_persistence, aisoc_pending_actions, and aisoc_alert_cache for richer endpoint visibility and bidirectional response. Walkthroughs: docs/connectors/osctrl, docs/connectors/fleetdm.
- Ingest events from any connector into a Kafka spine.
- Correlate them in real time with deduplication, ML scoring, per-alert confidence scoring, and Sigma/YARA detection.
- Roll up signal onto entities — Risk-Based Alerting accumulates time-decayed risk points on the user, host, IP, and domain each alert touches, promotes them to entity-incidents at a tunable threshold, and surfaces an entity-centric queue in the alerts UI. Hits the published 2026 KPI bar of ≥ 50:1 alert-to-incident ratio (CI-gated in services/fusion/tests/test_entity_risk.py).
- Search across SIEMs — Federated Search fans out a single query to connected Splunk, Microsoft Sentinel, and Elastic instances, translating the query into each target's native dialect (SPL, KQL, ES|QL) via pluggable translators in services/connectors/app/federated/.
- Manage detections as code — Detection-as-Code (DAC) provides a propose → review → eval-gate → promote lifecycle for detection rules. Every proposal carries an eval result from the harness; candidates that regress MITRE accuracy cannot be promoted. Endpoints in services/api/app/api/v1/endpoints/detection_proposals.py.
- Run hypothesis-driven hunts on a schedule — Hunt-as-Code YAML definitions in hunts/ declare a hypothesis, MITRE ATT&CK tags, log sources, indicators, and a cron schedule. The hunt engine in services/agents/app/hunt/ loads the corpus at startup, runs hunts on their schedule, and stores findings in the DB.
- Track detection drift — the Purple Team service takes ATT&CK coverage snapshots and diffs them over time, so you can see which techniques gained or lost coverage between releases. Implementation in services/purple-team/app/services/drift.py.
- Verify ChatOps actions — HMAC-signed approval prompts are sent to Slack or Teams before high-impact SOAR actions execute, with a time-limited verification token. Implementation in services/actions/app/executors/chatops.py.
- Benchmark against adversary LLMs — a deterministic attacker-LLM mutator generates adversary incidents to test detection resilience. Script: scripts/generate_adversary_incidents.py; eval: services/agents/tests/test_adversary_eval.py.
- Enrich every signal with threat intelligence from TAXII 2.1, MISP, OTX, and CISA KEV.
- Reason about attacks via a LangGraph multi-agent system grounded in MITRE ATT&CK.
- **Detect devi
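The time-decayed Risk-Based Alerting roll-up described above can be sketched as exponential decay plus a promotion threshold. The half-life, threshold value, and function names below are illustrative assumptions, not AiSOC's tuned defaults (those are exercised in services/fusion/tests/test_entity_risk.py).

```python
import math

HALF_LIFE_H = 24.0   # assumed decay half-life in hours
PROMOTE_AT = 100.0   # assumed (tunable) promotion threshold


def decayed(points: float, age_hours: float) -> float:
    """An alert's risk points decay exponentially with its age."""
    return points * math.exp(-math.log(2) * age_hours / HALF_LIFE_H)


def entity_risk(alerts: list[tuple[float, float]]) -> float:
    """Sum decayed risk over the (points, age_hours) alerts touching one entity."""
    return sum(decayed(p, age) for p, age in alerts)


def should_promote(alerts: list[tuple[float, float]]) -> bool:
    """Promote the entity to an entity-incident once accumulated,
    decayed risk crosses the threshold."""
    return entity_risk(alerts) >= PROMOTE_AT


fresh = [(60.0, 0.0), (50.0, 0.0)]    # two fresh high-risk alerts: 110 points
stale = [(60.0, 48.0), (50.0, 48.0)]  # same alerts two days old: 110 * 0.25
print(should_promote(fresh), should_promote(stale))  # → True False
```

The decay is what keeps a slow trickle of low-grade alerts from ever promoting an entity, while a burst of fresh signal crosses the threshold immediately — the mechanism behind the ≥ 50:1 alert-to-incident ratio claim.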
Similar Articles
@yaojingang: Open-sourced a website scanning skill: yao-websecurity-skill. I've learned that at least three public companies have deployed GEOFlow, and many friends have done various secondary developments based on this system, including commercial SaaS versions. Its security issues need to be taken seriously. Additionally, more and more...
Open-sourced yao-websecurity-skill, an AI-based website security audit skill. It includes 275 security checks, supports static and dynamic audit modes, and automatically generates security scoring reports to help developers discover and fix security risks.
@bozhou_ai: There's a pain point when using AI to write projects: frontend is too easy, backend is too hard. You can generate a page by just saying a sentence, but once you need to create a database, set up user login, file storage, or manage permissions, ordinary people get stuck right away. InsForge is here to fill this gap. YC S26 incubated, a platform specifically for AI programming agents to handle the backend, Apache…
InsForge is an open-source backend platform designed specifically for AI programming agents, providing databases, user authentication, file storage, edge computing, AI gateway, and one-click deployment, enabling AI to independently develop full-stack applications.
@VincentLogic: Still running AI tools by opening multiple terminal windows? That’s torture. I found an open-source, free 'Agent Operating System'— AionUi. Previously, using Claude Code, OpenClaw, Hermes meant running each independently; now, it can host top-tier agents like Gemini, Claude Code, Codex, and Qwen Code in a single interface, just like Windows hosts apps. G…
AionUi is an open-source, free AI agent operating system designed to centrally manage top AI models like Claude Code and Gemini. It supports multi-agent collaboration, centralized data management, and remote task automation.
@mylifcc: The ultimate AI security red teaming tool is here! I just discovered an incredibly hardcore open-source project — DeepTeam! Produced by Confident AI, it is an LLM Red Teaming framework built on DeepEval, specifically designed to 'hack' your own large models: 50+ real-world vulnerabilities…
Confident AI has released DeepTeam, an open-source LLM red teaming framework that supports 50+ vulnerability detections and 20+ adversarial attacks, aimed at helping developers safely test large language models.
@geekbb: No more digging through terminal tabs when running multiple AI coding agents simultaneously. The kanban board gives you an instant overview of who is working, who is waiting, and who is done. https://github.com/lanes-sh/app
Lanes is a native macOS desktop application that serves as a mission control for managing multiple AI coding agents, featuring an issue board, live embedded terminals, and Git integration to streamline developer workflows.