SkCC: Portable and Secure Skill Compilation for Cross-Framework LLM Agents

Hugging Face Daily Papers

Summary

SkCC is a compilation framework that uses a strongly-typed intermediate representation to enable portable deployment of agent skills across different frameworks while enforcing security, significantly improving performance and reducing maintenance.

LLM-Agents have evolved into autonomous systems for complex task execution, with the SKILL.md specification emerging as a de facto standard for encapsulating agent capabilities. However, a critical bottleneck remains: different agent frameworks exhibit starkly different sensitivities to prompt formatting, causing up to 40% performance variation, yet nearly all skills exist as a single, format-agnostic Markdown version. Manual per-platform rewriting creates an unsustainable maintenance burden, while prior audits have found that over one third of community skills contain security vulnerabilities. To address this, we present SkCC, a compilation framework that introduces classical compiler design into agent skill development. At its core, SkIR - a strongly-typed intermediate representation - decouples skill semantics from platform-specific formatting, enabling portable deployment across heterogeneous agent frameworks. Around this IR, a compile-time Analyzer enforces security constraints via Anti-Skill Injection before deployment. Through a four-phase pipeline, SkCC reduces adaptation complexity from O(m × n) to O(m + n). Experiments on SkillsBench demonstrate that compiled skills consistently outperform their original counterparts, improving pass rates from 21.1% to 33.3% on Claude Code and from 35.1% to 48.7% on Kimi CLI, while achieving sub-10ms compilation latency, a 94.8% proactive security trigger rate, and 10-46% runtime token savings across platforms.
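The O(m × n) to O(m + n) reduction is the classic intermediate-representation argument: instead of hand-rewriting each of m skills for each of n frameworks, each skill is parsed once into the IR and each framework gets one emitter. The sketch below illustrates that shape only; `SkillIR`, `parse_skill_md`, and the emitter functions are illustrative names, not the paper's actual SkIR schema or API.

```python
from dataclasses import dataclass

# Hypothetical, minimal stand-in for the paper's SkIR: skill semantics
# captured once, independent of any framework's prompt format.
@dataclass
class SkillIR:
    name: str
    description: str
    steps: list

def parse_skill_md(text: str) -> SkillIR:
    """One front-end per skill source format (m of these in total)."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    return SkillIR(
        name=lines[0].lstrip("# "),
        description=lines[1],
        steps=[ln.lstrip("- ") for ln in lines[2:]],
    )

def emit_claude_code(ir: SkillIR) -> str:
    """One back-end per target framework (n of these in total)."""
    steps = "\n".join(f"{i + 1}. {s}" for i, s in enumerate(ir.steps))
    return f"# {ir.name}\n{ir.description}\n{steps}"

def emit_kimi_cli(ir: SkillIR) -> str:
    """A second back-end; adding a framework adds one emitter, not m rewrites."""
    steps = "\n".join(f"- {s}" for s in ir.steps)
    return f"<skill name='{ir.name}'>\n{ir.description}\n{steps}\n</skill>"

skill_md = "# summarize\nSummarize a document.\n- read input\n- write summary"
ir = parse_skill_md(skill_md)
print(emit_claude_code(ir))
print(emit_kimi_cli(ir))
```

With m skills and n frameworks, this layout needs m parsers plus n emitters (m + n components) rather than m × n hand-maintained variants; the paper's compile-time Analyzer would sit between parse and emit, vetting the IR before any emitter runs.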
Original Article

Cached at: 05/11/26, 02:53 PM

Paper page - SkCC: Portable and Secure Skill Compilation for Cross-Framework LLM Agents

Source: https://huggingface.co/papers/2605.03353



Get this paper in your agent:

hf papers read 2605.03353

Don’t have the latest CLI? curl -LsSf https://hf.co/cli/install.sh | bash


Similar Articles

SkillOS: Learning Skill Curation for Self-Evolving Agents

Hugging Face Daily Papers

This paper introduces SkillOS, a reinforcement learning framework that enables LLM agents to learn long-term skill curation policies for self-evolution, improving performance and generalization across tasks.

SkillGen: Verified Inference-Time Agent Skill Synthesis

arXiv cs.LG

This article introduces SkillGen, a multi-agent framework that synthesizes and verifies reusable inference-time skills for LLM agents by contrasting successful and failed trajectories. The method ensures skills are auditable and empirically verified for their net positive impact on agent performance.