SkVM: Compiling Skills for Efficient Execution Everywhere
Abstract
SkVM is a compilation and runtime system that enables portable and efficient execution of LLM skills across different models and platforms by treating skills as code and analyzing capability requirements.
LLM agents increasingly adopt skills as reusable units of composition. While skills are shared across diverse agent platforms, current systems treat them as raw context, causing the same skill to behave inconsistently across different agents. This fragility undermines skill portability and execution efficiency. To address this challenge, we analyze 118,000 skills and draw inspiration from traditional compiler design. We treat skills as code and LLMs as heterogeneous processors. To make portability actionable, we decompose a skill's requirements into a set of primitive capabilities, and measure how well each model-harness pair supports them. Based on these capability profiles, we propose SkVM, a compilation and runtime system designed for portable and efficient skill execution. At compile time, SkVM performs capability-based compilation, environment binding, and concurrency extraction. At runtime, SkVM applies JIT code solidification and adaptive recompilation for performance optimization. We evaluate SkVM across eight LLMs of varying scales and three agent harnesses, covering SkillsBench and representative skill tasks. Results demonstrate that SkVM significantly improves task completion rates across different models and environments while reducing token consumption by up to 40%. In terms of performance, SkVM achieves up to 3.2x speedup with enhanced parallelism, and 19-50x latency reduction through code solidification.
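The capability-based compilation step described above can be sketched roughly as matching a skill's required primitive capabilities against measured model-harness profiles. This is a minimal illustrative sketch, not the paper's actual implementation: the `Skill`, `Target`, and `compile_skill` names, the 0-1 support scores, and the 0.7 threshold are all assumptions.

```python
# Hypothetical sketch of SkVM-style capability-based compilation:
# pick the model-harness pair whose measured capability profile
# best covers a skill's requirements.
from dataclasses import dataclass, field

@dataclass
class Skill:
    name: str
    # Primitive capabilities this skill requires, e.g. {"file_io", "code_exec"}
    required: set = field(default_factory=set)

@dataclass
class Target:
    """A model-harness pair with a measured support score (0.0-1.0) per capability."""
    model: str
    harness: str
    profile: dict = field(default_factory=dict)

def compile_skill(skill: Skill, targets: list, threshold: float = 0.7) -> Target:
    """Select the target that fully supports the skill's capabilities.

    A capability counts as supported if its measured score passes `threshold`;
    among fully supporting targets, prefer the highest total score.
    """
    viable = [
        t for t in targets
        if all(t.profile.get(c, 0.0) >= threshold for c in skill.required)
    ]
    if not viable:
        raise RuntimeError(f"no target supports all capabilities of {skill.name}")
    return max(viable, key=lambda t: sum(t.profile[c] for c in skill.required))

skill = Skill("report-gen", {"file_io", "code_exec"})
targets = [
    Target("small-30b", "harness-a", {"file_io": 0.9, "code_exec": 0.8}),
    Target("large", "harness-b", {"file_io": 0.95, "code_exec": 0.6}),
]
best = compile_skill(skill, targets)
```

Under this toy profile the smaller model wins because it clears the threshold on every required capability, mirroring the paper's point that portability depends on capability coverage rather than raw model size.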
Community
With the rise of frameworks like openClaw and Hermes Agents, Artificial Intelligence is evolving from simple "chatbots" into "digital employees" and partners capable of executing real-world tasks. At the heart of this transformation are Skills—the essential knowledge packages that empower agents to complete complex workflows. However, the execution effectiveness of these Skills varies significantly across different models and Agent Harnesses; in some cases, utilizing Skills can even degrade the performance of certain models.
To address these challenges, the IPADS research team from Shanghai Jiao Tong University has introduced SkVM: A Skill-oriented Language Virtual Machine. In the era of AI Agents, Skills are the code, while different LLMs represent heterogeneous processors. Drawing inspiration from the architecture of classical language virtual machines (like the JVM), the team has designed the first-ever native virtual machine for Skills. SkVM enables a "write once, run anywhere" paradigm for Skills across arbitrary models and Agent Harnesses. Skills compiled via SkVM allow smaller models (e.g., 30B) to achieve accuracy comparable to GPT-4.6-Opus, while simultaneously reducing token consumption by 40% and delivering up to a 50x increase in execution speed.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SkillCraft: Can LLM Agents Learn to Use Tools Skillfully? (2026)
- Act While Thinking: Accelerating LLM Agents via Pattern-Aware Speculative Tool Execution (2026)
- SoK: Agentic Skills - Beyond Tool Use in LLM Agents (2026)
- An Agentic Evaluation Framework for AI-Generated Scientific Code in PETSc (2026)
- KAIJU: An Executive Kernel for Intent-Gated Execution of LLM Agents (2026)
- SkillTrojan: Backdoor Attacks on Skill-Based Agent Systems (2026)
- Coverage-Guided Multi-Agent Harness Generation for Java Library Fuzzing (2026)