Abstract
Computer-use agents often rely on expensive multimodal models for every interaction, but a more efficient approach uses lightweight policies with risk detection monitors to escalate to stronger models only when needed.
Computer-use agents provide a promising path toward general software automation because they can interact directly with arbitrary graphical user interfaces instead of relying on brittle, application-specific integrations. Despite recent advances in benchmark performance, strong computer-use agents remain expensive and slow in practice, since most systems invoke large multimodal models at nearly every interaction step. We argue that this uniform allocation of compute is fundamentally inefficient for long-horizon GUI tasks. Such trajectories are highly heterogeneous: many steps are routine and can be handled reliably by smaller, cheaper policies, while errors tend to concentrate at a relatively small number of high-risk moments. Across computer-use benchmarks, these failures repeatedly take two forms: progress stalls, where the agent loops, repeats ineffective actions, or fails to make meaningful progress, and silent semantic drift, where the agent continues taking locally plausible actions after already deviating from the user's true goal. To address this inefficiency, we propose an event-driven, step-level cascade for computer-use agents that runs a small policy by default and escalates to a stronger model only when lightweight learned monitors detect elevated risk. Our framework combines two complementary signals: a Stuck Monitor that detects degraded progress from recent reasoning-action history and triggers recovery, and a Milestone Monitor that identifies semantically meaningful checkpoints where sparse verification is most informative for catching drift. This design turns always-on frontier-model inference into adaptive, on-demand compute allocation over the course of an evolving interaction. The framework is modular and deployment-oriented: it can be layered on top of existing computer-use agents without changing the underlying agent architecture or retraining the large model.
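The control loop described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: all names (`SmallPolicy` via a plain callable, `StuckMonitor`, `MilestoneMonitor`, `run_episode`) and both risk heuristics (repeated-action detection for stalls, a keyword check for milestones) are assumptions standing in for the learned monitors the paper trains.

```python
"""Hedged sketch of an event-driven, step-level cascade: run a small
policy by default, escalate to a frontier model only when lightweight
monitors flag elevated risk. Monitor logic here is a toy stand-in for
the learned monitors described in the abstract."""

from collections import deque


class StuckMonitor:
    """Flags degraded progress. Toy heuristic: the last `window`
    proposed actions were all identical (a loop / ineffective repeat)."""

    def __init__(self, window=4):
        self.history = deque(maxlen=window)

    def at_risk(self, action):
        self.history.append(action)
        return (len(self.history) == self.history.maxlen
                and len(set(self.history)) == 1)


class MilestoneMonitor:
    """Flags semantically meaningful checkpoints where sparse
    verification is most informative. Toy heuristic: keyword match."""

    KEYWORDS = ("submit", "save", "confirm", "purchase", "delete")

    def is_milestone(self, action):
        return any(k in action.lower() for k in self.KEYWORDS)


def run_episode(small_policy, frontier_model, observations):
    """Step through an episode, escalating only on monitor triggers."""
    stuck, milestone = StuckMonitor(), MilestoneMonitor()
    trace = []
    for obs in observations:
        action = small_policy(obs)          # cheap default policy
        if stuck.at_risk(action) or milestone.is_milestone(action):
            action = frontier_model(obs)    # on-demand escalation
        trace.append(action)
    return trace
```

The key design point is that escalation is event-driven rather than scheduled: frontier-model calls are triggered by risk signals computed from the cheap policy's own behavior, so routine steps incur no large-model cost at all.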
Community
We introduce an event-driven, step-level cascade for efficient computer-use agents. Instead of invoking a frontier VLM at every GUI step, our framework runs a smaller policy by default and escalates only when lightweight monitors detect progress stalls or milestone checkpoints that require verification. Across OSWorld and WebArena, this adaptive routing recovers much of the performance of always-large agents while substantially reducing large-model usage, latency, and cost.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- OSExpert: Computer-Use Agents Learning Professional Skills via Exploration (2026)
- AgentCollab: A Self-Evaluation-Driven Collaboration Paradigm for Efficient LLM Agents (2026)
- Ares: Adaptive Reasoning Effort Selection for Efficient LLM Agents (2026)
- Adaptive Vision-Language Model Routing for Computer Use Agents (2026)
- Anticipatory Planning for Multimodal AI Agents (2026)
- AgentHazard: A Benchmark for Evaluating Harmful Behavior in Computer-Use Agents (2026)
- Terminal Agents Suffice for Enterprise Automation (2026)