Aguvis: Unified Pure Vision Agents for Autonomous GUI Interaction
Abstract
Graphical User Interfaces (GUIs) are critical to human-computer interaction, yet automating GUI tasks remains challenging due to the complexity and variability of visual environments. Existing approaches often rely on textual representations of GUIs, which introduce limitations in generalization, efficiency, and scalability. In this paper, we introduce Aguvis, a unified pure vision-based framework for autonomous GUI agents that operates across various platforms. Our approach leverages image-based observations, grounds natural language instructions to visual elements, and employs a consistent action space to ensure cross-platform generalization. To address the limitations of previous work, we integrate explicit planning and reasoning within the model, enhancing its ability to autonomously navigate and interact with complex digital environments. We construct a large-scale dataset of GUI agent trajectories, incorporating multimodal reasoning and grounding, and employ a two-stage training pipeline that first focuses on general GUI grounding, followed by planning and reasoning. Through comprehensive experiments, we demonstrate that Aguvis surpasses previous state-of-the-art methods in both offline and real-world online scenarios, achieving, to our knowledge, the first fully autonomous pure vision GUI agent capable of performing tasks independently without collaboration with external closed-source models. We open-source all datasets, models, and training recipes to facilitate future research at https://aguvis-project.github.io/.
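To make the abstract's "image-based observations" and "consistent action space" more concrete, below is a minimal sketch of what a pure-vision agent loop with a normalized, coordinate-based action space might look like. This is not the authors' released implementation: all names (`AguvisAgentSketch`, `predict_action`, `env.screenshot`, `env.execute`) and the action schema are illustrative assumptions.

```python
# Illustrative sketch of a pure-vision GUI agent loop (not the released Aguvis code).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Action:
    """A platform-agnostic action expressed in normalized screen coordinates."""
    kind: str                   # e.g. "click", "type", "scroll", "stop"
    x: Optional[float] = None   # normalized [0, 1] horizontal coordinate
    y: Optional[float] = None   # normalized [0, 1] vertical coordinate
    text: Optional[str] = None  # text to type when kind == "type"

class AguvisAgentSketch:
    """Hypothetical wrapper around a vision-language model that maps
    (screenshot, instruction, history) -> explicit reasoning + next action."""

    def __init__(self, model):
        self.model = model  # a VLM that consumes image + text

    def step(self, screenshot, instruction: str, history: list[str]) -> Action:
        # The model first produces explicit reasoning (planning), then an action
        # grounded directly in pixel/normalized coordinates -- no accessibility
        # tree or HTML text representation is required.
        reasoning, action = self.model.predict_action(
            image=screenshot, instruction=instruction, history=history
        )
        history.append(reasoning)
        return action

def run_episode(agent, env, instruction: str, max_steps: int = 15) -> list[str]:
    """Drive the environment using screenshots only, until the agent stops."""
    history: list[str] = []
    obs = env.screenshot()
    for _ in range(max_steps):
        action = agent.step(obs, instruction, history)
        if action.kind == "stop":
            break
        obs = env.execute(action)  # platform adapter maps normalized coords to events
    return history
```

Expressing actions in normalized coordinates rather than platform-specific element IDs or accessibility-tree nodes is one plausible way a single action space could transfer across web, desktop, and mobile environments, in line with the cross-platform claim in the abstract.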
Community
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Ponder&Press: Advancing Visual GUI Agent towards General Computer Control (2024)
- Large Language Model-Brained GUI Agents: A Survey (2024)
- Improved GUI Grounding via Iterative Narrowing (2024)
- EDGE: Enhanced Grounded GUI Understanding with Enriched Multi-Granularity Synthetic Data (2024)
- ShowUI: One Vision-Language-Action Model for GUI Visual Agent (2024)
- AutoGLM: Autonomous Foundation Agents for GUIs (2024)
- Visual Contexts Clarify Ambiguous Expressions: A Benchmark Dataset (2024)
Hi,
Thank you for the great work!
Are the models and source code available on HF / GitHub?
Currently, I can't find them.
Hello, thank you for the great work! Are the models and source code available on HF / GitHub? Currently, I can't find them. It feels like the work was released in a bit of a rush... just reserving a spot here.