---
title: README
emoji: 💻
colorFrom: green
colorTo: red
sdk: streamlit
pinned: false
---
# HumanEval-V: A Lightweight Visual Understanding and Reasoning Benchmark for Evaluating LMMs through Coding Tasks

📄 Paper • 🏠 Home Page • 💻 GitHub Repository • 🏆 Leaderboard • 🤗 Dataset • 🤗 Dataset Viewer
HumanEval-V is a novel and lightweight benchmark designed to evaluate the visual understanding and reasoning capabilities of Large Multimodal Models (LMMs) through coding tasks. The dataset comprises 108 entry-level Python programming challenges, adapted from platforms like CodeForces and Stack Overflow. Each task includes visual context that is indispensable to the problem, requiring models to perceive, reason, and generate Python code solutions accordingly.
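For orientation, here is a minimal sketch of how the benchmark could be loaded with the Hugging Face `datasets` library. The dataset ID, split name, and field names below are illustrative assumptions; use the actual path linked from the Dataset badge above.

```python
from datasets import load_dataset

# Hypothetical dataset ID and split; substitute the path from the
# "Dataset" link above (this README does not spell it out).
ds = load_dataset("HumanEval-V/HumanEval-V-Benchmark", split="test")

sample = ds[0]
# Each task is expected to bundle a problem image, a textual problem
# statement / function signature, and handcrafted test cases
# (exact field names may differ from this sketch).
print(sample.keys())
```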
Key features:
- Visual coding tasks that require understanding images to solve.
- Entry-level difficulty, making it ideal for assessing the baseline performance of foundational LMMs.
- Handcrafted test cases for evaluating code correctness with the execution-based pass@k metric (see the sketch below).
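As a reference point, pass@k is commonly computed with the unbiased estimator introduced in the original HumanEval paper; the sketch below assumes HumanEval-V follows that standard definition, where n samples are generated per task and c of them pass all handcrafted tests.

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k: probability that at least one of k samples,
    drawn without replacement from n generations (c correct), passes."""
    if n - c < k:
        return 1.0  # too few failures to fill k draws, so at least one passes
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 20 generations for a task, 5 of which pass the tests.
print(pass_at_k(n=20, c=5, k=1))   # 0.25
print(pass_at_k(n=20, c=5, k=10))  # ~0.98, since 10 draws rarely miss all 5
```

The per-task scores are then averaged over all 108 tasks to obtain the benchmark-level pass@k.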