CharXiv: Charting Gaps in Realistic Chart Understanding in Multimodal LLMs
Abstract
Chart understanding plays a pivotal role when applying Multimodal Large Language Models (MLLMs) to real-world tasks such as analyzing scientific papers or financial reports. However, existing datasets often focus on oversimplified and homogeneous charts with template-based questions, leading to an over-optimistic measure of progress. We demonstrate that although open-source models can appear to outperform strong proprietary models on these benchmarks, a simple stress test with slightly different charts or questions can degrade performance by up to 34.5%. In this work, we propose CharXiv, a comprehensive evaluation suite involving 2,323 natural, challenging, and diverse charts from arXiv papers. CharXiv includes two types of questions: 1) descriptive questions about examining basic chart elements and 2) reasoning questions that require synthesizing information across complex visual elements in the chart. To ensure quality, all charts and questions are handpicked, curated, and verified by human experts. Our results reveal a substantial, previously underestimated gap between the reasoning skills of the strongest proprietary model (i.e., GPT-4o), which achieves 47.1% accuracy, and the strongest open-source model (i.e., InternVL Chat V1.5), which achieves 29.2%. All models lag far behind human performance of 80.5%, underscoring weaknesses in the chart understanding capabilities of existing MLLMs. We hope CharXiv facilitates future research on MLLM chart understanding by providing a more realistic and faithful measure of progress. Project page and leaderboard: https://charxiv.github.io/
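Because CharXiv reports descriptive and reasoning accuracy separately, results are typically tallied per question type. The sketch below shows one way to do that in plain Python; the record fields (question_type, correct) are hypothetical placeholders for illustration, not the official evaluation code, which is available from the project page.

```python
# Minimal sketch: per-question-type accuracy for CharXiv-style results.
# The record layout (question_type, correct) is an assumed illustration only;
# use the official evaluation scripts from the project page for real scoring.
from collections import defaultdict

def per_type_accuracy(records):
    """Return accuracy split into descriptive vs. reasoning questions."""
    totals, hits = defaultdict(int), defaultdict(int)
    for r in records:
        totals[r["question_type"]] += 1
        hits[r["question_type"]] += int(r["correct"])
    return {t: hits[t] / totals[t] for t in totals}

# Toy usage with made-up judgments:
records = [
    {"question_type": "descriptive", "correct": True},
    {"question_type": "descriptive", "correct": False},
    {"question_type": "reasoning", "correct": False},
]
print(per_type_accuracy(records))  # {'descriptive': 0.5, 'reasoning': 0.0}
```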
Community
Are Multimodal Large Language Models really as good at chart understanding as existing benchmarks such as ChartQA suggest?
Our CharXiv benchmark suggests NO!
Humans achieve 80+% correctness.
Sonnet 3.5 outperforms GPT-4o by 10+ points, reaching ~60% correctness.
Open-weight models are capped at ~30% correctness.
Leaderboard: https://charxiv.github.io/#leaderboard
Preprint: https://arxiv.org/abs/2406.18521
CharXiv is 100% handcrafted with rigorous human validation, and it reveals substantial gaps between Multimodal Large Language Models and humans in chart understanding.
Kudos @zwcolin and team. I've featured this paper in my AI research newsletter www.aitidbits.ai/p/july-4th-2024#:~:text=of%20human%20performance-,Princeton,-develops
Looking forward to more novel papers and methods.