arxiv:2407.01284

We-Math: Does Your Large Multimodal Model Achieve Human-like Mathematical Reasoning?

Published on Jul 1
Submitted by dongguanting on Jul 2
#1 Paper of the day

Abstract

Visual mathematical reasoning, as a fundamental visual reasoning ability, has received widespread attention from the Large Multimodal Models (LMMs) community. Existing benchmarks, such as MathVista and MathVerse, focus more on the result-oriented performance but neglect the underlying principles in knowledge acquisition and generalization. Inspired by human-like mathematical reasoning, we introduce WE-MATH, the first benchmark specifically designed to explore the problem-solving principles beyond end-to-end performance. We meticulously collect and categorize 6.5K visual math problems, spanning 67 hierarchical knowledge concepts and five layers of knowledge granularity. We decompose composite problems into sub-problems according to the required knowledge concepts and introduce a novel four-dimensional metric, namely Insufficient Knowledge (IK), Inadequate Generalization (IG), Complete Mastery (CM), and Rote Memorization (RM), to hierarchically assess inherent issues in LMMs' reasoning process. With WE-MATH, we conduct a thorough evaluation of existing LMMs in visual mathematical reasoning and reveal a negative correlation between solving steps and problem-specific performance. We confirm the IK issue of LMMs can be effectively improved via knowledge augmentation strategies. More notably, the primary challenge of GPT-4o has significantly transitioned from IK to IG, establishing it as the first LMM advancing towards the knowledge generalization stage. In contrast, other LMMs exhibit a marked inclination towards Rote Memorization - they correctly solve composite problems involving multiple knowledge concepts yet fail to answer sub-problems. We anticipate that WE-MATH will open new pathways for advancements in visual mathematical reasoning for LMMs. The WE-MATH data and evaluation code are available at https://github.com/We-Math/We-Math.

Community

Paper author Paper submitter
edited Jul 2

Visual mathematical reasoning, as a fundamental visual reasoning ability, has received widespread attention from the Large Multimodal Models (LMMs) community. Existing benchmarks focus more on result-oriented performance but neglect the underlying principles of knowledge acquisition and generalization.

[figure]

Inspired by human-like mathematical reasoning, we introduce WE-MATH, the first benchmark specifically designed to explore problem-solving principles beyond end-to-end performance. As shown in the figure above, we meticulously collect and categorize 6.5K visual math problems, spanning 67 hierarchical knowledge concepts and five layers of knowledge granularity.

[figure]

We first decompose composite problems into sub-problems according to the required knowledge concepts and introduce a novel four-dimensional metric, namely Insufficient Knowledge (IK), Inadequate Generalization (IG), Complete Mastery (CM), and Rote Memorization (RM), to hierarchically assess inherent issues in LMMs' reasoning process (as shown in the figure above).
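
As a rough illustration, the four categories can be read off from whether the composite problem and its sub-problems are answered correctly. The sketch below is an inference from the category descriptions above, not the official evaluation code (that is in the GitHub repo):

```python
# Hypothetical sketch of the four-dimensional metric (IK/IG/CM/RM).
# NOT the official We-Math evaluation code; the paper's exact rules may
# differ. Assumes a composite problem has been decomposed into
# sub-problems and we know which answers the model got right.

def classify(composite_correct: bool, sub_correct: list[bool]) -> str:
    """Assign one of the four diagnostic categories to a model's attempt."""
    all_subs = all(sub_correct)
    if composite_correct and all_subs:
        return "CM"  # Complete Mastery: composite and all sub-problems solved
    if composite_correct:
        return "RM"  # Rote Memorization: composite solved, sub-problem(s) failed
    if all_subs:
        return "IG"  # Inadequate Generalization: knows the pieces, cannot compose them
    return "IK"      # Insufficient Knowledge: fails one or more sub-problems


# Example: the composite answer is right but a sub-problem is missed.
print(classify(True, [True, False]))  # -> "RM"
```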

[figure]

With WE-MATH, we conduct a thorough evaluation of existing LMMs in visual mathematical reasoning and reveal a negative correlation between solving steps and problem-specific performance. We confirm that the IK issue of LMMs can be effectively improved via knowledge augmentation strategies. More notably, the primary challenge of GPT-4o has significantly transitioned from IK to IG, establishing it as the first LMM advancing towards the knowledge generalization stage. In contrast, other LMMs exhibit a marked inclination towards Rote Memorization: they correctly solve composite problems involving multiple knowledge concepts, yet fail to answer sub-problems. We anticipate that WE-MATH will open new pathways for advancements in visual mathematical reasoning for LMMs.

Paper author Paper submitter
edited Jul 2

Detailed Results:

  1. Main Results

[figure]

  2. Knowledge-based Reasoning Analysis

[figure]

  3. Quantitative Analysis

[figure]

Hi @dongguanting, congrats on this work!

I see your dataset is currently hosted here: https://github.com/We-Math/We-Math/tree/main/data, would you be up for pushing it to the hub and linking it to this paper?

See here for a guide: https://huggingface.co/docs/datasets/loading#json.
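
For example, a minimal sketch following that guide (the file name below is hypothetical; use the actual files from the data folder):

```python
# Sketch of loading the JSON files directly, per the guide linked above.
# "data/testmini.json" is a hypothetical path; substitute the real file
# names from https://github.com/We-Math/We-Math/tree/main/data.
from datasets import load_dataset

ds = load_dataset("json", data_files="data/testmini.json")
print(ds["train"][0])  # JSON loading puts everything in a "train" split by default
```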

Let me know if you need any help.

Kind regards,

Niels

Paper author

Thanks, we will push to the hub.

Paper author Paper submitter

Our dataset is now available on the Hugging Face Hub: https://huggingface.co/datasets/We-Math/We-Math
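
A minimal loading sketch (assuming the default configuration resolves; check the dataset card for the actual configurations and splits):

```python
# Sketch of loading the dataset from the Hub. The repo id comes from the
# link above; split/configuration names are assumptions, so inspect the
# returned DatasetDict to see what is actually available.
from datasets import load_dataset

ds = load_dataset("We-Math/We-Math")
print(ds)  # shows the available splits and features
```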


Thanks, would it be possible to link it to the paper? https://huggingface.co/datasets/We-Math/We-Math/discussions/1
