Abstract
We study the depth of grade-school math (GSM) problem-solving capabilities of LLMs. To this end, we evaluate their performance on pairs of existing math word problems chained together so that the answer to the second problem depends on correctly answering the first. Our findings reveal a significant reasoning gap in most LLMs, that is, a performance difference between solving the compositional pairs and solving each question independently. This gap is more pronounced in smaller, more cost-efficient, and math-specialized models. Moreover, instruction-tuning recipes and code generation have varying effects across LLM sizes, while finetuning on GSM can lead to task overfitting. Our analysis indicates that large reasoning gaps are not caused by test-set leakage, but by distraction from additional context and poor second-hop reasoning. Overall, LLMs exhibit systematic differences in their reasoning abilities, despite what their performance on standard benchmarks indicates.
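To make the setup concrete, here is a minimal sketch of how two GSM-style problems can be chained so that the second depends on the first's answer. The prompt template, the placeholder "X", and the `make_compositional_pair` helper are illustrative assumptions, not the paper's exact format or released code.

```python
# Minimal sketch of chaining two word problems, as described in the abstract.
# The prompt template and the placeholder "X" are illustrative assumptions,
# not the paper's exact format.

def make_compositional_pair(q1: str, q2_with_placeholder: str) -> str:
    """Build one prompt in which the answer to Q1 is a quantity in Q2.

    `q2_with_placeholder` references X, the (unknown) answer to Q1, so the
    model must solve Q1 correctly before it can solve Q2.
    """
    return (
        f"Question 1: {q1}\n"
        f"Question 2: Let X be the answer to Question 1. {q2_with_placeholder}\n"
        "Answer Question 2."
    )

prompt = make_compositional_pair(
    "Ali has 4 bags with 6 apples each. How many apples does Ali have?",
    "A crate holds X apples. How many apples do 3 crates hold?",
)
print(prompt)
```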
Community
If your model can solve a single elementary-school math problem with accuracy 0.9, then its chance of solving two problems (one problem per prompt) is 0.9 × 0.9 = 0.81.
But if you include both problems in one prompt and make the second problem depend on the answer to the first, the accuracy is lower than 0.81.
The smaller the model, the bigger the gap (see the sketch below).
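A quick sketch of the arithmetic in the comment above; the 0.75 compositional accuracy is a made-up number for illustration, not a reported result:

```python
# Independence baseline vs. measured compositional accuracy.
acc_single = 0.9                 # accuracy when each problem is asked alone
expected_pair = acc_single ** 2  # 0.81 if the two hops were independent
measured_pair = 0.75             # hypothetical accuracy on chained pairs
gap = expected_pair - measured_pair
print(f"expected={expected_pair:.2f}, measured={measured_pair}, gap={gap:.2f}")
```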
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- Mutual Reasoning Makes Smaller LLMs Stronger Problem-Solvers (2024)
- LLaMa-SciQ: An Educational Chatbot for Answering Science MCQ (2024)
- InfinityMATH: A Scalable Instruction Tuning Dataset in Programmatic Mathematical Reasoning (2024)
- Smaller, Weaker, Yet Better: Training LLM Reasoners via Compute-Optimal Sampling (2024)
- MMLU-Pro+: Evaluating Higher-Order Reasoning and Shortcut Learning in LLMs (2024)