Papers
arxiv:2410.02426

Learning the Latent Rules of a Game from Data: A Chess Story

Published on Oct 3
· Submitted by BFauber on Oct 4
Abstract

We demonstrate that small pretrained foundational generative language models with millions of parameters can learn the latent rules of a process from data associated with that process. Inspired by Stefan Zweig's novella "Schachnovelle," known in English as "The Royal Game," we show that pretrained foundational small language models (SLMs) with 28M and 125M parameters can be instruction fine-tuned on 1,000 to 1,000,000 examples to learn the rules of chess, propose legal moves, and accurately solve chess problems. We also explore the impact of successive fine-tuning epochs on model performance and demonstrate that increasing the number of instruction fine-tuning examples reduces model hallucinations.

Community

Paper author and submitter:

Instruction fine-tuned small language models (SLMs) with 28M and 125M parameters can learn chess rules and solve chess problems using only single-move game play data. Additionally, increasing the number of fine-tuning examples reduces model hallucinations and enhances performance, highlighting the importance of relevant data in customizing GenAI language model pipelines.
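To make "single-move game play data" concrete, here is a minimal sketch of how such instruction fine-tuning examples might be constructed from move lists: one example per move, with the preceding moves as context and the next move as the target. The prompt template, field names, and `make_examples` helper are assumptions for illustration, not the authors' actual data format.

```python
def make_examples(games):
    """Turn lists of SAN moves into (instruction, response) pairs,
    one example per move, using only single-move game play data."""
    examples = []
    for moves in games:
        for i in range(1, len(moves)):
            prompt = (
                "You are playing chess. The moves so far are: "
                + " ".join(moves[:i])
                + ". Propose a legal next move."
            )
            examples.append({"instruction": prompt, "response": moves[i]})
    return examples

# A four-move opening fragment yields three training examples.
games = [["e4", "e5", "Nf3", "Nc6"]]
data = make_examples(games)
print(len(data))            # 3
print(data[0]["response"])  # e5
```

Scaling the number of games fed into a generator like this is how a corpus could be grown from 1,000 to 1,000,000 examples, matching the range explored in the paper.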

