arxiv:2412.06769

Training Large Language Models to Reason in a Continuous Latent Space

Published on Dec 9
· Submitted by Shibo-UCSD on Dec 10
#3 Paper of the day

Abstract

Large language models (LLMs) are restricted to reason in the "language space", where they typically express the reasoning process with a chain-of-thought (CoT) to solve a complex reasoning problem. However, we argue that language space may not always be optimal for reasoning. For example, most word tokens are primarily for textual coherence and not essential for reasoning, while some critical tokens require complex planning and pose huge challenges to LLMs. To explore the potential of LLM reasoning in an unrestricted latent space instead of using natural language, we introduce a new paradigm Coconut (Chain of Continuous Thought). We utilize the last hidden state of the LLM as a representation of the reasoning state (termed "continuous thought"). Rather than decoding this into a word token, we feed it back to the LLM as the subsequent input embedding directly in the continuous space. Experiments show that Coconut can effectively augment the LLM on several reasoning tasks. This novel latent reasoning paradigm leads to emergent advanced reasoning patterns: the continuous thought can encode multiple alternative next reasoning steps, allowing the model to perform a breadth-first search (BFS) to solve the problem, rather than prematurely committing to a single deterministic path like CoT. Coconut outperforms CoT in certain logical reasoning tasks that require substantial backtracking during planning, with fewer thinking tokens during inference. These findings demonstrate the promise of latent reasoning and offer valuable insights for future research.
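As a rough sketch of the paradigm the abstract describes, the key change is at the feedback step of autoregressive generation: standard CoT decodes the last hidden state into a discrete token and re-embeds it, while Coconut feeds the hidden state back as the next input embedding directly. The toy below is an illustration only, not the authors' implementation; the "model" is simulated with random matrices and a single `tanh` layer.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, HIDDEN = 50, 16

# Toy stand-ins for the LLM's components (random weights, illustration only).
embed = rng.normal(size=(VOCAB, HIDDEN))            # token id -> input embedding
W = rng.normal(size=(HIDDEN, HIDDEN)) / HIDDEN**0.5
unembed = rng.normal(size=(HIDDEN, VOCAB))          # hidden state -> vocab logits

def forward(x):
    """One simulated transformer step: input embedding -> last hidden state."""
    return np.tanh(x @ W)

def cot_step(h):
    """Standard CoT: decode the hidden state to a discrete token, re-embed it.
    Collapsing to a single argmax token is the information bottleneck."""
    token = int(np.argmax(h @ unembed))
    return embed[token]

def coconut_step(h):
    """Coconut: the last hidden state itself becomes the next input embedding,
    staying in the continuous space (no decode/re-embed round trip)."""
    return h

x = embed[3]                     # some input token
for _ in range(4):               # four latent "continuous thoughts"
    x = coconut_step(forward(x))
```

Because `coconut_step` never snaps the state onto one of the `VOCAB` embedding rows, the continuous thought can retain a superposition over alternative next steps, which is what the abstract credits for the BFS-like search behavior.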

Community

Paper author / submitter:

Coconut (Chain of Continuous Thought)

An X (Twitter) thread with a quick introduction: https://x.com/Ber18791531/status/1866561188664087017

Besides the obvious efficiency advantage of reasoning in latent space, I think it's extremely risky and dangerous for advanced models.
Compared to "normal" reasoning in non-latent tokens, it's very hard, perhaps impossible, to accurately see what the LLM is thinking or reasoning internally. You could train an autoencoder or a similar probe on the latent reasoning tensors, but how dependable and accurate that would be is questionable.
For example, in the test that was recently done with the new OpenAI o1 model, where it tried replacing a different model with itself in order to fulfill its goal, that action was still aligned with the instruction of fulfilling its goal but may not have been intended by humans. Behavior like that may only be reliably noticed if the reasoning happens in non-latent tokens, I think.


I thought this method was about exploring the "raw" form of reasoning rather than forcing models to formalize their thinking process through discrete tokens. It's likely harder to interpret, of course, but there are advantages to a more efficient process that is potentially unbounded by the discrete token space. It's a trade-off between efficiency and interpretability, in my opinion. CMIIW

