
Ritwik Mishra

ritwikm

AI & ML interests

None yet

Recent Activity

replied to mmhamdy's post about 2 months ago
💡 Thinking Tokens For Language Models!

How much is 56 times 37? Can you answer that right away? In a short paper, David Herel and Tomas Mikolov propose a simple method to improve the reasoning of language models when performing complex calculations.

📌 They note that, although language models are not that good with difficult calculations, humans also cannot perform these calculations immediately and require a considerable amount of time to come up with an answer. Inspired by this, they introduce 💡Thinking Tokens💡

So what are those "thinking tokens"?! Nothing fancy, they are just special tokens '<T>' that you insert after each word in a sentence whenever a complex problem is encountered. That's it!

👉 The main idea is to "buy" the model "some time" to think about the problem with these additional computations before answering. Using this method they observed a slightly improved perplexity.

👉 Before getting excited, note that they added these tokens manually and used an RNN language model. From the paper: "As a proof of concept, we have added N 'thinking tokens' (<T>) after each observed word in a dataset. Our vision is that this basic concept can be extended to a self-adjusting model, which will be able to decide itself if and how many 'thinking tokens' will be used for a specific problem, where N could also vary throughout the sentence. This would allow us to reduce the computational time, which would not increase N times."
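A minimal sketch of the preprocessing step described in the post, assuming the method simply interleaves N copies of the special '<T>' token after every word (the function name and the value of N are illustrative, not taken from the paper):

```python
def add_thinking_tokens(text: str, n: int = 1, token: str = "<T>") -> str:
    """Insert n 'thinking tokens' after each whitespace-separated word."""
    padded = []
    for word in text.split():
        padded.append(word)
        padded.extend([token] * n)  # give the model n extra "thinking" steps
    return " ".join(padded)

print(add_thinking_tokens("How much is 56 times 37?", n=2))
# How <T> <T> much <T> <T> is <T> <T> 56 <T> <T> times <T> <T> 37? <T> <T>
```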

Organizations

Hugging Face Discord Community

ritwikm's activity

replied to mmhamdy's post about 2 months ago

The paper judges the effectiveness of this approach only through perplexity. Perplexity is basically "how perplexed (surprised) your language model is when predicting a token". If a language model generates words at random, perplexity will be very high; if the LM is confident about a small set of candidate words, perplexity will be low. So inserting a predefined fixed token after every token will naturally make the LM more confident about the next word, and perplexity will drop for that reason alone, won't it?
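For reference, perplexity is just the exponential of the average negative log-likelihood of the tokens, exp(-(1/N) Σ log p(x_i | x_<i)). A rough sketch of measuring it with the Hugging Face transformers library (the gpt2 checkpoint here is only illustrative; the paper itself uses an RNN language model):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

text = "How much is 56 times 37?"
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    # Passing labels=input_ids makes the model return the mean
    # cross-entropy loss over the predicted next tokens.
    outputs = model(**inputs, labels=inputs["input_ids"])

perplexity = torch.exp(outputs.loss)
print(f"Perplexity: {perplexity.item():.2f}")
```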

New activity in ritwikm/gandhi-gpt over 1 year ago