Low-Bit Quantization Favors Undertrained LLMs: Scaling Laws for Quantized LLMs with 100T Training Tokens
Abstract
We reveal that low-bit quantization favors undertrained large language models (LLMs): models that are larger or trained on fewer tokens experience less quantization-induced degradation (QiD) when low-bit quantization is applied, whereas smaller models trained on extensive tokens suffer significant QiD. To gain deeper insights into this trend, we study over 1500 quantized LLM checkpoints of various sizes and at different training levels (undertrained or fully trained) in a controlled setting, deriving scaling laws that characterize the relationship between QiD and factors such as the number of training tokens, model size, and bit width. With the derived scaling laws, we propose a novel perspective: QiD can be used to measure an LLM's training level and to determine the number of training tokens required to fully train LLMs of various sizes. Moreover, we use the scaling laws to predict the quantization performance of different-sized LLMs trained with 100 trillion tokens. Our projection shows that the low-bit quantization performance of future models, which are expected to be trained with over 100 trillion tokens, may NOT be desirable. This poses a potential challenge for low-bit quantization in the future and highlights the need for awareness of a model's training level when evaluating low-bit quantization research. To facilitate future research on this problem, we release all the 1500+ quantized checkpoints used in this work at https://huggingface.co/Xu-Ouyang.
Community
Takeaways:
- We find that low-bit quantization favors undertrained LLMs, i.e., models that are either large or trained on a small number of tokens; for fully trained LLMs, it causes severe quantization-induced degradation (QiD) (Figure 2).
- We derive scaling laws to predict the QiD incurred when low-bit quantization is applied to a given LLM, based on its model size, number of training tokens, and bit width (Section 3.5); see the illustrative sketch after this list.
- We use QiD as a signal of whether an LLM is fully trained; with the derived scaling laws, we estimate that a 70B model requires about 17 trillion tokens to be relatively fully trained, while a 405B model needs nearly 50 trillion tokens.
- We use our derived scaling laws to predict the quantization-induced degradation for 7B, 70B, and 405B models trained with 100 trillion tokens when applying low-bit quantization.
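To make the takeaways concrete, here is a minimal Python sketch of how such a power-law scaling law could be evaluated and inverted: `predict_qid` estimates QiD from model size, training tokens, and bit width, and `tokens_at_qid` solves for the token count at which QiD reaches a chosen threshold (the idea behind the "fully trained" estimates above). The functional form and the coefficients `k`, `alpha`, `beta`, `gamma` are illustrative placeholders, not the fitted values from Section 3.5 of the paper; substituting the paper's fitted constants would be needed to reproduce its projections.

```python
# Sketch of a power-law QiD scaling law of the (assumed) form
#   delta_qLoss(N, D, P) = k * D**alpha / (N**beta * P**gamma)
# where N = model parameters, D = training tokens, P = bit width.
# The constant k and exponents alpha, beta, gamma are illustrative
# placeholders, NOT the fitted values reported in the paper.

def predict_qid(n_params: float, n_tokens: float, bit_width: int,
                k: float = 0.05, alpha: float = 0.5,
                beta: float = 0.2, gamma: float = 2.0) -> float:
    """Predicted quantization-induced degradation (increase in loss)."""
    return k * n_tokens ** alpha / (n_params ** beta * bit_width ** gamma)


def tokens_at_qid(target_qid: float, n_params: float, bit_width: int,
                  k: float = 0.05, alpha: float = 0.5,
                  beta: float = 0.2, gamma: float = 2.0) -> float:
    """Invert the power law: training tokens D at which QiD reaches target_qid."""
    return (target_qid * n_params ** beta * bit_width ** gamma / k) ** (1.0 / alpha)


if __name__ == "__main__":
    # Predicted QiD for a 405B model trained on 100T tokens, quantized to 4 bits.
    print(predict_qid(n_params=405e9, n_tokens=100e12, bit_width=4))

    # Training-token count at which a 70B model's 4-bit QiD crosses a chosen threshold.
    print(tokens_at_qid(target_qid=0.2, n_params=70e9, bit_width=4))
```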
Nice work!
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API:
- Scaling Laws for Mixed quantization in Large Language Models (2024)
- Scaling Laws for Precision (2024)
- Scaling Optimal LR Across Token Horizons (2024)
- Scaling laws for post-training quantized large language models (2024)
- GWQ: Gradient-Aware Weight Quantization for Large Language Models (2024)
- Channel-Wise Mixed-Precision Quantization for Large Language Models (2024)
- CrossQuant: A Post-Training Quantization Method with Smaller Quantization Kernel for Precise Large Language Model Compression (2024)