float16 or bfloat16?

#1
by SeanScripts - opened

This quantization of the model currently uses float16. I ran a quick check on the error introduced by quantizing each tensor and found that the average error per parameter is an order of magnitude higher with bfloat16 than with float16. I also checked the error in the log base 2 of the parameters, since bfloat16 should cover a wider range of magnitudes; however, the average error in the log of the parameters was also about an order of magnitude higher for bfloat16.

Calculated over all tensors except the embedding and final output tensors, which I couldn't fit in my available memory for this computation (probably could have with a more efficient implementation, but this should be a good start):
Average fp16 error: 3.756e-05
Average fp16 error in the log2 of the params: 0.000934
Average bf16 error: 0.000268
Average bf16 error in the log2 of the params: 0.00720
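For reference, here's roughly how these numbers can be computed. This is a minimal sketch rather than my exact script: it assumes the original higher-precision weights are available locally as a safetensors file, and the shard filename, the name-based skip filter, and the zero-handling in the log2 error are all placeholders/assumptions.

```python
import torch
from safetensors.torch import load_file

def cast_errors(ref: torch.Tensor, dtype: torch.dtype):
    """Mean absolute error (and error in log2 magnitude) from casting `ref` to `dtype`."""
    ref = ref.float()
    cast = ref.to(dtype).float()
    abs_err = (ref - cast).abs().mean().item()
    # Compare log2 magnitudes, skipping exact zeros and clamping to avoid -inf
    mask = ref != 0
    log_err = (ref[mask].abs().log2()
               - cast[mask].abs().clamp_min(1e-38).log2()).abs().mean().item()
    return abs_err, log_err

# Hypothetical shard name; a real checkpoint may span several files
state_dict = load_file("model-00001-of-00002.safetensors")

totals = {torch.float16: [0.0, 0.0], torch.bfloat16: [0.0, 0.0]}
n_params = 0
for name, tensor in state_dict.items():
    # Skip the embedding and output tensors, matching the numbers above
    if "embed" in name or "lm_head" in name:
        continue
    n = tensor.numel()
    n_params += n
    for dtype in totals:
        abs_err, log_err = cast_errors(tensor, dtype)
        totals[dtype][0] += abs_err * n
        totals[dtype][1] += log_err * n

for dtype, (abs_sum, log_sum) in totals.items():
    print(f"{dtype}: avg error {abs_sum / n_params:.3e}, "
          f"avg log2 error {log_sum / n_params:.3e}")
```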

I don't have the resources to measure perplexity or run benchmarks, which would probably be a better comparison. Benchmarks would be especially useful for seeing how much performance is lost relative to the unquantized model, if anyone has the resources to run them. I might be able to run a few slowly with a small sample size.
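If anyone wants to try the perplexity route, a quick-and-dirty check could look like the sketch below. The model id and the WikiText-2 sample are placeholders I picked for illustration, not something I've run, and ~50 documents is far too small for rigorous numbers; it only gives a rough signal for comparing the fp16 and bf16 uploads.

```python
import math
import torch
from datasets import load_dataset
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "org/model-name"  # placeholder for whichever upload you want to test
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map needs accelerate
)
model.eval()

# Small WikiText-2 sample as a rough proxy; too small for rigorous numbers
texts = load_dataset("wikitext", "wikitext-2-raw-v1", split="test[:50]")["text"]

nll_sum, n_tokens = 0.0, 0
with torch.no_grad():
    for text in texts:
        if not text.strip():
            continue
        enc = tokenizer(text, return_tensors="pt", truncation=True,
                        max_length=1024).to(model.device)
        out = model(**enc, labels=enc["input_ids"])
        n = enc["input_ids"].numel()
        nll_sum += out.loss.item() * n  # loss is the mean NLL per token
        n_tokens += n

print(f"Approximate perplexity: {math.exp(nll_sum / n_tokens):.2f}")
```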

Let me know whether you'd prefer float16 or bfloat16 for this. I've mostly heard that bfloat16 is just better overall, but seeing an order of magnitude higher average error makes me question that. If you think the bfloat16 version would be better, I can upload it.
