Bartowski! Let's see how your imatrix differs from mine.
@bartowski I've been thinking about our conversation on quantization and imatrices, and I have been experimenting with f16.Q8 GGUFs of Hermes-3-Llama-3.1-8B. I don't like the f16; I much prefer the bf16.Q8 GGUF. However, llama.cpp doesn't support optimized bf16 inference on Metal (Apple M1, M2, M3, ..., MN), so out of sheer curiosity I converted Hermes-3-Llama-3.1-8B to F32 and computed the imatrix for it using your calibration dataset. I ran a diff of my F32.imatrix against your imatrix, and they differ. I wonder whether an F32 conversion tracks the original activations more closely than f16 does. Here goes...
Here's my F32.imatrix of Hermes-3-Llama-3.1-8B: https://huggingface.co/Joseph717171/Imatrices/resolve/main/Hermes-3-Llama8B-F32.imatrix (I computed this on an RTX A6000 RunPod instance)
Final estimate: PPL = 8.0472 +/- 0.12701
I only ran ./llama-imatrix -m "$model" -f "calibration_datav3.txt" -fa -ngl 10000 -o $model/Hermes-3-Llama8B-F32.imatrix
Let me know what you find out. I want to know why they differ.
Happy testing!
@bartowski From my initial playing around with a Q8_0.IQ6_K GGUF, it appears my hypothesis is holding up: computing the imatrix on the F32 of the model instead of the f16 (because Hermes-3 is distributed in bf16, not f16) yields superior outputs. In other words, computing an imatrix on the F32 of a model whose native datatype is bf16 gives better results because F32 captures all of the values from the model's bf16 weights, whereas, as we've said in previous discussions, f16 doesn't always.
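To make the bf16-vs-f16 point concrete, here is a small numpy sketch (illustrative values only, nothing taken from Hermes-3; the bf16 rounding is emulated by bit-masking rather than by llama.cpp's actual conversion code). It shows that a bf16 weight can fall outside f16's range or precision, while an F32 copy preserves it exactly:

```python
# Sketch: emulate f32 -> bf16 by truncating the low 16 mantissa bits (real
# conversions round-to-nearest, but the effect is the same for this purpose),
# and compare with an f32 -> f16 round-trip.
import numpy as np

def via_bf16(x: np.ndarray) -> np.ndarray:
    """Round-trip f32 -> bf16 -> f32 (approximated by keeping the top 16 bits)."""
    bits = x.astype(np.float32).view(np.uint32)
    return (bits & 0xFFFF0000).view(np.float32)

def via_f16(x: np.ndarray) -> np.ndarray:
    """Round-trip f32 -> f16 -> f32."""
    return x.astype(np.float32).astype(np.float16).astype(np.float32)

weights = np.array([1.0009765625, 3.2e5, 1e-6], dtype=np.float32)

print("original f32:", weights)
print("via bf16    :", via_bf16(weights))  # keeps the large value, drops fine mantissa detail
print("via f16     :", via_f16(weights))   # 3.2e5 overflows to inf, 1e-6 loses precision
```

Since every bf16 value round-trips to F32 losslessly but not necessarily to f16, computing the imatrix from the F32 conversion at least cannot lose anything, which is the core of the hypothesis.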
Please test my imatrix and let me know your thoughts. I value our discussions, and I welcome your skepticism and criticism.
diff $a $b
Binary files /Users/jsarnecki/opt/Workspace/NousResearch/Hermes-3-Llama-3.1-8B/Hermes-3-Llama-3.1-8B-Q8_0.IQ8_0.gguf and /Users/jsarnecki/opt/Models/LLAMA 3.1/NousResearch/Hermes-3-Llama-3.1-8B/Hermes-3-Llama-3.1-8B-Q8_0.Q8_0.gguf differ
The Q8_0.IQ8_0 GGUF differs from the Q8_0.Q8_0 GGUF, so I think the imatrix affects Q8_0 quantization as well!
The differences in Q8 would more likely be from the conversion step, since in the code imatrix is explicitly disabled when doing Q8 :)
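For context, here is a rough numpy sketch of what Q8_0-style block quantization does (mirroring the idea in ggml, not the actual C code). Each block of 32 weights gets one scale d = max|x|/127 and 32 int8 values q = round(x/d); there is no per-weight importance term anywhere in that formula, which is why an imatrix has nothing to influence at Q8_0:

```python
import numpy as np

QK8_0 = 32  # Q8_0 block size in ggml

def quantize_q8_0(x: np.ndarray):
    """Quantize a 1-D f32 array (length a multiple of 32) to (f16 scales, int8 values)."""
    blocks = x.reshape(-1, QK8_0)
    amax = np.abs(blocks).max(axis=1, keepdims=True)
    d = amax / 127.0
    d[d == 0] = 1.0  # all-zero block: any scale works, avoid division by zero
    q = np.clip(np.round(blocks / d), -127, 127).astype(np.int8)
    return d.astype(np.float16), q

def dequantize_q8_0(d: np.ndarray, q: np.ndarray) -> np.ndarray:
    return (q.astype(np.float32) * d.astype(np.float32)).reshape(-1)

x = np.random.randn(64).astype(np.float32)
d, q = quantize_q8_0(x)
print("max abs error:", np.abs(dequantize_q8_0(d, q) - x).max())
```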
I'd be very curious about some KLD measurements, that's the main thing I'll look at, did you upload your files?
I had done some KLD tests comparing F32 to F16, and while there were differences, they were on the order of 0.1% or less and swung in both directions pretty regularly. I concluded that the cost/benefit of F32/bf16 wasn't worth it, and I think I'm fine standing by that overall.
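For anyone following along, this is roughly what a KLD comparison measures: the divergence between the baseline model's next-token distribution and the quant's, averaged over tokens (llama-perplexity's --kl-divergence mode reports this kind of statistic; the sketch below just uses random stand-in logits so it runs on its own):

```python
import numpy as np

def log_softmax(logits: np.ndarray) -> np.ndarray:
    z = logits - logits.max(axis=-1, keepdims=True)
    return z - np.log(np.exp(z).sum(axis=-1, keepdims=True))

def mean_kld(base_logits: np.ndarray, quant_logits: np.ndarray) -> float:
    """Mean KL(base || quant) per token; both arrays are [n_tokens, n_vocab]."""
    log_p = log_softmax(base_logits)
    log_q = log_softmax(quant_logits)
    return float((np.exp(log_p) * (log_p - log_q)).sum(axis=-1).mean())

rng = np.random.default_rng(0)
base = rng.normal(size=(8, 128)).astype(np.float32)       # stand-in for baseline logits
quant = base + rng.normal(scale=0.05, size=base.shape)    # stand-in for a quant's logits
print(f"mean KLD: {mean_kld(base, quant):.6f}")
```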
I am curious about potential differences in which weights come out as important; I know @compilade is loosely toying with a way to compare imatrix files that could provide some insight.
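One possible shape for such a comparison (purely a sketch, not compilade's tool): assuming the per-tensor importance vectors from two imatrix files have already been parsed into dicts of numpy arrays (no loader is shown here, since the on-disk format is llama.cpp-specific), cosine similarity per tensor shows where the two files agree on which channels matter, independent of scale:

```python
import numpy as np

def compare_imatrices(a: dict[str, np.ndarray], b: dict[str, np.ndarray]) -> dict[str, float]:
    """Cosine similarity of per-tensor importance vectors, for tensors present in both files."""
    sims = {}
    for name in sorted(a.keys() & b.keys()):
        va, vb = a[name].ravel(), b[name].ravel()
        if va.shape != vb.shape:
            continue  # layouts disagree; skip rather than guess
        denom = np.linalg.norm(va) * np.linalg.norm(vb)
        sims[name] = float(va @ vb / denom) if denom > 0 else float("nan")
    return sims

# Toy data standing in for two parsed imatrix files:
a = {"blk.0.attn_q.weight": np.array([1.0, 2.0, 3.0]), "blk.0.ffn_up.weight": np.array([0.5, 0.5, 4.0])}
b = {"blk.0.attn_q.weight": np.array([1.1, 1.9, 3.2]), "blk.0.ffn_up.weight": np.array([4.0, 0.4, 0.6])}
for name, sim in sorted(compare_imatrices(a, b).items(), key=lambda kv: kv[1]):
    print(f"{sim:.4f}  {name}")
```

Tensors with low similarity would be the first place to look for where the F32 and f16 imatrix runs actually disagree.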
Overall though, I don't think there's much, if anything, to be gained by imatrixing at F32 or BF16.
Here's the F32.imatrix:
https://huggingface.co/Joseph717171/Imatrices/resolve/main/Hermes-3-Llama8B-F32.imatrix
Q8_0.IQ8_0 GGUF of Hermes-3-Llama-3.1-8B