Releasing an HQQ 4-bit quantized version of Llama-3.1-70b! Check it out at
mobiuslabsgmbh/Llama-3.1-70b-instruct_4bitgs64_hqq.
Achieves 99% of the base model performance across various benchmarks! Details in the model card.
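
If you want to try it right away, here is a minimal loading sketch in Python. It assumes the hqq library's Hugging Face engine (`HQQModelForCausalLM.from_quantized`) and a CUDA device; the model card has the officially recommended setup, which may differ.

```python
# Minimal loading sketch; this is an assumption, not the official snippet
# (see the model card for the recommended setup).
from hqq.engine.hf import HQQModelForCausalLM, AutoTokenizer

model_id = "mobiuslabsgmbh/Llama-3.1-70b-instruct_4bitgs64_hqq"

# Load the pre-quantized 4-bit (group size 64) weights and the tokenizer.
model = HQQModelForCausalLM.from_quantized(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

# Quick generation check (assumes the model was loaded onto a CUDA device).
inputs = tokenizer("What is HQQ quantization?", return_tensors="pt").to("cuda")
outputs = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```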