Model Information

This is a 4-bit quantized version of Meta's Llama-3.3-70B-Instruct multilingual large language model (LLM).

Model developer: Meta
Quantization: basic HQQ quantization (https://github.com/mobiusml/hqq)
Parameters: nbits=4, group_size=64

Quantized by Gábor Madarász
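HQQ itself fits the scale and zero-point per group with a half-quadratic optimization, which is more involved than what fits here. As a simplified sketch, the effect of the two parameters above can be illustrated with plain affine group quantization: nbits=4 means each weight is stored as an integer in [0, 15], and group_size=64 means every run of 64 weights gets its own scale/zero pair. All names below are illustrative, not the HQQ API.

```python
def quantize_group(weights, nbits=4):
    # Uniform affine quantization of one group:
    # map floats onto the integer grid [0, 2**nbits - 1].
    qmax = 2**nbits - 1
    wmin, wmax = min(weights), max(weights)
    scale = (wmax - wmin) / qmax if wmax > wmin else 1.0
    q = [round((w - wmin) / scale) for w in weights]
    return q, scale, wmin  # wmin acts as the zero-point offset

def dequantize_group(q, scale, zero):
    # Reconstruct approximate float weights from the 4-bit codes.
    return [v * scale + zero for v in q]

# One group of 64 synthetic weights in roughly [-1, 1].
group_size = 64
weights = [((i * 37) % 101 - 50) / 50 for i in range(group_size)]

q, scale, zero = quantize_group(weights)
recon = dequantize_group(q, scale, zero)
max_err = max(abs(a - b) for a, b in zip(weights, recon))
# Reconstruction error is bounded by half a quantization step (scale / 2);
# small groups keep outliers from distorting the whole tensor's scale.
```

Because each 64-weight group is scaled independently, an outlier only inflates the quantization step within its own group rather than across the whole weight matrix, which is why smaller group sizes trade extra scale/zero storage for lower error.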


Model: GaborMadarasz/Llama-3.3-70-Instruct_HQQ_4bit