# Model Information
This is a 4-bit quantized version of Meta's Llama-3.3-70B-Instruct multilingual large language model (LLM).
- Model developer: Meta
- Quantization: basic HQQ quantization (https://github.com/mobiusml/hqq)
- Quantization parameters: nbits=4, group_size=64
Quantized by Gábor Madarász
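To give a feel for what the parameters nbits=4 and group_size=64 mean, here is a toy sketch of group-wise 4-bit quantization in plain Python. This uses simple min-max rounding for illustration only; it is not the actual HQQ algorithm, which fits scales and zero-points with a half-quadratic optimization. The idea shared by both is that each group of 64 weights gets its own scale and offset, and every weight is stored as one of 2**4 = 16 integer levels.

```python
# Toy group-wise 4-bit quantization (min-max rounding, NOT HQQ's solver).
# Each group of `group_size` weights shares one scale and zero-point.
def quantize_group(weights, nbits=4):
    levels = 2 ** nbits - 1                  # 15 representable steps for 4-bit
    w_min, w_max = min(weights), max(weights)
    scale = (w_max - w_min) / levels or 1.0  # avoid div-by-zero on constant groups
    q = [round((w - w_min) / scale) for w in weights]  # ints in [0, 15]
    dq = [qi * scale + w_min for qi in q]              # dequantized approximation
    return q, dq

def quantize(weights, group_size=64, nbits=4):
    q_all, dq_all = [], []
    for i in range(0, len(weights), group_size):
        q, dq = quantize_group(weights[i:i + group_size], nbits)
        q_all.extend(q)
        dq_all.extend(dq)
    return q_all, dq_all

weights = [0.01 * i for i in range(128)]     # two groups of 64 weights
q, dq = quantize(weights)
assert all(0 <= qi <= 15 for qi in q)        # every weight fits in 4 bits
```

With only 16 levels per group, the per-weight error is bounded by half a quantization step, which is why a reasonable group size (here 64) matters: smaller groups track the local weight range more tightly at the cost of storing more scales.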
# Model tree for GaborMadarasz/Llama-3.3-70-Instruct_HQQ_4bit
- Base model: meta-llama/Llama-3.1-70B
- Finetuned from: meta-llama/Llama-3.3-70B-Instruct