Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24-4Bit-GPTQ

Quantization

  • This model was quantized with the AutoGPTQ library, using a calibration dataset of English and Russian Wikipedia articles. It achieves lower perplexity on Russian data than other GPTQ models.
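A minimal usage sketch for loading a GPTQ checkpoint like this one through the `transformers` GPTQ integration (this assumes `auto-gptq` or `gptqmodel` and a CUDA GPU are available; the helper function below is illustrative, not part of the repo):

```python
MODEL_ID = "Vikhrmodels/Vikhr-Nemo-12B-Instruct-R-21-09-24-4Bit-GPTQ"


def load_quantized(model_id: str = MODEL_ID):
    """Load the 4-bit GPTQ checkpoint and its tokenizer.

    Imports are done lazily so the sketch can be read and type-checked
    without transformers installed; loading requires auto-gptq and a GPU.
    """
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    # device_map="auto" places the packed int32/fp16 tensors on available GPUs
    model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")
    return tokenizer, model


if __name__ == "__main__":
    tokenizer, model = load_quantized()
    # Example prompt in Russian: "Hi! How are you?"
    messages = [{"role": "user", "content": "Привет! Как дела?"}]
    inputs = tokenizer.apply_chat_template(messages, return_tensors="pt").to(model.device)
    outputs = model.generate(inputs, max_new_tokens=128)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```

The heavy model download only happens inside the `__main__` guard, so the module can be imported cheaply.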
Safetensors
Model size: 2.8B params
Tensor types: I32, FP16

Model tree for qilowoq/Vikhr-Nemo-12B-Instruct-R-21-09-24-4Bit-GPTQ