# Latxa 7B Instruct GGUF

## Provided files

| Name | Quant method | Bits | Size | Max RAM required | Use case |
| ---- | ------------ | ---- | ---- | ---------------- | -------- |
| latxa-7b-v1-instruct-q8_0.gguf | Q8_0 | 8 | 7 GB | 8.2 GB | Fits on an RTX 3060 12 GB |
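
To try the file locally, here is a minimal sketch using llama-cpp-python (not part of this card, just one common way to run GGUF files). The context size, GPU offload setting, and prompt below are illustrative assumptions; in particular, check the base model's card for the exact instruction format Latxa Instruct expects.

```python
# pip install llama-cpp-python
from llama_cpp import Llama

# Load the Q8_0 GGUF. n_gpu_layers=-1 offloads every layer to the GPU
# (e.g. an RTX 3060 12 GB); set it to 0 for CPU-only inference.
llm = Llama(
    model_path="latxa-7b-v1-instruct-q8_0.gguf",
    n_ctx=2048,
    n_gpu_layers=-1,
)

# Plain completion call with a placeholder Basque prompt; adapt it to the
# instruction template the model was fine-tuned with.
output = llm("Kaixo! Nor zara?", max_tokens=128)
print(output["choices"][0]["text"])
```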
## Model details

- Format: GGUF
- Model size: 6.74B params
- Architecture: llama

## Model tree for oldbridge/latxa-7b-instruct-q8

- Base model: HiTZ/latxa-7b-v1
- This model: Q8_0 quantization of the base model
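
If it helps, a minimal sketch for downloading the quantized file from this repository with huggingface_hub, using the repo id and filename listed above:

```python
# pip install huggingface_hub
from huggingface_hub import hf_hub_download

# Download the GGUF file into the local Hugging Face cache and return its path.
gguf_path = hf_hub_download(
    repo_id="oldbridge/latxa-7b-instruct-q8",
    filename="latxa-7b-v1-instruct-q8_0.gguf",
)
print(gguf_path)
```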