
Mixtral 8x7B v0.1 Turkish

Description

This repo contains GGUF format model files for malhajar's Mixtral-8x7B-v0.1-turkish.

Original model: malhajar/Mixtral-8x7B-v0.1-turkish

Original model description

malhajar/Mixtral-8x7B-v0.1-turkish is a fine-tuned version of Mixtral-8x7B-v0.1 trained with SFT. The model can answer questions in Turkish, as it was fine-tuned on a Turkish dataset, specifically alpaca-gpt4-tr.
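
For reference, the original (unquantized) model can be loaded with the Hugging Face transformers library. The snippet below is a minimal sketch, not taken from the original model card: the generation settings and the example Turkish prompt are illustrative placeholders.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the original fine-tuned model (not the GGUF quantizations in this repo).
model_id = "malhajar/Mixtral-8x7B-v0.1-turkish"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",    # spread layers across available devices
    torch_dtype="auto",   # use the dtype stored in the checkpoint
)

# Example instruction (placeholder), formatted with the card's prompt template.
prompt = "### Instruction:\nTürkiye'nin başkenti neresidir?\n### Response:\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```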

Quantization types

| Quantization method | Bits | Size    | Description                            | Recommended |
| ------------------- | ---- | ------- | -------------------------------------- | ----------- |
| Q3_K_S              | 3    | 20.4 GB | very small, high quality loss          |             |
| Q3_K_L              | 3    | 26.4 GB | small, substantial quality loss        |             |
| Q4_0                | 4    | 26.4 GB | legacy; small, very high quality loss  |             |
| Q4_K_M              | 4    | 28.4 GB | medium, balanced quality               |             |
| Q5_0                | 5    | 33.2 GB | legacy; medium, balanced quality       |             |
| Q5_K_S              | 5    | 32.2 GB | large, low quality loss                |             |
| Q5_K_M              | 5    | 33.2 GB | large, very low quality loss           |             |
| Q6_K                | 6    | 38.4 GB | very large, extremely low quality loss |             |
| Q8_0                | 8    | 49.6 GB | very large, extremely low quality loss |             |
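
A single quantization can be fetched with huggingface_hub. The filename below is an assumed example of how the GGUF files might be named; check the repository's file list for the exact name.

```python
from huggingface_hub import hf_hub_download

# Download one quantization from the repo.
# NOTE: the filename is an assumed example; verify the exact GGUF
# filename in the repository before downloading.
model_path = hf_hub_download(
    repo_id="sayhan/Mixtral-8x7B-v0.1-turkish-GGUF",
    filename="mixtral-8x7b-v0.1-turkish.Q4_K_M.gguf",
)
print(model_path)
```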

Prompt Template

```
### Instruction:
<prompt> (without the <>)
### Response:
```
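
Below is a minimal llama-cpp-python sketch that applies this template to a downloaded GGUF file. The model path, generation parameters, and example instruction are placeholders, not values from this repo.

```python
from llama_cpp import Llama

# Path to a downloaded GGUF file (placeholder; use the file you fetched above).
llm = Llama(
    model_path="mixtral-8x7b-v0.1-turkish.Q4_K_M.gguf",
    n_ctx=4096,        # context window
    n_gpu_layers=-1,   # offload all layers to GPU if possible; set 0 for CPU only
)

def ask(instruction: str) -> str:
    # Fill the card's prompt template with the user instruction.
    prompt = f"### Instruction:\n{instruction}\n### Response:\n"
    out = llm(prompt, max_tokens=256, stop=["### Instruction:"])
    return out["choices"][0]["text"].strip()

print(ask("Türkiye'nin en kalabalık şehri hangisidir?"))
```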