
General Use Sampling:
Mistral-Nemo-12B is very sensitive to the temperature sampler; start with values around 0.3, or you may get strange results. MistralAI notes this in the Transformers section of the original model card.

Best Samplers:
I had the best results with the following settings for Nemo-12B-Marlin-v5 (a rough example of applying them is sketched after this list):
Temperature: 0.7-0.8
Top K: -1
Min P: 0.05
Rep Penalty: 1.03 (recommended to increase as context length grows; I find 1.10 works well at 16k+ context)
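
As a rough illustration only, here is a minimal sketch of passing these settings through llama-cpp-python (one of the llama.cpp bindings). The model filename and prompt are placeholders, and the parameter names follow llama-cpp-python's completion API; adjust to whatever frontend you actually use.

```python
# Minimal sketch, assuming llama-cpp-python is installed (pip install llama-cpp-python)
# and a quant from this repo has been downloaded locally. The filename is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Nemo-12B-Marlin-v5-Q6_K.gguf",  # hypothetical path to a downloaded quant
    n_ctx=16384,                                 # long context; raise Rep Penalty accordingly
)

out = llm(
    "Write a short scene aboard a sailing ship at dawn.",  # placeholder prompt
    max_tokens=256,
    temperature=0.75,     # 0.7-0.8 recommended above
    top_k=0,              # 0 disables the Top K cutoff in llama.cpp (the "-1" above means the same in some frontends)
    min_p=0.05,
    repeat_penalty=1.03,  # consider ~1.10 at 16k+ context
)
print(out["choices"][0]["text"])
```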

This is currently my favorite Mistral-Nemo finetune.

Original Model: UsernameJustAnother/Nemo-12B-Marlin-v5 (Thank you so much for your work ♥)

Official Quants: UsernameJustAnother/Nemo-12B-Marlin-v5-gguf (Currently only Q8_0)

How to Use: llama.cpp

Original Model License: Apache 2.0

Release Used: b3538

Quants

PPL = Perplexity, lower is better
Comparisons are of quantized (QX_X) Llama-3-8B against FP16 Llama-3-8B, so treat them as a rough guideline rather than exact figures for this model.

| Quant Type | Note | Size |
| ---------- | ---- | ---- |
| Q2_K | +3.5199 ppl @ Llama-3-8B | 4.79 GB |
| Q3_K_S | +1.6321 ppl @ Llama-3-8B | 5.53 GB |
| Q3_K_M | +0.6569 ppl @ Llama-3-8B | 6.08 GB |
| Q3_K_L | +0.5562 ppl @ Llama-3-8B | 6.56 GB |
| Q4_K_S | +0.2689 ppl @ Llama-3-8B | 7.12 GB |
| Q4_K_M | +0.1754 ppl @ Llama-3-8B | 7.48 GB |
| Q5_K_S | +0.1049 ppl @ Llama-3-8B | 8.52 GB |
| Q5_K_M | +0.0569 ppl @ Llama-3-8B | 8.73 GB |
| Q6_K | +0.0217 ppl @ Llama-3-8B | 10.1 GB |
| Q8_0 | +0.0026 ppl @ Llama-3-8B | 13.00 GB |