Update README.md
README.md CHANGED
@@ -19,9 +19,9 @@ Huge thanks to [@mradermacher](https://huggingface.co/mradermacher) and [@bartow
 
 Bartowski quants (imatrix): [bartowski/Gemma-2-Ataraxy-9B-GGUF](https://huggingface.co/bartowski/Gemma-2-Ataraxy-9B-GGUF)
 
-Mradermacher quants (static): [mradermacher/Gemma-2-Ataraxy-9B-GGUF](https://huggingface.co/
+Mradermacher quants (static): [mradermacher/Gemma-2-Ataraxy-9B-GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-GGUF)
 
-Mradermacher quants (imatrix): [mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-GGUF)
+Mradermacher quants (imatrix): [mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF](https://huggingface.co/mradermacher/Gemma-2-Ataraxy-9B-i1-GGUF)
 
 I think bartowski and mradermacher use different calibration data for imatrix quants, or maybe you prefer static quants. Pick your poison :).
 
@@ -29,6 +29,16 @@ I think bartowski and mradermacher use different calibration data for imatrix qu
 
 Use Gemma 2 format.
 
+## Benchmarks and Leaderboard Rankings
+
+OpenLLM: Pending in Queue
+
+Creative Writing V2: Rank 1. That's right, much to everyone's surprise (mine included), this model has topped eqbench.com's creative writing benchmark.
+
+![Reddit](https://i.imgur.com/aP03a5d.png)
+
+![Leaderboard](https://i.imgur.com/gJd9Pab.png)
+
 ## Preface and Rambling
 
 My favorite Gemma 2 9B models are the SPPO iter3 and SimPO finetunes, but I felt the slerp merge between the two (nephilim v3) wasn't as good for some reason. The Gutenberg Gemma 2 finetune by nbeerbower is another of my favorites. It's trained on one of my favorite datasets, and actually improves the SPPO model's OpenLLM Leaderboard 2 average score by a bit, on top of improving its writing capabilities and making the LLM sound less AI-like. However, I still liked the original SPPO finetune just a bit more.
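
The "Use Gemma 2 format" line above refers to Gemma 2's standard chat template. Here is a minimal sketch (not part of the original card) of running one of the GGUF quants linked above with that template, assuming llama-cpp-python and the bartowski repo; the exact quant filename is a guess, so check the repo's file list for the size you want.

```python
# Sketch only: pull a quant from the bartowski repo and run it locally
# with the Gemma 2 chat template.
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

model_path = hf_hub_download(
    repo_id="bartowski/Gemma-2-Ataraxy-9B-GGUF",
    filename="Gemma-2-Ataraxy-9B-Q4_K_M.gguf",  # assumed filename; pick any quant from the repo
)

llm = Llama(model_path=model_path, n_ctx=8192)

# Gemma 2 format: a user turn, then an open model turn for the reply.
prompt = (
    "<start_of_turn>user\n"
    "Write a short scene set in a lighthouse during a storm.<end_of_turn>\n"
    "<start_of_turn>model\n"
)

out = llm(prompt, max_tokens=512, stop=["<end_of_turn>"])
print(out["choices"][0]["text"])
```

Stopping on `<end_of_turn>` keeps the model from rambling into a new turn, which matters for creative writing prompts like the one above.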