
QuantFactory/gemma-2-9b-it-SimPO-rudpo-GGUF

This is a quantized version of radm/gemma-2-9b-it-SimPO-rudpo, created using llama.cpp.
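
A minimal loading sketch (not part of the original card), assuming llama-cpp-python and huggingface_hub are installed. The quantized filename below is a guess; check the repo's file list for the variant you want.

```python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Download one GGUF file from this repo (filename is hypothetical; pick a real one).
model_path = hf_hub_download(
    repo_id="QuantFactory/gemma-2-9b-it-SimPO-rudpo-GGUF",
    filename="gemma-2-9b-it-SimPO-rudpo.Q4_K_M.gguf",
)

# Load the quantized model with llama-cpp-python.
llm = Llama(model_path=model_path, n_ctx=4096)
```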

Original Model Card

Gemma-2-9b-it-SimPO-rudpo

Model Information

Improves Russian-language output quality compared to the base model. Evaluated on arena-hard questions in Russian:

gemma-2-9b-it-SimPO-rudpo           | score: 91.9  | 95% CI:   (-0.9, 0.9)   | average #tokens: 1013   <---- THIS MODEL
gemma-2-9b-it-SimPO                 | score: 90.9  | 95% CI:   (-0.9, 1.1)   | average #tokens: 1065
gemma-2-27b-it-FP8                  | score: 82.0  | 95% CI:   (-1.3, 1.5)   | average #tokens: 799
gemma-2-9b-it                       | score: 67.0  | 95% CI:   (-1.8, 1.7)   | average #tokens: 760
gemma-2-2b-it-abl-rudpo             | score: 61.6  | 95% CI:   (-1.7, 2.2)   | average #tokens: 1121
gemma-2-2b-it-abl                   | score: 48.8  | 95% CI:   (-1.9, 2.0)   | average #tokens: 783
gemma-2b-it                         | score:  8.8  | 95% CI:   (-1.1, 1.0)   | average #tokens: 425
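
For reference, a hedged sketch of a Russian-language query with llama-cpp-python, assuming `llm` is the `Llama` instance from the loading sketch above. Recent llama-cpp-python versions should pick up the Gemma-2 chat template from the GGUF metadata.

```python
# Ask the model a question in Russian and print the reply.
response = llm.create_chat_completion(
    messages=[
        {"role": "user", "content": "Объясни простыми словами, что такое квантование модели."}
    ],
    max_tokens=256,
    temperature=0.7,
)
print(response["choices"][0]["message"]["content"])
```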
Format: GGUF
Model size: 9.24B params
Architecture: gemma2

Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
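
To pick a specific quantization level, one option is to list the repo's GGUF files with huggingface_hub and filter by the quant suffix; a small sketch (the "Q4" suffix convention is an assumption, inspect the actual filenames):

```python
from huggingface_hub import list_repo_files

# List all GGUF files in the repo and keep the 4-bit variants.
files = list_repo_files("QuantFactory/gemma-2-9b-it-SimPO-rudpo-GGUF")
gguf_files = [f for f in files if f.endswith(".gguf")]
four_bit = [f for f in gguf_files if "Q4" in f]
print(four_bit)
```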


Model tree for QuantFactory/gemma-2-9b-it-SimPO-rudpo-GGUF

Base model: google/gemma-2-9b
Quantized versions: 24 (including this model)