---
library_name: transformers
license: gemma
datasets:
- grimulkan/LimaRP-augmented
- LDJnr/Capybara
- TheSkullery/C2logs_Filtered_Sharegpt_Merged
- abacusai/SystemChat-1.1
- Hastagaras/FTTS-Stories-Sharegpt
tags:
- not-for-all-audiences
---
GGUF quantizations: [STATIC](https://huggingface.co/mradermacher/Gemmoy-9B-G2-MK.3-GGUF) / [IMATRIX](https://huggingface.co/mradermacher/Gemmoy-9B-G2-MK.3-i1-GGUF), made available by [mradermacher](https://huggingface.co/mradermacher).

Chat Template:
```
<start_of_turn>user
{system}
{prompt}<end_of_turn>
<start_of_turn>model
{response}<end_of_turn>
```
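
A minimal sketch of building a prompt in this format with the `transformers` tokenizer. The repo id below is an assumption based on this card's name; adjust it to the actual repository. Gemma-style chat templates typically have no separate `system` role, so the system prompt is folded into the first user turn here.

```python
from transformers import AutoTokenizer

# Assumed repo id; replace with the actual model repository.
tokenizer = AutoTokenizer.from_pretrained("Hastagaras/Gemmoy-9B-G2-MK.3")

system = "You are a helpful roleplay assistant."
prompt = "Describe the tavern the party just entered."

# Gemma templates usually reject a dedicated system role,
# so the system text is prepended to the first user message.
messages = [{"role": "user", "content": f"{system}\n{prompt}"}]

text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,  # appends "<start_of_turn>model\n"
)
print(text)
```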
SillyTavern (ST) Settings:
* [CONTEXT](https://huggingface.co/Hastagaras/Gemmoy-9B-MK.I/blob/main/Gemma2%20C.json)
* [INSTRUCT](https://huggingface.co/Hastagaras/Gemmoy-9B-MK.I/blob/main/Gemma2%20I.json) (The model was trained with diverse system prompts, so write the system prompt to match your preferences.)

LoRA Config:

```python
from peft import LoraConfig

# LoRA adapter configuration used for fine-tuning
lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
    use_rslora=True,  # rank-stabilized LoRA scaling
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
)
```
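
For context, a minimal sketch of how such a config is typically attached to a base model with PEFT. The base model id and training details below are assumptions for illustration, not the card's actual training script.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

# Assumed base model; the actual training setup is not published on this card.
model = AutoModelForCausalLM.from_pretrained("google/gemma-2-9b-it")

lora_config = LoraConfig(
    r=128,
    lora_alpha=32,
    lora_dropout=0.1,
    bias="none",
    use_rslora=True,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj",
                    "gate_proj", "up_proj", "down_proj"],
)

# Wrap the base model with the LoRA adapters.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # sanity-check the trainable parameter count
```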