Uploaded model

  • Developed by: qingy2024
  • License: apache-2.0
  • Fine-tuned from: unsloth/gemma-2-2b-bnb-4bit

Note: This model uses a custom chat template:

    Below is the original text. Please rewrite it to correct any grammatical errors if any, improve clarity, and enhance overall readability.

    ### Original Text:
    {PROMPT HERE}

    ### Corrected Text:
    {MODEL'S OUTPUT HERE}

For optimal results, I recommend running this model with a temperature of 0.0 and a repeat penalty of 1.0.
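As an illustration, a small helper can wrap raw input text in the chat template shown above before it is passed to the model. This is a sketch: the function name `build_prompt` is hypothetical, but the template text itself is taken verbatim from this card.

```python
def build_prompt(text: str) -> str:
    """Wrap raw text in this model's grammar-correction chat template.

    The wording below must match the template the model was trained
    with, so it is reproduced verbatim from the model card.
    """
    return (
        "Below is the original text. Please rewrite it to correct any "
        "grammatical errors if any, improve clarity, and enhance overall "
        "readability.\n\n"
        "### Original Text:\n"
        f"{text}\n\n"
        "### Corrected Text:\n"
    )

# The model's completion is then generated after the final header line.
prompt = build_prompt("Their going to the park tomorow.")
print(prompt)
```

The resulting string ends with the `### Corrected Text:` header, so the model's generation continues directly with the corrected version of the input.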

Model details

  • Format: GGUF (8-bit and 16-bit quantizations available)
  • Model size: 2.61B params
  • Architecture: gemma2

Model tree for qingy2024/GRMR-2B-Instruct-GGUF

  • Base model: google/gemma-2-2b (this model is a quantized derivative)