Tags: Text Generation, GGUF, Chinese, Inference Endpoints, conversational

QuantFactory/Gemma-2-2b-Chinese-it-GGUF

This is a quantized version of stvlynn/Gemma-2-2b-Chinese-it, created using llama.cpp.

Original Model Card

Gemma-2-2b-Chinese-it (Gemma-2-2b-中文)

Intro

Gemma-2-2b-Chinese-it was fine-tuned from Gemma-2-2b-it on approximately 6.4k rows of the ruozhiba dataset.



Usage

For usage instructions, see Google's documentation for google/gemma-2-2b-it.
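Since this repository ships GGUF files, a minimal local-inference sketch using llama-cpp-python could look like the following. The quant file name, context size, and prompt are illustrative assumptions, not part of the original card; substitute the file you actually downloaded.

```python
# Minimal sketch: run a downloaded GGUF quant locally with llama-cpp-python.
# Assumption: a .gguf file from this repo is already on disk; the name below is hypothetical.
from llama_cpp import Llama

llm = Llama(
    model_path="Gemma-2-2b-Chinese-it.Q4_K_M.gguf",  # hypothetical quant file name
    n_ctx=4096,  # context window size
)

# Chinese prompt: "Introduce yourself in one sentence."
response = llm.create_chat_completion(
    messages=[{"role": "user", "content": "用一句话介绍一下你自己。"}],
    max_tokens=256,
)
print(response["choices"][0]["message"]["content"])
```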


If you have any questions or suggestions, feel free to contact me.

Twitter: @stv_lynn

Telegram: @stvlynn

Email: i@stv.pm

GGUF details

Model size: 2.61B params
Architecture: gemma2
Available quantizations: 2-bit, 3-bit, 4-bit, 5-bit, 6-bit, 8-bit
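The exact .gguf file names for each quant level are not listed on this card, so one option is to list the repo files with huggingface_hub and download whichever level fits your hardware; the snippet below is a sketch under that assumption.

```python
# Sketch: discover the available .gguf quants in this repo and download one.
from huggingface_hub import list_repo_files, hf_hub_download

repo_id = "QuantFactory/Gemma-2-2b-Chinese-it-GGUF"

# See which quant files are actually published in the repo.
gguf_files = [f for f in list_repo_files(repo_id) if f.endswith(".gguf")]
print(gguf_files)

# Download one of them (pick a specific name from the listing above).
local_path = hf_hub_download(repo_id=repo_id, filename=gguf_files[0])
print(local_path)
```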


Model tree for QuantFactory/Gemma-2-2b-Chinese-it-GGUF

Base model: google/gemma-2-2b (this model is a quantized derivative)

Datasets used to train QuantFactory/Gemma-2-2b-Chinese-it-GGUF: ruozhiba (approximately 6.4k rows; see Intro).