
This model was developed by the LLM research consortium of (주)미디어그룹사람과숲 and (주)마커.
The license is cc-by-nc-sa-4.0.

KoT-platypus2

CoT + KO-platypus2 = KoT-platypus2

Model Details

Model Developers: Kyujin Han (kyujinpy)

Input: Models input text only.

Output: Models generate text only.

Model Architecture
KoT-platypus2-7B is an auto-regressive language model based on the LLaMA2 transformer architecture.
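
To confirm the LLaMA2 backbone without downloading the weights, the model configuration can be inspected on its own. A minimal sketch using the standard transformers API (the commented values are what a LLaMA2-7B configuration is expected to contain, not quoted from this card):

```python
from transformers import AutoConfig

# Fetch only the model configuration (a small JSON file, no weights).
config = AutoConfig.from_pretrained("kyujinpy/KoT-platypus2-7B")
print(config.model_type)               # expected: "llama"
print(config.num_hidden_layers)        # transformer depth
print(config.hidden_size)              # hidden dimension
print(config.max_position_embeddings)  # maximum context length
```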

Repo Link
GitHub KoT-platypus: KoT-platypus2

Base Model
KO-Platypus2-7B-ex
More detail repo (GitHub): CoT-llama2
More detail repo (GitHub): KO-Platypus2

Training Dataset
I used KoCoT_2000, which is the kaist-CoT dataset translated into Korean with DeepL.
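
If you want to inspect the data, it can be loaded with the datasets library. A minimal sketch, assuming the set is published on the Hugging Face Hub under the id kyujinpy/KoCoT_2000 (the card only names it KoCoT_2000):

```python
from datasets import load_dataset

# ASSUMPTION: the Hub id "kyujinpy/KoCoT_2000" is inferred, not stated in
# the card; KoCoT_2000 is the kaist-CoT data translated into Korean with DeepL.
kocot = load_dataset("kyujinpy/KoCoT_2000")
print(kocot)              # available splits and sizes
print(kocot["train"][0])  # one chain-of-thought example
```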

I used a single A100 40GB GPU on Colab for training.

Training Hyperparameters

| Hyperparameters | Value |
| --- | --- |
| batch_size | 64 |
| micro_batch_size | 1 |
| Epochs | 15 |
| learning_rate | 1e-5 |
| cutoff_len | 4096 |
| lr_scheduler | linear |
| base_model | kyujinpy/KO-Platypus2-7B-ex |
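
Read together, batch_size 64 with micro_batch_size 1 implies 64 gradient-accumulation steps, and cutoff_len 4096 is the truncation length applied at tokenization time. Below is a minimal sketch of how these values map onto standard transformers TrainingArguments; the actual training script lives in the GitHub repos linked above, so this mapping is an assumption:

```python
from transformers import TrainingArguments

# Sketch only: the card's hyperparameters expressed as TrainingArguments.
# 64 (batch_size) / 1 (micro_batch_size) = 64 accumulation steps;
# cutoff_len=4096 would be applied when tokenizing, not here.
training_args = TrainingArguments(
    output_dir="./KoT-platypus2-7B",
    per_device_train_batch_size=1,    # micro_batch_size
    gradient_accumulation_steps=64,   # effective batch size of 64
    num_train_epochs=15,              # Epochs
    learning_rate=1e-5,               # learning_rate
    lr_scheduler_type="linear",       # lr_scheduler
    fp16=True,                        # half precision on the A100 40GB
)
```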

Model Benchmark

LM Eval Harness - Korean (polyglot branch)

Question Answering (QA)

COPA (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| Polyglot-ko-1.3b | 0.7196 | 0.7193 | 0.7204 | 0.7206 |
| Polyglot-ko-3.8b | 0.7595 | 0.7608 | 0.7638 | 0.7788 |
| Polyglot-ko-5.8b | 0.7745 | 0.7676 | 0.7775 | 0.7887 |
| Polyglot-ko-12.8b | 0.7937 | 0.8108 | 0.8037 | 0.8369 |
| Llama-2-Ko-7b 20B | 0.7388 | 0.7626 | 0.7808 | 0.7979 |
| Llama-2-Ko-7b 40B | 0.7436 | 0.7927 | 0.8037 | 0.8259 |
| KO-platypus2-7B-EX | 0.7509 | 0.7899 | 0.8029 | 0.8290 |
| KoT-platypus2-7B (ours) | 0.7517 | 0.7868 | 0.8009 | 0.8239 |

Natural Language Inference (NLI)

HellaSwag (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| Polyglot-ko-1.3b | 0.5247 | 0.5260 | 0.5278 | 0.5427 |
| Polyglot-ko-3.8b | 0.5707 | 0.5830 | 0.5670 | 0.5787 |
| Polyglot-ko-5.8b | 0.5976 | 0.5998 | 0.5979 | 0.6208 |
| Polyglot-ko-12.8b | 0.5954 | 0.6306 | 0.6098 | 0.6118 |
| Llama-2-Ko-7b 20B | 0.4518 | 0.4668 | 0.4726 | 0.4828 |
| Llama-2-Ko-7b 40B | 0.4562 | 0.4657 | 0.4698 | 0.4774 |
| KO-platypus2-7B-EX | 0.4571 | 0.4461 | 0.4371 | 0.4525 |
| KoT-platypus2-7B (ours) | 0.4432 | 0.4382 | 0.4550 | 0.4534 |

Question Answering (QA)

BoolQ (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| Polyglot-ko-1.3b | 0.3552 | 0.4751 | 0.4109 | 0.4038 |
| Polyglot-ko-3.8b | 0.4320 | 0.5263 | 0.4930 | 0.4038 |
| Polyglot-ko-5.8b | 0.4356 | 0.5698 | 0.5187 | 0.5236 |
| Polyglot-ko-12.8b | 0.4818 | 0.6041 | 0.6289 | 0.6448 |
| Llama-2-Ko-7b 20B | 0.3607 | 0.6797 | 0.6801 | 0.6622 |
| Llama-2-Ko-7b 40B | 0.5786 | 0.6977 | 0.7084 | 0.7144 |
| KO-platypus2-7B-EX | 0.6028 | 0.6979 | 0.7016 | 0.6988 |
| KoT-platypus2-7B (ours) | 0.6142 | 0.6757 | 0.6839 | 0.6878 |

Classification

SentiNeg (F1)

| Model | 0-shot | 5-shot | 10-shot | 50-shot |
| --- | --- | --- | --- | --- |
| Polyglot-ko-1.3b | 0.6790 | 0.6257 | 0.5514 | 0.7851 |
| Polyglot-ko-3.8b | 0.4858 | 0.7950 | 0.7320 | 0.7851 |
| Polyglot-ko-5.8b | 0.3394 | 0.8841 | 0.8808 | 0.9521 |
| Polyglot-ko-12.8b | 0.9117 | 0.9015 | 0.9345 | 0.9723 |
| Llama-2-Ko-7b 20B | 0.4855 | 0.8295 | 0.8711 | 0.8513 |
| Llama-2-Ko-7b 40B | 0.4594 | 0.7611 | 0.7276 | 0.9370 |
| KO-platypus2-7B-EX | 0.5821 | 0.7653 | 0.7991 | 0.8643 |
| KoT-platypus2-7B (ours) | 0.6127 | 0.7199 | 0.7531 | 0.8381 |
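
For reference, scoring with the polyglot branch of EleutherAI's lm-evaluation-harness roughly follows the sketch below. The kobest_* task names and the generic "gpt2" HF causal-LM adapter are assumptions about that branch; check its task registry before running:

```python
from lm_eval import evaluator

# Sketch only: evaluate on the Korean KoBEST tasks reported above.
# ASSUMPTION: the polyglot branch exposes these kobest_* task names.
results = evaluator.simple_evaluate(
    model="gpt2",  # generic HF causal-LM adapter in older harness versions
    model_args="pretrained=kyujinpy/KoT-platypus2-7B",
    tasks=["kobest_copa", "kobest_hellaswag", "kobest_boolq", "kobest_sentineg"],
    num_fewshot=5,  # the tables also report 0-, 10-, and 50-shot
)
print(results["results"])
```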

Implementation Code

```python
### KoT-platypus2
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

# Load the model in half precision and spread it across available devices.
repo = "kyujinpy/KoT-platypus2-7B"
model = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(repo)
```
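
Continuing from the loading snippet above, a short generation example; the Korean prompt and the sampling settings here are illustrative choices, not from the original card:

```python
# Ask a question and decode the model's reply.
prompt = "지구의 대기는 주로 어떤 기체로 이루어져 있나요?"  # illustrative question
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

with torch.no_grad():
    output_ids = model.generate(
        **inputs,
        max_new_tokens=256,
        do_sample=True,
        temperature=0.7,
        top_p=0.9,
    )
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```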

Readme format: beomi/llama-2-ko-7b

