---
language:
- ko
library_name: transformers
pipeline_tag: text-generation
---

**Model Developers** Hyunseok Lee, Taeyoung Kim (KAIST ALIN Lab, OMNIOUS.AI)

**Input** Models input text only.

**Output** Models generate text only.

**Model Architecture** ko-ref-llama2-7b is an auto-regressive language model based on the LLaMA-2 transformer architecture.

**Base Model** Llama-2-7B

**Training Dataset** Open Korean datasets.

**Training Objective** The model was trained on a Korean corpus with a causal language modeling objective.

# [Open LLM Leaderboard Evaluation Results](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)
Detailed results can be found [here](https://huggingface.co/datasets/open-llm-leaderboard/details_hyunseoki__ko-ref-llama2-7b)

| Metric                | Value |
|-----------------------|-------|
| Avg.                  | 38.36 |
| ARC (25-shot)         | 42.66 |
| HellaSwag (10-shot)   | 66.58 |
| MMLU (5-shot)         | 30.41 |
| TruthfulQA (0-shot)   | 38.62 |
| Winogrande (5-shot)   | 66.22 |
| GSM8K (5-shot)        | 0.0   |
| DROP (3-shot)         | 24.05 |
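Since the card declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal usage sketch follows. The repository id `hyunseoki/ko-ref-llama2-7b` is an assumption inferred from the leaderboard results link above, and the Korean prompt is illustrative only.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, inferred from the leaderboard details link above.
model_id = "hyunseoki/ko-ref-llama2-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # half precision so the 7B model fits on a single GPU
    device_map="auto",
)

prompt = "한국의 수도는"  # "The capital of Korea is"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Greedy decoding (`do_sample=False`) is used here for reproducible output; sampling parameters such as `temperature` and `top_p` can be passed to `generate` for more varied completions.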