YuLan-Chat: An Open-Source Bilingual Chatbot
YuLan-Chat is a series of chat-oriented large language models developed jointly by the faculty and students of the Gaoling School of Artificial Intelligence, Renmin University of China (the name "YuLan" comes from the Yulan Magnolia, the campus flower of RUC). The latest version is built on LLaMA-2 with continued Chinese-English bilingual pre-training and instruction tuning. This version has the following technical features:
- The model's language ability has been improved through continued pre-training on high-quality Chinese-English bilingual data.
- To better support Chinese and longer inputs and outputs, we expand the original vocabulary with Chinese words and extend the maximum length of LLaMA-2; the model now supports an 8K context (see the vocabulary-extension sketch after this list).
- To better activate the bilingual instruction-following capability, we construct high-quality bilingual instruction data and perform multi-stage instruction tuning.
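As a rough illustration of the vocabulary-extension step mentioned above, the sketch below adds new tokens to a LLaMA tokenizer and resizes the model's embedding matrix to match. The tokens and paths are placeholders, not the actual 51,190-entry YuLan vocabulary, and in practice the new embedding rows are then trained during continued pre-training.

from transformers import LlamaTokenizer, LlamaForCausalLM

# Placeholder paths; the released YuLan checkpoints already ship with the
# extended vocabulary, so this only illustrates the technique.
tokenizer = LlamaTokenizer.from_pretrained("path/to/llama-2-13b")
model = LlamaForCausalLM.from_pretrained("path/to/llama-2-13b")

new_tokens = ["玉兰", "人工智能"]  # placeholder Chinese tokens
tokenizer.add_tokens(new_tokens)

# New embedding rows start randomly initialized and must be learned
# during continued pre-training on bilingual data.
model.resize_token_embeddings(len(tokenizer))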
Model Zoo
Due to license limitations, for models based on LLaMA we only provide the weight difference from the original checkpoints; models based on LLaMA-2 can be used directly. Please check the Usage section for more details.
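For the LLaMA-based releases, recovering usable weights amounts to adding the published difference to the original checkpoint element-wise. The following is a minimal sketch, assuming both checkpoints load with transformers and share parameter names and shapes; the paths are placeholders, and the project's own recovery instructions should be followed for the exact procedure.

import torch
from transformers import LlamaForCausalLM

# Placeholder paths: the original LLaMA weights and the published difference.
base = LlamaForCausalLM.from_pretrained("path/to/original-llama", torch_dtype=torch.float16)
delta = LlamaForCausalLM.from_pretrained("path/to/yulan-delta", torch_dtype=torch.float16)

delta_state = delta.state_dict()
with torch.no_grad():
    for name, param in base.state_dict().items():
        # full weight = original + difference, element-wise; tensors whose
        # shape changed with the extended vocabulary would need alignment
        # first, which this sketch omits.
        param.add_(delta_state[name])

base.save_pretrained("path/to/yulan-recovered")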
Limitations: Despite our efforts to reduce potential safety issues during the model's usage and to encourage the generation of text that aligns with ethical and legal requirements, the model is based on probabilistic generation and may still produce unexpected outputs. For instance, generated responses may contain biases, discrimination, or other harmful content. Please do not propagate such content. We assume no responsibility for any consequences resulting from the dissemination of harmful information.
Model | Backbone | Extended Vocab | Extended Length | Continued PT | SFT | Release Date |
---|---|---|---|---|---|---|
YuLan-Chat-2-13B | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ✅ | 2023.8.2 |
YuLan-LLaMA-2-13B | LLaMA2-13B | ✅ 51,190 | ✅ 8,192 | ✅ | ❌ | 2023.8.2 |
YuLan-Chat-1-65B-v2 | LLaMA-65B | ✅ 51,190 | ❌ 2,048 | ✅ | ✅ | 2023.8.2 |
YuLan-Chat-1-13B-v1 | LLaMA-13B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
YuLan-Chat-1-65B-v1 | LLaMA-65B | ❌ 32,000 | ❌ 2,048 | ❌ | ✅ | 2023.6.8 |
Evaluation
We evaluate our YuLan-Chat model on several Chinese and English benchmarks. The evaluation results are shown below.
MMLU
MMLU (Massive Multitask Language Understanding) is a benchmark designed to measure knowledge acquired during pretraining by evaluating models exclusively in zero-shot and few-shot settings.
Model | STEM | Social Science | Humanities | Others | Avg. |
---|---|---|---|---|---|
YuLan-Chat-1-13B-v1 | 39.6 | 57.8 | 42.6 | 57.6 | 49.4 |
YuLan-Chat-1-65B-v1 | 49.2 | 71.7 | 57.7 | 66.7 | 61.3 |
YuLan-Chat-1-65B-v2 | 46.3 | 67.9 | 56.9 | 63.9 | 58.7 |
LLaMA-2-13B | 44.6 | 64.2 | 53.9 | 62.2 | 56.2 |
FlagAlpha/Llama2-Chinese-13b-Chat | 44.4 | 63.2 | 51.6 | 60.6 | 55.0 |
Linly-AI/Chinese-LLaMA-2-13B-hf | 43.6 | 62.7 | 49.8 | 61.6 | 54.4 |
YuLan-LLaMA-2-13B | 42.9 | 61.5 | 50.4 | 58.6 | 53.4 |
YuLan-Chat-2-13B | 45.3 | 66.7 | 53.8 | 62.8 | 57.2 |
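For context, multiple-choice benchmarks such as MMLU are commonly scored by prompting for the answer letter and comparing next-token likelihoods. The sketch below is a generic illustration of that scheme, not the exact harness used to produce the numbers above.

import torch

def score_choice(model, tokenizer, question, choices):
    # Build a standard multiple-choice prompt ending in "Answer:".
    prompt = question + "\n" + "\n".join(
        f"{letter}. {text}" for letter, text in zip("ABCD", choices)
    ) + "\nAnswer:"
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]  # next-token logits
    # Compare the logits of the four candidate answer letters.
    letter_ids = [tokenizer.encode(l, add_special_tokens=False)[-1] for l in "ABCD"]
    return "ABCD"[logits[letter_ids].argmax().item()]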
C-Eval
C-Eval is a comprehensive Chinese evaluation suite for foundation models.
Model | STEM | Social Science | Humanities | Others | Avg. | Avg. (Hard) |
---|---|---|---|---|---|---|
YuLan-Chat-1-13B-v1 | 30.2 | 37.4 | 31.9 | 30.7 | 32.0 | 25.7 |
YuLan-Chat-1-65B-v1 | 37.7 | 46.1 | 36.8 | 38.0 | 39.2 | 31.1 |
YuLan-Chat-1-65B-v2 | 39.9 | 55.9 | 47.7 | 43.7 | 45.4 | 31.4 |
LLaMA-2-13B | 36.9 | 43.2 | 37.6 | 36.6 | 38.2 | 32.0 |
FlagAlpha/Llama2-Chinese-13b-Chat | 36.8 | 44.5 | 36.3 | 36.5 | 38.1 | 30.9 |
Linly-AI/Chinese-LLaMA-2-13B-hf | 33.7 | 44.8 | 36.6 | 36.5 | 37.0 | 27.7 |
YuLan-LLaMA-2-13B | 35.3 | 46.4 | 41.9 | 37.6 | 39.3 | 28.6 |
YuLan-Chat-2-13B | 38.9 | 49.7 | 45.0 | 40.8 | 42.6 | 32.2 |
AGI-Eval-Gaokao
AGI-Eval is a human-centric benchmark specifically designed to evaluate the general abilities of foundation models in tasks pertinent to human cognition and problem-solving. We use the sub-branch Chinese-Gaokao for evaluation.
Model | Avg. | Chinese | English | Geography | History | Biology | Chemistry | Physics | Math-QA | Math-Cloze |
---|---|---|---|---|---|---|---|---|---|---|
YuLan-Chat-1-13B-v1 | 24.3 | 22.4 | 60.1 | 27.6 | 25.5 | 21.9 | 30.0 | 8.0 | 21.1 | 1.7 |
YuLan-Chat-1-65B-v1 | 29.3 | 25.2 | 79.1 | 37.2 | 36.6 | 28.6 | 24.2 | 11.0 | 21.9 | 0.0 |
YuLan-Chat-1-65B-v2 | 37.9 | 31.4 | 80.4 | 50.8 | 56.6 | 33.3 | 29.0 | 32.0 | 24.4 | 0.8 |
LLaMA-2-13B | 32.7 | 27.2 | 72.2 | 36.2 | 43.0 | 26.2 | 32.4 | 30.0 | 26.2 | 0.9 |
FlagAlpha/Llama2-Chinese-13b-Chat | 31.6 | 26.4 | 70.6 | 35.2 | 38.7 | 28.1 | 28.0 | 29.5 | 25.6 | 2.5 |
Linly-AI/Chinese-LLaMA-2-13B-hf | 31.1 | 22.8 | 74.8 | 42.2 | 37.9 | 24.3 | 28.0 | 23.0 | 26.5 | 0.0 |
YuLan-LLaMA-2-13B | 34.2 | 25.2 | 70.3 | 43.2 | 48.5 | 30.0 | 29.5 | 31.0 | 28.5 | 1.7 |
YuLan-Chat-2-13B | 39.5 | 37.0 | 85.3 | 46.7 | 51.9 | 43.8 | 38.2 | 29.0 | 23.1 | 0.9 |
Usage
Import from Hugging Face Transformers
As our model is trained based on LLaMA, it can be loaded in the same way as the original LLaMA.
>>> from transformers import LlamaTokenizer, LlamaForCausalLM
>>> tokenizer = LlamaTokenizer.from_pretrained("yulan-team/YuLan-Chat-2-13b")
>>> model = LlamaForCausalLM.from_pretrained("yulan-team/YuLan-Chat-2-13b").cuda()
>>> model = model.eval()
>>> input_text = "hello"
>>> prompt = "The following is a conversation between a human and an AI assistant namely YuLan, developed by GSAI, Renmin University of China. The AI assistant gives helpful, detailed, and polite answers to the user's questions.\n[|Human|]:{}\n[|AI|]:".format(input_text)
>>> inputs = tokenizer(prompt, return_tensors='pt', padding="longest", max_length=8192, truncation=True, return_attention_mask=True, add_special_tokens=True)
>>> kwargs = {'temperature': 0.8, 'top_p': 0.95, "top_k": 50, "repetition_penalty": 1.1, "no_repeat_ngram_size": 64, "max_length": 8192, "pad_token_id": tokenizer.bos_token_id, "eos_token_id": tokenizer.eos_token_id}
>>> outputs = model.generate(inputs['input_ids'].to(model.device), attention_mask=inputs['attention_mask'].to(model.device), do_sample=True, **kwargs)
>>> print(tokenizer.batch_decode(outputs, skip_special_tokens=True)[0][len(prompt):])
Hello! How can I assist you today?
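For multi-turn use, the same system prompt and [|Human|]:/[|AI|]: markers can be chained across turns. The helper below extrapolates a multi-turn layout from the single-turn template above; it is an assumption, not a documented format.

SYSTEM = ("The following is a conversation between a human and an AI assistant "
          "namely YuLan, developed by GSAI, Renmin University of China. The AI "
          "assistant gives helpful, detailed, and polite answers to the user's questions.")

def build_prompt(history, user_input):
    # history: list of (human, ai) pairs from completed turns.
    prompt = SYSTEM
    for human, ai in history:
        prompt += "\n[|Human|]:{}\n[|AI|]:{}".format(human, ai)
    prompt += "\n[|Human|]:{}\n[|AI|]:".format(user_input)
    return prompt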
License
YuLan-Chat uses the MIT License. All data and code in this project may only be used for academic purposes.
Contributors
Pre-training | Fine-tuning |
---|---|
Yutao Zhu (Lead), Kelong Mao, Wentong Chen, Yiding Sun, Yihan Wu, Qian Cao, Lei Zhang, Feng Wang, Qiangqiang Ren | Kun Zhou (Lead), Yushuo Chen, Zhipeng Chen, Lei Wang, Yupeng Hou, Xincheng Pang, Junyi Li, Yuhan Chen, Shufang Xie |
Reference
Please kindly cite our work if it helps you.
@misc{YuLan-Chat,
author = {YuLan-Team},
title = {YuLan-Chat: An Open-Source Bilingual Chatbot},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/RUC-GSAI/YuLan-Chat}},
}