---
license: cc-by-4.0
datasets:
- silk-road/ChatHaruhi-Expand-118K
language:
- zh
- en
pipeline_tag: text-generation
tags:
- text-generation-inference
---
This script fine-tunes and tests the Qwen 1.8B model, giving Qwen 1.8B role-playing capability.
- The 118K training dataset was collected by 李鲁鲁.
- The model was trained by [豆角](https://github.com/goodnessSZW).
- The Qwen inference code was written by 米唯实.
- 李鲁鲁 wrote the prompt-organization functions inside ChatHaruhi.
Usage

Load the model:
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
tokenizer = AutoTokenizer.from_pretrained("silk-road/Chat-Haruhi_qwen_1_8", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("silk-road/Chat-Haruhi_qwen_1_8", trust_remote_code=True).half().cuda()
model = model.eval()
```
For details, see this notebook: https://github.com/LC1332/Chat-Haruhi-Suzumiya/blob/main/notebook/ChatHaruhi_x_Qwen1_8B.ipynb
```python
from ChatHaruhi import ChatHaruhi
chatbot = ChatHaruhi( role_name = 'haruhi', max_len_story = 1000 )
prompt = chatbot.generate_prompt(role='阿虚', text = '我看新一年的棒球比赛要开始了!我们要去参加吗?')
response, history = model.chat(tokenizer, prompt, history=[])
print(response)
chatbot.append_response(response)
```
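The prompt-build / chat / record steps above can be wrapped into a small helper for multi-turn conversations. This is a minimal sketch, not part of the official ChatHaruhi API: `chat_turn` is a hypothetical name, and it only orchestrates the `chatbot` and `model` calls already shown in this card.

```python
def chat_turn(chatbot, model, tokenizer, role, text):
    """One role-play turn: build the prompt, query the model, record the reply.

    `chatbot` is a ChatHaruhi instance and `model`/`tokenizer` are the Qwen
    objects loaded above; this helper assumes exactly the interfaces used in
    the snippet above (generate_prompt, model.chat, append_response).
    """
    # ChatHaruhi assembles the role-play prompt (persona + retrieved stories).
    prompt = chatbot.generate_prompt(role=role, text=text)
    # Qwen's chat interface; ChatHaruhi keeps the story context in the prompt,
    # so an empty history is passed each turn, as in the example above.
    response, _history = model.chat(tokenizer, prompt, history=[])
    # Record the model's reply so it becomes part of the ongoing dialogue.
    chatbot.append_response(response)
    return response
```

Calling `chat_turn` repeatedly with new user lines then carries the dialogue forward through `chatbot`'s internal state.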
Currently, roles can be loaded in several formats:
- `role_name`
- `role_from_hf`
- `role_from_jsonl`
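The three loading modes can be sketched as alternative keyword arguments to the `ChatHaruhi` constructor. This is illustrative only: the placeholder paths (`user/repo/role`, `role.jsonl`) are assumptions, not real locations, and the exact argument semantics are defined by the ChatHaruhi library.

```python
# Illustrative mapping of the three role-loading modes to constructor kwargs.
# The values are placeholders; pick exactly one mode per ChatHaruhi instance,
# e.g. chatbot = ChatHaruhi(**role_loading_examples["role_name"])
role_loading_examples = {
    "role_name": dict(role_name="haruhi"),               # built-in role by name
    "role_from_hf": dict(role_from_hf="user/repo/role"), # role data on Hugging Face
    "role_from_jsonl": dict(role_from_jsonl="role.jsonl"),  # local JSONL file
}
```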