---
license: apache-2.0
datasets:
- silk-road/ChatHaruhi-Expand-118K
- silk-road/ChatHaruhi_NovelWriting
pipeline_tag: text-generation
---

This script fine-tunes and tests the Qwen 7B model to give it role-playing capability.

Project link: https://github.com/LC1332/Chat-Haruhi-Suzumiya

- The 118K training data was collected by [李鲁鲁](https://github.com/LC1332).

- The model was trained by [豆角](https://github.com/goodnessSZW).

- The Qwen inference code was written by [米唯实](https://github.com/hhhwmws0117) and integrated into ChatHaruhi; he currently maintains this model and handles bug fixes.

- 李鲁鲁 wrote the prompt-assembly functions inside ChatHaruhi.


A Harry Potter test can be found at https://github.com/LC1332/Chat-Haruhi-Suzumiya/blob/main/notebook/Harry_Potter_test_on_Qwen7B.ipynb

Usage

Load the model and tokenizer:

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

# trust_remote_code is required because Qwen ships custom model code
tokenizer = AutoTokenizer.from_pretrained("silk-road/ChatHaruhi_RolePlaying_qwen_7b", trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained("silk-road/ChatHaruhi_RolePlaying_qwen_7b", device_map="auto", trust_remote_code=True)
model = model.eval()
```
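
Optionally, you can check that the model responds by calling Qwen's `model.chat` interface directly (the same call used with ChatHaruhi below); a minimal sketch, with an illustrative prompt:

```python
# Minimal smoke test: ask the raw model something through Qwen's chat interface.
response, history = model.chat(tokenizer, "你好,请简单介绍一下你自己。", history=[])
print(response)
```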

For details, see this notebook: https://github.com/LC1332/Chat-Haruhi-Suzumiya/blob/main/notebook/ChatHaruhi_x_Qwen7B.ipynb

```python
from ChatHaruhi import ChatHaruhi

# Build a chatbot for the role 'haruhi'; max_len_story limits the length of the retrieved story context
chatbot = ChatHaruhi(role_name='haruhi', max_len_story=1000)

# Assemble the full role-playing prompt for a user utterance spoken by '阿虚'
prompt = chatbot.generate_prompt(role='阿虚', text='我看新一年的棒球比赛要开始了!我们要去参加吗?')

# Generate with Qwen; history is left empty because ChatHaruhi packs the dialogue into the prompt
response, history = model.chat(tokenizer, prompt, history=[])
print(response)

# Record the model's reply so it can be included in the next prompt
chatbot.append_response(response)
```
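
A follow-up turn reuses the same three calls; this is a minimal sketch (with an example utterance) that assumes the chatbot retains the earlier exchange recorded via `append_response`, so `generate_prompt` folds it into the next prompt:

```python
# Second turn: the previous exchange stored in the chatbot is included automatically.
prompt = chatbot.generate_prompt(role='阿虚', text='那我们什么时候出发?')
response, history = model.chat(tokenizer, prompt, history=[])
print(response)
chatbot.append_response(response)
```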

The following role-loading formats are currently supported (see the sketch below):

- role_name
- role_from_hf
- role_from_jsonl
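
A hedged sketch of the three loading styles, assuming role_name, role_from_hf, and role_from_jsonl are keyword arguments of the ChatHaruhi constructor as in the Chat-Haruhi-Suzumiya repository; the dataset id and jsonl path below are placeholders:

```python
from ChatHaruhi import ChatHaruhi

# 1) Built-in role shipped with ChatHaruhi, referenced by name
chatbot = ChatHaruhi(role_name='haruhi', max_len_story=1000)

# 2) Role packaged as a Hugging Face dataset (placeholder id, replace with a real one)
chatbot = ChatHaruhi(role_from_hf='your-org/your-role-dataset', max_len_story=1000)

# 3) Role defined in a local .jsonl file (placeholder path)
chatbot = ChatHaruhi(role_from_jsonl='your_role.jsonl', max_len_story=1000)
```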