Update README.md
README.md (changed):
```
python3 embedding_convert.py \
    ...
    --meta_llama_pth_file /path_to_LLaMA/llama-7b/consolidated.00.pth
```

After the BiLLa-7B-SFT model weights are restored, you can test-run the model with the following code:

```python
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

# ... (model loading, prompt encoding, and generation elided in this excerpt) ...

output_ids = output_ids[0][len(input_ids[0]):]
outputs = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
print(outputs)
```
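The `output_ids[0][len(input_ids[0]):]` slice in the snippet drops the echoed prompt: `generate` in transformers returns the prompt tokens followed by the newly generated ones, so skipping the first `len(input_ids[0])` positions keeps only the model's reply. A toy illustration with plain lists (no model required; the token ids are made up):

```python
# Hypothetical token ids: generate() output echoes the prompt, then appends new tokens.
input_ids = [[101, 102, 103]]            # encoded prompt (batch of 1)
output_ids = [[101, 102, 103, 7, 8, 9]]  # prompt + generated continuation

# Skip the echoed prompt tokens to keep only the newly generated ones.
new_tokens = output_ids[0][len(input_ids[0]):]
print(new_tokens)  # [7, 8, 9]
```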
The model input to BiLLa-7B-SFT must be constructed in the following format (note that `Assistant:` must be followed by a single space):

```
Human: [Your question]
Assistant: 
```
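A prompt in this format can be assembled with a small helper. This is an illustrative sketch, not part of the repository; the `build_prompt` name is made up, and it assumes a single newline between the two lines:

```python
def build_prompt(question: str) -> str:
    # Per the format above: one trailing space after "Assistant:" is required.
    return f"Human: {question}\nAssistant: "

prompt = build_prompt("Which is taller, Mount Everest or K2?")
print(repr(prompt))  # 'Human: Which is taller, Mount Everest or K2?\nAssistant: '
```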