Neko-Institute-of-Science
committed on
Commit • f92bcb1
1 Parent(s): bdc4345
Update usage instructions.
So in my tests I believe you do not have to overwrite your config files, unless you can't choose your instruct template.
README.md CHANGED
@@ -16,12 +16,10 @@ This 1 epoch will take me 8 days lol but luckily these LoRA feels fully functina
 Also I will be uploading checkpoints almost every day. I could train another epoch if there's enough demand for it.
 
 # How to test?
-1. Download LLaMA-30B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
-2. 
-3. 
-4. 
-5. Load ooba: ```python server.py --listen --model vicuna-30b --load-in-8bit --chat --lora checkpoint-xxxx```
-6. Instruct mode: Vicuna-v1, ooba will load Vicuna-v0 by defualt
+1. Download LLaMA-30B-HF if you have not already: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
+2. Download the checkpoint-xxxx folder you want and put it in the loras folder.
+3. Load ooba: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora checkpoint-xxxx```
+4. Select Instruct mode and choose the Vicuna-v1.1 template.
 
 
 # Want to see it Training?
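The updated steps can be sketched as a short shell session. The repo URL and the `server.py` flags come from the README itself; the `models/` and `loras/` folder layout is assumed from the usual text-generation-webui setup, and fetching with git-lfs is likewise an assumption of this sketch, not something the commit specifies.

```shell
# Sketch of the steps above. Replace checkpoint-xxxx with the actual
# checkpoint folder name you downloaded; the models/ and loras/ paths
# are the assumed text-generation-webui layout.
MODEL=LLaMA-30B-HF
LORA=checkpoint-xxxx

# 1. Base model (skip if you already have it; needs git-lfs):
# git clone https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF "models/$MODEL"

# 2. Put the downloaded checkpoint folder under loras/:
# mv "$LORA" "loras/$LORA"

# 3. Launch ooba with the LoRA applied. Step 4 (selecting Instruct mode
#    and the Vicuna-v1.1 template) then happens inside the web UI:
echo python server.py --listen --model "$MODEL" --load-in-8bit --chat --lora "$LORA"
```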