---
datasets:
- gozfarb/ShareGPT_Vicuna_unfiltered
---
# Conversion tool
https://github.com/practicaldreamer/vicuna_to_alpaca
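The linked tool converts the ShareGPT/Vicuna conversation format into Alpaca-style instruction/output pairs for training. A minimal sketch of that idea, assuming the common ShareGPT field names (`conversations`, `from`, `value`, `human`/`gpt`) — the repo's actual converter may differ in details:

```python
import json

def sharegpt_to_alpaca(conversations):
    """Pair each human turn with the assistant reply that follows it.

    Field names assume the common ShareGPT layout; adjust if your
    dataset dump differs.
    """
    pairs = []
    instruction = None
    for turn in conversations:
        if turn["from"] == "human":
            instruction = turn["value"]
        elif turn["from"] == "gpt" and instruction is not None:
            pairs.append({
                "instruction": instruction,
                "input": "",
                "output": turn["value"],
            })
            instruction = None
    return pairs

# Hypothetical two-turn conversation for illustration
example = [
    {"from": "human", "value": "What is a LoRA?"},
    {"from": "gpt", "value": "A low-rank adapter for fine-tuning."},
]
print(json.dumps(sharegpt_to_alpaca(example), indent=2))
```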

# Training tool
https://github.com/oobabooga/text-generation-webui

At the moment I'm using the 2023.05.04v0 version of the dataset and training at full context.

# Notes:
I will only be training for 1 epoch, since full-context 30b takes so long to train.
This 1 epoch will take me 8 days lol, but luckily the LoRA feels fully functional at epoch 1, as shown by my 13b one.
I will also be uploading checkpoints almost every day, and I could train another epoch if there's enough demand for it.

Update: Since I will not be training past 1 epoch, @Aeala is training for the full 3 epochs at https://huggingface.co/Aeala/VicUnlocked-alpaca-half-30b-LoRA, but it's half context if you care about that. @Aeala is also just about done.

# How to test?
1. Download LLaMA-30B-HF if you have not: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
2. Download the checkpoint-xxxx folder you want and put it in the loras folder.
3. Load ooba: ```python server.py --listen --model LLaMA-30B-HF --load-in-8bit --chat --lora checkpoint-xxxx```
4. Select instruct mode and choose the Vicuna-v1.1 template.


# Want to see it training?
https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7