uncensored
Neko-Institute-of-Science committed on
Commit
c4856c5
1 Parent(s): c47853c

will update checkpoints every day.

Files changed (1):
README.md +1 -0
README.md CHANGED
@@ -13,6 +13,7 @@ ATM I'm using 2023.05.04v0 of the dataset and training full context.
  # Notes:
  So I'm only training 1 epoch, as full-context 30b takes a long time to train.
  My 1 epoch will take me 8 days, but luckily the LoRA feels fully functional at epoch 1, as shown on my 13b one.
+ Also I will be uploading checkpoints almost every day.
 
  # How to test?
  1. Download LLaMA-30B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF