---
datasets:
  - gozfarb/ShareGPT_Vicuna_unfiltered
---

# Convert tools

https://github.com/practicaldreamer/vicuna_to_alpaca
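For reference, here is a minimal sketch of roughly what that conversion does, assuming the ShareGPT layout (a list of conversations made of `from`/`value` turns). The file names are placeholders, and the linked repo is the real implementation:

```python
# Minimal sketch of a ShareGPT/Vicuna -> Alpaca conversion, assuming the
# ShareGPT layout (a list of conversations made of {"from", "value"} turns).
# File names are placeholders; the linked repo is the real implementation.
import json

def convert(sharegpt_path: str, alpaca_path: str) -> None:
    with open(sharegpt_path, "r", encoding="utf-8") as f:
        conversations = json.load(f)

    records = []
    for conv in conversations:
        turns = conv.get("conversations", [])
        # Pair each human turn with the assistant reply that follows it.
        for human, reply in zip(turns, turns[1:]):
            if human.get("from") == "human" and reply.get("from") == "gpt":
                records.append({
                    "instruction": human["value"],
                    "input": "",
                    "output": reply["value"],
                })

    with open(alpaca_path, "w", encoding="utf-8") as f:
        json.dump(records, f, indent=2, ensure_ascii=False)

convert("ShareGPT_unfiltered.json", "alpaca_format.json")
```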

# Training tool

https://github.com/oobabooga/text-generation-webui

ATM I'm using the 2023.05.04v0 version of the dataset and training at full context.
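The webui's training tab drives a PEFT LoRA fine-tune under the hood. Here is a hedged sketch of a comparable setup; the rank, alpha, and target modules are illustrative assumptions, not the exact settings used for this run:

```python
# Hedged sketch of a PEFT LoRA setup comparable to what the webui's training
# tab drives. The rank, alpha, and target modules below are illustrative
# assumptions, not the exact settings used for this run.
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "Neko-Institute-of-Science/LLaMA-30B-HF",
    load_in_8bit=True,   # matches the 8-bit loading used at inference below
    device_map="auto",
)

lora_config = LoraConfig(
    r=8,                                  # illustrative rank
    lora_alpha=16,
    target_modules=["q_proj", "v_proj"],  # LLaMA attention projections
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

Full context for LLaMA means 2048-token sequences, which is a big part of why 30B takes so long to train.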

# Notes

So I'm only training 1 epoch, since full-context 30B takes a long time to train. My 1 epoch will take me 8 days lol, but luckily the LoRA already feels fully functional at epoch 1, as shown on my 13B one. Also, I will be uploading checkpoints almost every day.

# How to test?

  1. Download LLaMA-30B-HF: https://huggingface.co/Neko-Institute-of-Science/LLaMA-30B-HF
  2. Replace special_tokens_map.json and tokenizer_config.json with the ones from this repo.
  3. Rename the LLaMA-30B-HF folder to vicuna-30b. (Steps 1-3 can be scripted; see the sketch after this list.)
  4. Load ooba: `python server.py --listen --model vicuna-30b --load-in-8bit --chat --lora checkpoint-xxxx`
  5. Set Instruct mode to Vicuna-v1 (it will load Vicuna-v0 by default).
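
Steps 1-3 can be scripted with huggingface_hub. A hedged sketch, where `REPO_ID` stands in for this repo's id and the local path is an assumption:

```python
# Hedged sketch automating steps 1-3 with huggingface_hub. REPO_ID stands in
# for this model card's repo id, and the local path is an assumption.
import shutil
from huggingface_hub import hf_hub_download, snapshot_download

# Steps 1 and 3: download the base weights straight into the expected folder name.
model_dir = snapshot_download(
    "Neko-Institute-of-Science/LLaMA-30B-HF",
    local_dir="vicuna-30b",
)

# Step 2: overwrite the tokenizer configs with the ones from this repo.
for name in ("special_tokens_map.json", "tokenizer_config.json"):
    src = hf_hub_download("REPO_ID", name)  # REPO_ID: placeholder for this repo
    shutil.copy(src, f"{model_dir}/{name}")
```

Step 4 then launches the webui against that folder, and step 5 is set in the chat UI.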

# Want to see it training?

https://wandb.ai/neko-science/VicUnLocked/runs/vx8yzwi7