---
language:
- en
- zh
- ja
- yue
tags:
- causal-lm
- llama
license: cc-by-nc-sa-4.0
datasets:
- OpenAssistant/oasst1
- nomic-ai/gpt4all_prompt_generations
- tatsu-lab/alpaca
---
# StableVicuna-13B

## Model Description
StableVicuna-13B is a Vicuna-13B v0 model fine-tuned using reinforcement learning from human feedback (RLHF) via Proximal Policy Optimization (PPO) on various conversational and instructional datasets.
## Apply Delta Weights
StableVicuna-13B cannot be used from the `CarperAI/stable-vicuna-13b-delta` weights alone. To obtain the correct model, you must add the delta back onto the original LLaMA 13B weights. We provide the `apply_delta.py` script to automate the conversion, which you can run as:
```bash
python3 apply_delta.py --base /path/to/model_weights/llama-13b --target stable-vicuna-13b --delta CarperAI/stable-vicuna-13b-delta
```
## Usage
Once the delta weights are applied, you can get started chatting with the model using the `transformers` library. Following a suggestion from the Vicuna team for Vicuna v0, install this pinned version of `transformers`:

```bash
pip install git+https://github.com/huggingface/transformers@c612628045822f909020f7eb6784c79700813eda
```
```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("path/to/stable-vicuna-13b-applied")
model = AutoModelForCausalLM.from_pretrained("path/to/stable-vicuna-13b-applied")
model.half().cuda()

prompt = """\
### Human: Write a Python script for text classification using Transformers and PyTorch
### Assistant:\
"""

inputs = tokenizer(prompt, return_tensors='pt').to('cuda')
tokens = model.generate(
    **inputs,
    max_new_tokens=256,
    do_sample=True,
    temperature=1.0,
    top_p=1.0,
)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```
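The model expects the two-role prompt format used above. A small helper like the one below (purely illustrative, not part of the released code) can assemble a multi-turn prompt in the same `### Human:` / `### Assistant:` style:

```python
# Hypothetical helper: build a prompt in the "### Human:" / "### Assistant:"
# format from a list of (human, assistant) turns. The last assistant slot is
# left empty so the model generates the reply.
def build_prompt(turns):
    parts = []
    for human, assistant in turns:
        parts.append(f"### Human: {human}")
        parts.append(f"### Assistant: {assistant}" if assistant else "### Assistant:")
    return "\n".join(parts)

prompt = build_prompt([
    ("What is the capital of France?", "The capital of France is Paris."),
    ("And of Japan?", None),  # the model completes this turn
])
```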
## Model Details
- Trained by: Duy Phung of CarperAI
- Model type: StableVicuna-13B is an auto-regressive language model based on the LLaMA transformer architecture.
- Language(s): English
- Library: trlX
- License for delta weights: CC-BY-NC-SA-4.0
- Note: License for the base LLaMA model's weights is Meta's non-commercial bespoke license.
- Contact: For questions and comments about the model, visit the CarperAI and StableFoundation Discord servers.
| Hyperparameter  | Value |
|-----------------|-------|
| Parameters      | 13B   |
| Hidden size     | 5120  |
| Layers          | 40    |
| Attention heads | 40    |
## Training

### Training Dataset
StableVicuna-13B is fine-tuned on a mix of three datasets: OpenAssistant Conversations Dataset (OASST1), a human-generated, human-annotated assistant-style conversation corpus consisting of 161,443 messages distributed across 66,497 conversation trees in 35 different languages; GPT4All Prompt Generations, a dataset of 400k prompts and responses generated by GPT-3.5-Turbo; and Alpaca, a dataset of 52,000 instructions and demonstrations generated by OpenAI's text-davinci-003 engine.
The reward model used during RLHF was also trained on the OpenAssistant Conversations Dataset (OASST1), along with two other datasets: Anthropic HH-RLHF, a dataset of preferences about AI assistant helpfulness and harmlessness; and the Stanford Human Preferences Dataset (SHP), a dataset of 385K collective human preferences over responses to questions/instructions in 18 different subject areas, from cooking to legal advice.
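For reference, all three fine-tuning datasets are available on the Hugging Face Hub under the IDs listed in this card's metadata. The snippet below is a minimal sketch of pulling them with the `datasets` library; it does not reproduce the exact splits, filtering, or preprocessing used to train StableVicuna-13B.

```python
# Minimal sketch: load the three fine-tuning datasets from the Hugging Face Hub.
from datasets import load_dataset

oasst1 = load_dataset("OpenAssistant/oasst1")                  # conversation trees as message rows
gpt4all = load_dataset("nomic-ai/gpt4all_prompt_generations")  # prompt/response pairs
alpaca = load_dataset("tatsu-lab/alpaca")                      # instruction-following demonstrations

for name, ds in [("oasst1", oasst1), ("gpt4all", gpt4all), ("alpaca", alpaca)]:
    print(name, ds)
```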
### Training Procedure
`CarperAI/stable-vicuna-13b-delta` was trained using PPO as implemented in trlX with the following configuration:
| Hyperparameter    | Value |
|-------------------|-------|
| num_rollouts      | 128   |
| chunk_size        | 16    |
| ppo_epochs        | 4     |
| init_kl_coef      | 0.1   |
| target            | 6     |
| horizon           | 10000 |
| gamma             | 1     |
| lam               | 0.95  |
| cliprange         | 0.2   |
| cliprange_value   | 0.2   |
| vf_coef           | 1.0   |
| scale_reward      | None  |
| cliprange_reward  | 10    |
| generation_kwargs |       |
| max_length        | 512   |
| min_length        | 48    |
| top_k             | 0.0   |
| top_p             | 1.0   |
| do_sample         | True  |
| temperature       | 1.0   |
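For convenience, the dictionary below collects the same values in a machine-readable form. The nesting of `generation_kwargs` under the PPO settings is illustrative and does not claim to match trlX's exact config schema.

```python
# Illustrative only: the PPO hyperparameters from the table above, gathered in
# one place for scripting. This is not the full training configuration (model,
# tokenizer, optimizer, and scheduler settings are omitted).
ppo_hyperparameters = {
    "num_rollouts": 128,
    "chunk_size": 16,
    "ppo_epochs": 4,
    "init_kl_coef": 0.1,
    "target": 6,
    "horizon": 10000,
    "gamma": 1,
    "lam": 0.95,
    "cliprange": 0.2,
    "cliprange_value": 0.2,
    "vf_coef": 1.0,
    "scale_reward": None,
    "cliprange_reward": 10,
    "generation_kwargs": {
        "max_length": 512,
        "min_length": 48,
        "top_k": 0.0,
        "top_p": 1.0,
        "do_sample": True,
        "temperature": 1.0,
    },
}
```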
## Use and Limitations

### Intended Use
This model is intended to be used for text generation with a focus on conversational tasks. Users may further fine-tune the model on their own data to improve the model's performance on their specific tasks in accordance with the non-commercial license.
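One straightforward way to do such fine-tuning is with the plain `transformers` Trainer. The sketch below assumes a local JSON file (here called `my_chats.json`) containing a single `text` field already formatted in the `### Human:` / `### Assistant:` style; the file name and hyperparameters are illustrative, not recommendations from this card.

```python
# A minimal, assumption-laden sketch of further fine-tuning on your own data.
# In practice a 13B model will need multi-GPU or parameter-efficient training;
# this only shows the basic data flow.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)

model_path = "path/to/stable-vicuna-13b-applied"
tokenizer = AutoTokenizer.from_pretrained(model_path)
if tokenizer.pad_token is None:
    tokenizer.pad_token = tokenizer.eos_token  # LLaMA tokenizers ship without a pad token
model = AutoModelForCausalLM.from_pretrained(model_path)

# "my_chats.json" is a hypothetical file of {"text": "..."} records.
dataset = load_dataset("json", data_files="my_chats.json")["train"]

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

tokenized = dataset.map(tokenize, batched=True, remove_columns=dataset.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(
        output_dir="stable-vicuna-13b-finetuned",
        per_device_train_batch_size=1,
        gradient_accumulation_steps=8,
        num_train_epochs=1,
        learning_rate=1e-5,
        bf16=True,
    ),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```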
### Limitations and bias
The base LLaMA model is trained on various data, some of which may contain offensive, harmful, and biased content that can lead to toxic behavior; see Section 5.1 of the LLaMA paper. We have not performed any studies to determine how fine-tuning on the aforementioned datasets affects the model's behavior and toxicity. Do not treat chat responses from this model as a substitute for human judgment or as a source of truth. Please use responsibly.
## Acknowledgements
This work would not have been possible without the support of Stability AI.
## Citations
```bibtex
@article{touvron2023llama,
title={LLaMA: Open and Efficient Foundation Language Models},
author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume},
journal={arXiv preprint arXiv:2302.13971},
year={2023}
}
@misc{vicuna2023,
title = {Vicuna: An Open-Source Chatbot Impressing GPT-4 with 90%* ChatGPT Quality},
url = {https://vicuna.lmsys.org},
author = {Chiang, Wei-Lin and Li, Zhuohan and Lin, Zi and Sheng, Ying and Wu, Zhanghao and Zhang, Hao and Zheng, Lianmin and Zhuang, Siyuan and Zhuang, Yonghao and Gonzalez, Joseph E. and Stoica, Ion and Xing, Eric P.},
month = {March},
year = {2023}
}
@misc{gpt4all,
author = {Yuvanesh Anand and Zach Nussbaum and Brandon Duderstadt and Benjamin Schmidt and Andriy Mulyar},
title = {GPT4All: Training an Assistant-style Chatbot with Large Scale Data Distillation from GPT-3.5-Turbo},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/nomic-ai/gpt4all}},
}
@misc{alpaca,
author = {Rohan Taori and Ishaan Gulrajani and Tianyi Zhang and Yann Dubois and Xuechen Li and Carlos Guestrin and Percy Liang and Tatsunori B. Hashimoto },
title = {Stanford Alpaca: An Instruction-following LLaMA model},
year = {2023},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/tatsu-lab/stanford_alpaca}},
}
@software{leandro_von_werra_2023_7790115,
author = {Leandro von Werra and
Alex Havrilla and
Max reciprocated and
Jonathan Tow and
Aman cat-state and
Duy V. Phung and
Louis Castricato and
Shahbuland Matiana and
Alan and
Ayush Thakur and
Alexey Bukhtiyarov and
aaronrmm and
Fabrizio Milo and
Daniel and
Daniel King and
Dong Shin and
Ethan Kim and
Justin Wei and
Manuel Romero and
Nicky Pochinkov and
Omar Sanseviero and
Reshinth Adithyan and
Sherman Siu and
Thomas Simonini and
Vladimir Blagojevic and
Xu Song and
Zack Witten and
alexandremuzio and
crumb},
title = {{CarperAI/trlx: v0.6.0: LLaMa (Alpaca), Benchmark
Util, T5 ILQL, Tests}},
month = mar,
year = 2023,
publisher = {Zenodo},
version = {v0.6.0},
doi = {10.5281/zenodo.7790115},
url = {https://doi.org/10.5281/zenodo.7790115}
}
```