---
datasets:
- c-s-ale/alpaca-gpt4-data
- Open-Orca/OpenOrca
- Intel/orca_dpo_pairs
- allenai/ultrafeedback_binarized_cleaned
language:
- en
license: cc-by-nc-4.0
base_model:
- upstage/SOLAR-10.7B-v1.0
---
# **Meet 10.7B Solar: Elevating Performance with Upstage Depth UP Scaling!**
**(This model is a fine-tuned version of [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0) for single-turn conversation.)**
# **Introduction**
We introduce the first 10.7 billion (B) parameter model, SOLAR-10.7B. It is compact yet remarkably powerful, demonstrating state-of-the-art performance among models with fewer than 30B parameters.
SOLAR-10.7B is built with our Depth Up-Scaling technique: starting from the Llama 2 architecture, we depth-up-scale the network, integrate Mistral 7B weights into the upscaled layers, and then continue pre-training the entire model.
The depth-up-scaled SOLAR-10.7B performs remarkably well, outperforming models with up to 30B parameters and even surpassing the recent Mixtral 8x7B model. For detailed results, please refer to the evaluation table below.
SOLAR-10.7B is also an ideal base for fine-tuning, offering robustness and adaptability; simple instruction fine-tuning of the pre-trained model already yields significant performance improvements.
For full details of this model, please read our [paper](Coming Soon).
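As a rough illustration of the Depth Up-Scaling idea described above (not the actual training code), the sketch below duplicates a 32-layer Llama-style base model, trims an assumed 8 layers from the seam of each copy, and concatenates the copies into a 48-layer model that would then be continually pre-trained. The base checkpoint and layer counts here are assumptions for illustration only.
```python
# Illustrative Depth Up-Scaling sketch; details such as per-layer index
# bookkeeping and the subsequent continued pre-training are omitted.
import copy
import torch
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1", torch_dtype=torch.float16
)

n_layers = base.config.num_hidden_layers  # 32 for Mistral 7B
trim = 8                                  # assumed number of layers dropped per copy

layers = base.model.layers
top = copy.deepcopy(layers[: n_layers - trim])  # layers 0..23 of the first copy
bottom = copy.deepcopy(layers[trim:])           # layers 8..31 of the second copy

# Concatenate the two trimmed copies: 24 + 24 = 48 layers in total.
base.model.layers = torch.nn.ModuleList(list(top) + list(bottom))
base.config.num_hidden_layers = len(base.model.layers)
print(base.config.num_hidden_layers)  # 48
```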
# **Instruction Fine-Tuning Strategy**
We utilize state-of-the-art instruction fine-tuning methods including supervised fine-tuning (SFT) and direct preference optimization (DPO) [1].
We used a mixture of the following datasets:
- c-s-ale/alpaca-gpt4-data (SFT)
- Open-Orca/OpenOrca (SFT)
- in-house data generated using MetaMath [2] (SFT, DPO)
- Intel/orca_dpo_pairs (DPO)
- allenai/ultrafeedback_binarized_cleaned (DPO)
To guard against data contamination, we did not use GSM8K samples when generating data, and, where applicable, we filtered out tasks on the following list (an illustrative filtering sketch follows the list).
```python
filtering_task_list = [
    'task228_arc_answer_generation_easy',
    'ai2_arc/ARC-Challenge:1.0.0',
    'ai2_arc/ARC-Easy:1.0.0',
    'task229_arc_answer_generation_hard',
    'hellaswag:1.1.0',
    'task1389_hellaswag_completion',
    'cot_gsm8k',
    'cot_gsm8k_ii',
    'drop:2.0.0',
    'winogrande:1.1.0'
]
```
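As a hedged illustration of how such task-level filtering might be applied (not the actual pipeline), the sketch below drops any sample whose task name appears in `filtering_task_list`; the `task` field name is assumed for illustration.
```python
# Hypothetical decontamination filter. Assumes each sample carries a "task"
# field naming its source task; the field name is illustrative only.
def decontaminate(samples, blocked_tasks=frozenset(filtering_task_list)):
    return [s for s in samples if s.get("task") not in blocked_tasks]

# Toy usage:
samples = [
    {"task": "cot_gsm8k", "text": "..."},            # dropped (GSM8K-related)
    {"task": "some_unblocked_task", "text": "..."},  # kept
]
print(len(decontaminate(samples)))  # 1
```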
Using the datasets mentioned above, we applied SFT and iterative DPO training, a proprietary alignment strategy, to maximize the performance of our resulting model.
[1] Rafailov, R., Sharma, A., Mitchell, E., Ermon, S., Manning, C.D. and Finn, C., 2023. Direct preference optimization: Your language model is secretly a reward model. NeurIPS.
[2] Yu, L., Jiang, W., Shi, H., Yu, J., Liu, Z., Zhang, Y., Kwok, J.T., Li, Z., Weller, A. and Liu, W., 2023. MetaMath: Bootstrap your own mathematical questions for large language models. arXiv preprint arXiv:2309.12284.
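The exact iterative DPO recipe is proprietary, but a single DPO round can be sketched with the open-source `trl` library as below. This is an assumption-laden illustration, not our training code: it uses toy preference data, an arbitrary `beta`, and the `DPOTrainer` API of trl ~0.7.x, which differs in later versions.
```python
# Minimal single-round DPO sketch with trl (illustrative only; assumes trl ~0.7.x).
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer, TrainingArguments
from trl import DPOTrainer

model_name = "upstage/SOLAR-10.7B-v1.0"  # in practice, the SFT checkpoint would be used
model = AutoModelForCausalLM.from_pretrained(model_name)
ref_model = AutoModelForCausalLM.from_pretrained(model_name)  # frozen reference policy
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Toy preference pairs in the prompt/chosen/rejected format DPO expects;
# real training would use the DPO datasets listed above.
train_dataset = Dataset.from_dict({
    "prompt": ["### User:\nWhat is 2 + 2?\n\n### Assistant:\n"],
    "chosen": ["2 + 2 equals 4."],
    "rejected": ["2 + 2 equals 5."],
})

trainer = DPOTrainer(
    model=model,
    ref_model=ref_model,
    args=TrainingArguments(
        output_dir="dpo-out",
        per_device_train_batch_size=1,
        remove_unused_columns=False,
    ),
    beta=0.1,  # illustrative DPO temperature
    train_dataset=train_dataset,
    tokenizer=tokenizer,
)
trainer.train()
```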
# **Data Contamination Test Results**
Recently, data contamination issues have been reported for some models on the Open LLM Leaderboard.
We note that we made every effort to exclude any benchmark-related datasets from training.
We also ensured the integrity of our model by conducting a data contamination test [3] that is also used by the HuggingFace team [4, 5].
Our results, in which `result < 0.1, %` is well below 0.9 for every benchmark, indicate that our model is free from contamination.
*The data contamination test results for HellaSwag and Winogrande will be added once [3] supports them.*
| Model | ARC | MMLU | TruthfulQA | GSM8K |
|------------------------------|-------|-------|-------|-------|
| **SOLAR-10.7B-Instruct-v1.0**| result < 0.1, %: 0.06 |result < 0.1, %: 0.15 | result < 0.1, %: 0.28 | result < 0.1, %: 0.70 |
[3] https://github.com/swj0419/detect-pretrain-code-contamination
[4] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/474#657f2245365456e362412a06
[5] https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard/discussions/265#657b6debf81f6b44b8966230
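The test in [3] appears to be based on membership-inference statistics over token likelihoods (in the spirit of Min-K% Prob). Purely as an illustration of that family of methods, and not the exact procedure or thresholds behind the table above, a Min-K% style score can be sketched as follows.
```python
# Illustrative Min-K% Prob statistic for contamination/membership checks.
# This is a generic sketch, not the exact test implemented in [3].
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

def min_k_prob(model, tokenizer, text, k=0.2):
    """Average log-probability of the k% least likely tokens in `text`."""
    enc = tokenizer(text, return_tensors="pt").to(model.device)
    with torch.no_grad():
        logits = model(**enc).logits
    log_probs = torch.log_softmax(logits[0, :-1], dim=-1)  # position i predicts token i+1
    targets = enc["input_ids"][0, 1:]
    token_lp = log_probs.gather(1, targets.unsqueeze(1)).squeeze(1)
    n = max(1, int(len(token_lp) * k))
    lowest = torch.topk(token_lp, n, largest=False).values
    return lowest.mean().item()  # higher values suggest the text may have been seen in training
```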
# **Evaluation Results**
Here, H6 denotes the average score over the six Open LLM Leaderboard benchmarks (ARC, HellaSwag, MMLU, TruthfulQA, Winogrande, and GSM8K).
| Model | H6 | Model Size |
|----------------------------------------|-------|------------|
| **SOLAR-10.7B-Instruct-v1.0** | **74.20** | **~ 11B** |
| mistralai/Mixtral-8x7B-Instruct-v0.1 | 72.62 | ~ 46.7B |
| 01-ai/Yi-34B-200K | 70.81 | ~ 34B |
| 01-ai/Yi-34B | 69.42 | ~ 34B |
| mistralai/Mixtral-8x7B-v0.1 | 68.42 | ~ 46.7B |
| meta-llama/Llama-2-70b-hf | 67.87 | ~ 70B |
| tiiuae/falcon-180B | 67.85 | ~ 180B |
| **SOLAR-10.7B-v1.0**                    | **66.04** | **~ 11B**  |
| Qwen/Qwen-14B                           | 65.86 | ~ 14B      |
| mistralai/Mistral-7B-Instruct-v0.2      | 65.71 | ~ 7B       |
| 01-ai/Yi-34B-Chat                       | 65.32 | ~ 34B      |
| meta-llama/Llama-2-70b-chat-hf          | 62.40 | ~ 70B      |
| mistralai/Mistral-7B-v0.1 | 60.97 | ~ 7B |
| mistralai/Mistral-7B-Instruct-v0.1 | 54.96 | ~ 7B |
# **Usage Instructions**
This model has been fine-tuned primarily for single-turn conversation, making it less suitable for multi-turn conversations such as chat.
### **Version**
Make sure you have the correct version of the transformers library installed:
```sh
pip install transformers==4.35.2
```
### **Loading the Model**
Use the following Python code to load the model:
```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# device_map="auto" places the weights on available devices and
# torch_dtype=torch.float16 halves the memory footprint versus float32.
tokenizer = AutoTokenizer.from_pretrained("Upstage/SOLAR-10.7B-Instruct-v1.0")
model = AutoModelForCausalLM.from_pretrained(
"Upstage/SOLAR-10.7B-Instruct-v1.0",
device_map="auto",
torch_dtype=torch.float16,
)
```
### **Conducting Single-Turn Conversation**
```python
# Build the prompt with the model's chat template, then generate a reply.
conversation = [{'role': 'user', 'content': 'Hello?'}]
prompt = tokenizer.apply_chat_template(conversation, tokenize=False, add_generation_prompt=True)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, use_cache=True, max_length=4096)
output_text = tokenizer.decode(outputs[0])
print(output_text)
```
Below is an example of the output.
```
<s> ### User:
Hello?
### Assistant:
Hello, how can I assist you today? Please feel free to ask any questions or request help with a specific task.</s>
```
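For interactive use, the reply can also be streamed token by token with `transformers.TextStreamer`. The snippet below reuses `model`, `tokenizer`, and `inputs` from the example above; the generation limit is illustrative.
```python
from transformers import TextStreamer

# Stream generated tokens to stdout as they are produced.
streamer = TextStreamer(tokenizer, skip_prompt=True)
_ = model.generate(
    **inputs,
    streamer=streamer,
    use_cache=True,
    max_new_tokens=512,  # illustrative cap on the reply length
)
```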
### **License**
- [upstage/SOLAR-10.7B-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-v1.0): apache-2.0
- [upstage/SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/upstage/SOLAR-10.7B-Instruct-v1.0): cc-by-nc-4.0
- Since some non-commercially licensed datasets, such as Alpaca, were used for fine-tuning, we release this model under cc-by-nc-4.0.
### **How to Cite**
Please use the following format to cite this model.
```bibtex
@misc{upstage2023solar,
title={SOLAR 10.7B: Scaling Large Language Models with Simple yet Effective Depth Up-Scaling},
author={Dahyun Kim and Chanjun Park and Sanghoon Kim and Wonsung Lee and Wonho Song and Yunsu Kim and Hyeonwoo Kim and Yungi Kim and Hyeonju Lee and Jihoo Kim and Changbae Ahn and Seonghoon Yang and Sukyung Lee and Hyunbyung Park and Gyoungjin Gim and Mikyoung Cha and Hwalsuk Lee and Sunghun Kim},
year={2023},
eprint={Coming Soon},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
### **The Upstage AI Team** ###
Upstage is creating the best LLM and DocAI. Please find more information at https://upstage.ai
### **Contact Us** ###
For any questions or suggestions, please use the discussion tab. If you want to contact us directly, drop an email to [contact@upstage.ai](mailto:contact@upstage.ai).