---
datasets:
- Open-Orca/SlimOrca
- ise-uiuc/Magicoder-OSS-Instruct-75K
- ise-uiuc/Magicoder-Evol-Instruct-110K
- meta-math/MetaMathQA
language:
- en
library_name: transformers
pipeline_tag: text-generation
arxiv: 2401.02731
---

# Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks

## News
- 1/10/2024 - Camelidae models are now available on [🤗HuggingFace](https://huggingface.co/hywu).
- 1/4/2024 - We released the paper, [Parameter-Efficient Sparsity Crafting From Dense to Mixture-of-Experts for Instruction Tuning on General Tasks](https://arxiv.org/abs/2401.02731).
- 12/22/2023 - We released the training [repo](https://github.com/wuhy68/Parameter-Efficient-MoE) that crafts dense models with the LLaMA architecture into MoE models.

## Introduction
Camelidae models are trained using Parameter-Efficient Sparsity Crafting techniques.

Parameter-Efficient Sparsity Crafting helps dense models learn knowledge from different fields (including code and math). This approach performs instruction tuning and utilizes the MoE structure in an efficient way.

Specifically, Parameter-Efficient Sparsity Crafting utilizes parameter-efficient techniques, including [QLoRA](https://arxiv.org/abs/2305.14314) and [Adapter](https://arxiv.org/abs/1902.00751), to perform efficient [Sparse Upcycling](https://arxiv.org/abs/2212.05055).

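To make the idea concrete, here is a minimal, illustrative PyTorch sketch of adapter-based sparse upcycling: the pretrained dense FFN is shared and frozen, each expert adds only a small trainable adapter, and a trainable router selects the top-k experts per token. All class names, shapes, and hyperparameters below are assumptions for illustration only; they do not reproduce the exact Camelidae/PESC implementation (see the training repo for the real code).

```python
# Illustrative sketch only: a dense FFN upcycled into an MoE layer where the
# original FFN weights are shared (and frozen), and each "expert" contributes
# a small trainable adapter. Names and shapes are assumptions, not the actual
# Camelidae/PESC implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ExpertAdapter(nn.Module):
    """Lightweight bottleneck adapter: the only per-expert trainable weights."""
    def __init__(self, hidden_size: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck)
        self.up = nn.Linear(bottleneck, hidden_size)

    def forward(self, x):
        return x + self.up(F.gelu(self.down(x)))  # residual adapter


class SparseUpcycledFFN(nn.Module):
    """MoE layer built from a pretrained dense FFN plus per-expert adapters."""
    def __init__(self, dense_ffn: nn.Module, hidden_size: int,
                 num_experts: int = 8, top_k: int = 2):
        super().__init__()
        self.shared_ffn = dense_ffn             # copied from the dense model
        for p in self.shared_ffn.parameters():  # frozen (or quantized via QLoRA)
            p.requires_grad = False
        self.experts = nn.ModuleList(
            [ExpertAdapter(hidden_size) for _ in range(num_experts)])
        self.router = nn.Linear(hidden_size, num_experts)  # trainable router
        self.top_k = top_k

    def forward(self, x):                       # x: (num_tokens, hidden_size)
        shared = self.shared_ffn(x)             # dense computation is shared
        weights, idx = self.router(x).softmax(dim=-1).topk(self.top_k, dim=-1)
        weights = weights / weights.sum(dim=-1, keepdim=True)
        out = torch.zeros_like(shared)
        for slot in range(self.top_k):          # route tokens to their top-k adapters
            for e in range(len(self.experts)):
                mask = idx[:, slot] == e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(-1) * \
                        self.experts[e](shared[mask])
        return out
```

Because the dense FFN weights are shared across experts, the newly trained parameters scale only with the adapter and router sizes, which is what makes the upcycling parameter-efficient.
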
## Model Lists
| Model | Download |
|---|---|
| Camelidae-8x7B | [🤗HuggingFace](https://huggingface.co/hywu/Camelidae-8x7B) |
| Camelidae-8x13B | [🤗HuggingFace](https://huggingface.co/hywu/Camelidae-8x13B) |
| Camelidae-8x34B | [🤗HuggingFace](https://huggingface.co/hywu/Camelidae-8x34B) |

## Performance
| Model | MMLU (5shot) | GSM8k (5shot) | MATH (4shot) | HumanEval (0shot) | MBPP (4shot) | HellaSwag (10shot) | TriviaQA (0shot) |
|----------------------:|:------------:|:-------------:|:------------:|:-----------------:|:------------:|:------------------:|:----------------:|
| GPT3.5 | 70.0% | 57.1% | **34.1%** | **48.1%** | - | 85.5% | - |
| Camelidae-8x34B | 75.6% | **78.3%** | **22.6%** | **43.9%** | **41.4%** | 85.3% | **63.4%** |
| SUSChat-34B | **76.4%** | 72.3% | 22.0% | 11.6% | 40.2% | 83.9% | 56.1% |
| Mixtral-8x7B-instruct | 68.7% | 71.7% | 22.1% | 25.6% | 40.6% | **86.5%** | 57.7% |
| LLaMA2-70B-chat | 63.8% | 59.3% | 10.4% | 32.3% | 35.6% | 84.8% | 63.0% |
| Camelidae-8x13B | 54.4% | 52.6% | 9.8% | 30.6% | 30.4% | 82.5% | 59.4% |
| LLaMA2-13B-chat | 54.6% | 37.1% | 5.2% | 18.9% | 27.2% | 81.9% | 55.0% |
| Camelidae-8x7B | 48.3% | 44.0% | 5.8% | 18.3% | 23.4% | 79.2% | 51.0% |
| LLaMA2-7B-chat | 48.3% | 26.3% | 3.9% | 12.2% | 17.6% | 78.6% | 46.4% |

We bold the highest scores separately for open-source models and for all models.

## Usage
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# tokenizer = AutoTokenizer.from_pretrained("hywu/Camelidae-8x7B", trust_remote_code=True)
# tokenizer = AutoTokenizer.from_pretrained("hywu/Camelidae-8x13B", trust_remote_code=True)
tokenizer = AutoTokenizer.from_pretrained("hywu/Camelidae-8x34B", trust_remote_code=True)

# model = AutoModelForCausalLM.from_pretrained("hywu/Camelidae-8x7B", device_map="auto", trust_remote_code=True).eval()
# model = AutoModelForCausalLM.from_pretrained("hywu/Camelidae-8x13B", device_map="auto", trust_remote_code=True).eval()
model = AutoModelForCausalLM.from_pretrained("hywu/Camelidae-8x34B", device_map="auto", trust_remote_code=True).eval()

inputs = tokenizer('### Human:\nHow are you?\n ### Assistant:\n', return_tensors='pt')
inputs = inputs.to(model.device)
pred = model.generate(**inputs)
print(tokenizer.decode(pred.cpu()[0], skip_special_tokens=True))
```

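The snippet above uses the simple `### Human:` / `### Assistant:` prompt template with default generation settings. As an optional extension, the sketch below passes explicit sampling parameters to `generate` and decodes only the newly generated tokens; the parameter values are illustrative assumptions, not settings recommended by the authors.

```python
# Illustrative only: explicit generation settings; the values below are
# assumptions, not recommendations from the Camelidae authors.
prompt = '### Human:\nWrite a Python function that reverses a string.\n ### Assistant:\n'
inputs = tokenizer(prompt, return_tensors='pt').to(model.device)
pred = model.generate(
    **inputs,
    max_new_tokens=512,   # cap the length of the reply
    do_sample=True,       # sample instead of greedy decoding
    temperature=0.7,
    top_p=0.9,
)
# Strip the prompt tokens and print only the model's reply.
print(tokenizer.decode(pred[0][inputs['input_ids'].shape[1]:], skip_special_tokens=True))
```
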
## Citation
```bibtex
@article{wu2024parameter,
  title={Parameter-Efficient Sparsity Crafting from Dense to Mixture-of-Experts for Instruction Tuning on General Tasks},
  author={Wu, Haoyuan and Zheng, Haisheng and Yu, Bei},
  journal={arXiv preprint arXiv:2401.02731},
  year={2024}
}
```

## License
The source code in this repo is licensed under the [Apache 2.0 License](https://github.com/wuhy68/Parameter-Efficient-MoE/blob/master/LICENSE). Camelidae models are developed for academic research and free commercial use; all usage must adhere to the licenses from [facebookresearch](https://github.com/facebookresearch/llama/blob/main/LICENSE) and [01-ai](https://github.com/01-ai/Yi/blob/main/MODEL_LICENSE_AGREEMENT.txt).