GeneZC committed be77a39 (1 parent: fff25f1)

Update README.md

Files changed (1): README.md +50 -3
README.md CHANGED
@@ -1,3 +1,50 @@
- ---
- license: apache-2.0
- ---
+ ---
+ language:
+ - en
+ - zh
+ license: apache-2.0
+ library_name: transformers
+ datasets:
+ - EleutherAI/pile
+ - togethercomputer/RedPajama-Data-1T
+ - p208p2002/wudao
+ widget:
+ - text: <s> 4 + 3 =
+ ---
+ ## MiniMA-1-3B
+
+ 📑 [arXiv](https://arxiv.org/abs/2311.07052) | 👻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA-3B](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat-3B](https://huggingface.co/GeneZC/MiniChat-3B) | 🤖 [ModelScope-MiniMA-3B](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat-3B](https://modelscope.cn/models/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5-3B](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤗 [HuggingFace-MiniMA-2-3B](https://huggingface.co/GeneZC/MiniMA-2-3B) | 🤗 [HuggingFace-MiniChat-2-3B](https://huggingface.co/GeneZC/MiniChat-2-3B) | 🤗 [HuggingFace-MiniMA-2-1B](https://huggingface.co/GeneZC/MiniMA-2-1B) | 🤗 [HuggingFace-MiniLoong-3B](https://huggingface.co/GeneZC/MiniLoong-3B) | 🤗 [HuggingFace-MiniMix-2/4x3B](https://huggingface.co/GeneZC/MiniMix-2_4x3B)
+
+ ❗ This model is derived from LLaMA-2, so any use must comply with the LLaMA-2 LICENSE.
+
+ <img src="./teaser_a.jpg" alt="teaser_a" width="700" />
+
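+ **Quick start**
+
+ The `widget` text in the YAML header (`<s> 4 + 3 =`) shows the expected input: plain text completed by a base LM. Below is a minimal usage sketch (not part of the original card) using the standard 🤗 Transformers causal-LM API; the dtype and device settings are assumptions.
+
+ ```python
+ # Minimal sketch (assumed API usage, not from the original card):
+ # load MiniMA-3B as a causal LM and complete the widget prompt.
+ import torch
+ from transformers import AutoModelForCausalLM, AutoTokenizer
+
+ model_id = "GeneZC/MiniMA-3B"
+ tokenizer = AutoTokenizer.from_pretrained(model_id)
+ model = AutoModelForCausalLM.from_pretrained(
+     model_id,
+     torch_dtype=torch.float16,  # assumption: fp16 on a single GPU
+     device_map="auto",          # requires `accelerate` to be installed
+ )
+
+ # The LLaMA tokenizer prepends <s> (BOS) itself, matching the widget example.
+ inputs = tokenizer("4 + 3 =", return_tensors="pt").to(model.device)
+ outputs = model.generate(**inputs, max_new_tokens=16, do_sample=False)
+ print(tokenizer.decode(outputs[0], skip_special_tokens=True))
+ ```
+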
+ **Standard Benchmarks**
+
+ |Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)|
+ |--|--|--|--|--|--|--|--|
+ |Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49|
+ |ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56|
+ |BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55|
+ |StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99|
+ |Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97|
+ |Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42|
+ |LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10|
+ ||
+ |MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11|
+ |MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72|
+ |MiniMA-2-1B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13|
+ |MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87|
+ |MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13|
+ |MiniMix-2/4x3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13|
+
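+ The shot counts in the header give the number of solved demonstrations placed before each test question. As a purely illustrative sketch (the card does not specify the evaluation harness or its prompt template), a k-shot prompt is typically assembled like this:
+
+ ```python
+ # Illustrative only: generic k-shot prompt assembly. The demonstrations
+ # and template here are hypothetical, not the card's actual harness.
+ def build_k_shot_prompt(demos, question, k):
+     shots = "\n\n".join(f"Question: {q}\nAnswer: {a}" for q, a in demos[:k])
+     return f"{shots}\n\nQuestion: {question}\nAnswer:"
+
+ demos = [("2 + 2 =", "4"), ("7 - 3 =", "4")]  # hypothetical examples
+ print(build_k_shot_prompt(demos, "4 + 3 =", k=2))
+ ```
+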
+ ## Bibtex
+
+ ```bibtex
+ @article{zhang2023law,
+     title={Towards the Law of Capacity Gap in Distilling Language Models},
+     author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
+     year={2023},
+     url={https://arxiv.org/abs/2311.07052}
+ }
+ ```