Commit a743770 by GeneZC (1 parent: 4a54cb4): Update README.md

Files changed (1): README.md (+77, -0)
---
license: apache-2.0
datasets:
- EleutherAI/pile
- togethercomputer/RedPajama-Data-1T
- p208p2002/wudao
language:
- en
- zh
library_name: transformers
widget:
- text: "<s> 4 + 3 ="
---

## MiniMA-2-3B

📑 [arXiv](https://arxiv.org/abs/2311.07052) | 👻 [GitHub](https://github.com/GeneZC/MiniMA) | 🤗 [HuggingFace-MiniMA](https://huggingface.co/GeneZC/MiniMA-3B) | 🤗 [HuggingFace-MiniChat](https://huggingface.co/GeneZC/MiniChat-3B) | 🤖 [ModelScope-MiniMA](https://modelscope.cn/models/GeneZC/MiniMA-3B) | 🤖 [ModelScope-MiniChat](https://modelscope.cn/models/GeneZC/MiniChat-3B) | 🤗 [HuggingFace-MiniChat-1.5](https://huggingface.co/GeneZC/MiniChat-1.5-3B) | 🤗 [HuggingFace-MiniMA-2](https://huggingface.co/GeneZC/MiniMA-2-3B) | 🤗 [HuggingFace-MiniChat-2](https://huggingface.co/GeneZC/MiniChat-2-3B)

❗ This model must comply with the LICENSE of LLaMA-2, since it is derived from LLaMA-2.

A language model continually trained from MiniMA-3B.

Together with MiniMA-3B and other models, it completes the compute-performance Pareto frontier.

<img src="./teaser_a.jpg" alt="teaser_a" width="700" />

**Standard Benchmarks**

|Method|TFLOPs|MMLU (5-shot)|CEval (5-shot)|DROP (3-shot)|HumanEval (0-shot)|BBH (3-shot)|GSM8K (8-shot)|
|--|--|--|--|--|--|--|--|
|Mamba-2.8B|4.6E9|25.58|24.74|15.72|7.32|29.37|3.49|
|ShearedLLaMA-2.7B|0.8E9|26.97|22.88|19.98|4.88|30.48|3.56|
|BTLM-3B|11.3E9|27.20|26.00|17.84|10.98|30.87|4.55|
|StableLM-3B|72.0E9|44.75|31.05|22.35|15.85|32.59|10.99|
|Qwen-1.8B|23.8E9|44.05|54.75|12.97|14.02|30.80|22.97|
|Phi-2-2.8B|159.9E9|56.74|34.03|30.74|46.95|44.13|55.42|
|LLaMA-2-7B|84.0E9|46.00|34.40|31.57|12.80|32.02|14.10|
||
|MiniMA-3B|4.0E9|28.51|28.23|22.50|10.98|31.61|8.11|
|MiniChat-3B|4.0E9|38.40|36.48|22.58|18.29|31.36|29.72|
|MiniMA-2-3B|13.4E9|40.14|44.65|23.10|14.63|31.43|8.87|
|MiniChat-2-3B|13.4E9|46.17|43.91|30.26|22.56|34.95|38.13|
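
Scores such as "MMLU (5-shot)" come from few-shot prompting: several solved exemplars are prepended to each test question and the model's completion is matched against the gold answer. The sketch below only illustrates that generic protocol; the arithmetic exemplars are invented for illustration, and this is not the exact harness used to produce the table.

```python
import torch

from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniMA-2-3B", use_fast=False)
model = AutoModelForCausalLM.from_pretrained(
    "GeneZC/MiniMA-2-3B", device_map="auto", torch_dtype=torch.float16
).eval()

# k solved exemplars (illustrative only) are prepended to the test question.
exemplars = [
    ("What is 12 * 3?", "36"),
    ("What is 7 + 15?", "22"),
]
test_question = "What is 9 * 9?"
prompt = "".join(f"Question: {q}\nAnswer: {a}\n\n" for q, a in exemplars)
prompt += f"Question: {test_question}\nAnswer:"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding; each benchmark then scores the completion against the gold answer.
output_ids = model.generate(**inputs, do_sample=False, max_new_tokens=16)
completion = tokenizer.decode(output_ids[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)
print(completion.strip())
```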

The following is an example code snippet to use MiniMA-2-3B:

```python
import torch

from transformers import AutoModelForCausalLM, AutoTokenizer

# MiniMA
tokenizer = AutoTokenizer.from_pretrained("GeneZC/MiniMA-2-3B", use_fast=False)
# GPU.
model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="auto", torch_dtype=torch.float16).eval()
# CPU (also drop the `.cuda()` call below).
# model = AutoModelForCausalLM.from_pretrained("GeneZC/MiniMA-2-3B", use_cache=True, device_map="cpu", torch_dtype=torch.float16).eval()

# A 3-shot truth-teller/liar prompt; the expected completion is a single word.
prompt = "Question: Sherrie tells the truth. Vernell says Sherrie tells the truth. Alexis says Vernell lies. Michaela says Alexis tells the truth. Elanor says Michaela tells the truth. Does Elanor tell the truth?\nAnswer: No\n\nQuestion: Kristian lies. Sherrie says Kristian lies. Delbert says Sherrie lies. Jerry says Delbert tells the truth. Shalonda says Jerry tells the truth. Does Shalonda tell the truth?\nAnswer: No\n\nQuestion: Vina tells the truth. Helene says Vina lies. Kandi says Helene tells the truth. Jamey says Kandi lies. Ka says Jamey lies. Does Ka tell the truth?\nAnswer: No\n\nQuestion: Christie tells the truth. Ka says Christie tells the truth. Delbert says Ka lies. Leda says Delbert tells the truth. Lorine says Leda tells the truth. Does Lorine tell the truth?\nAnswer:"
input_ids = tokenizer([prompt]).input_ids
output_ids = model.generate(
    torch.as_tensor(input_ids).cuda(),
    do_sample=True,
    temperature=0.7,
    max_new_tokens=1024,
)
# Keep only the newly generated tokens, then decode them.
output_ids = output_ids[0][len(input_ids[0]):]
output = tokenizer.decode(output_ids, skip_special_tokens=True).strip()
# output: "No"
```
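
For interactive use, the same `generate` call can stream tokens as they are produced. This is a minimal sketch using the built-in `TextStreamer` from `transformers`; it assumes `model` and `tokenizer` have been loaded as in the snippet above.

```python
from transformers import TextStreamer

# Prints decoded tokens to stdout as soon as they are generated.
streamer = TextStreamer(tokenizer, skip_prompt=True, skip_special_tokens=True)

inputs = tokenizer("Question: 4 + 3 =\nAnswer:", return_tensors="pt").to(model.device)
_ = model.generate(
    **inputs,
    do_sample=True,
    temperature=0.7,
    max_new_tokens=128,
    streamer=streamer,
)
```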
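
Since the front matter also declares a widget prompt, the model can be exercised through Hugging Face's hosted inference stack. This is a hedged sketch using `huggingface_hub.InferenceClient`; it assumes a hosted text-generation endpoint is actually available for this repository.

```python
from huggingface_hub import InferenceClient

# Send the widget prompt from the front matter to a hosted endpoint (if one exists).
client = InferenceClient(model="GeneZC/MiniMA-2-3B")
print(client.text_generation("<s> 4 + 3 =", max_new_tokens=16))
```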

## Bibtex

```bibtex
@article{zhang2023law,
    title={Towards the Law of Capacity Gap in Distilling Language Models},
    author={Zhang, Chen and Song, Dawei and Ye, Zheyu and Gao, Yan},
    year={2023},
    url={https://arxiv.org/abs/2311.07052}
}
```