---
license: llama2
---

# Xwin-Math

<p align="center">
<a href="https://github.com/Xwin-LM/Xwin-LM/Xwin-Math"><img src="https://img.shields.io/badge/GitHub-yellow.svg?style=social&logo=github"></a>
<a href="https://huggingface.co/Xwin-LM"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Models-blue"></a>
</p>

Xwin-Math is a series of powerful SFT LLMs for math problems, based on LLaMA-2.


## 🔥 News
- 🔥 [Nov, 2023] The [Xwin-Math-70B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0) model achieves **31.8 pass@1 on the MATH benchmark** and **87.0 pass@1 on the GSM8K benchmark**. This performance places it first amongst all open-source models!
- 🔥 [Nov, 2023] The [Xwin-Math-7B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0) and [Xwin-Math-13B-V1.0](https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0) models achieve **66.6 and 76.2 pass@1 on the GSM8K benchmark**, ranking as top-1 among all LLaMA-2 based 7B and 13B open-source models, respectively!


## ✨ Model Card
| Model | GSM8K | MATH | Checkpoint | License |
|:-:|:-:|:-:|:-:|:-:|
|Xwin-Math-7B-V1.0 | 66.6 | 17.4 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-7B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-13B-V1.0| 76.2 | 21.7 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-13B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|
|Xwin-Math-70B-V1.0| 87.0 | 31.8 | 🤗 <a href="https://huggingface.co/Xwin-LM/Xwin-Math-70B-V1.0" target="_blank">HF Link</a> | <a href="https://ai.meta.com/resources/models-and-libraries/llama-downloads/" target="_blank">Llama 2 License</a>|

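The checkpoints above are standard causal LMs, so they can be loaded with Hugging Face `transformers`. The snippet below is a minimal, illustrative sketch rather than an official usage script: the example question is a placeholder, and the prompt follows the template described in the Generate section further down.

```python
# Illustrative sketch only; the question is a placeholder and the prompt template
# is the one described in the "Generate" section of this README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Xwin-LM/Xwin-Math-7B-V1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"  # device_map requires `accelerate`
)

question = "Natalia sold clips to 48 of her friends in April, and then she sold half as many clips in May. How many clips did Natalia sell altogether in April and May?"
prompt = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    f"USER: {question} Give your solution in detail. In the end, write your final answer "
    "in the format of 'The answer is: <ANSWER>.'. ASSISTANT: "
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
# Greedy decoding, matching the evaluation setup described below.
outputs = model.generate(**inputs, max_new_tokens=512, do_sample=False)
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True))
```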
## 🚀 Benchmarks

### Xwin-Math performance on [MATH](https://github.com/hendrycks/math) and [GSM8K](https://github.com/openai/grade-school-math).

Xwin-Math-70B-V1.0 has achieved **31.8% on MATH** and **87.0% on GSM8K**. These scores are **5.3** and **3.1** points higher, respectively, than the previous state-of-the-art open-source MetaMath and LEMAv1 models.


| **Model** |**MATH (Our test)** | **GSM8K (Our test)** |
|:-:|:-:|:-:|
| GPT-4 (zero-shot) | 52.4 | 94.8 |
| GPT-35-Turbo (8-shot)| 37.1 | 81.0 |
| |
| WizardMath-70B | 23.9 | 81.1 |
| MAmmoTH-70B | 20.8 | 72.6 |
| MetaMath-70B | 26.5 | 82.0 |
| LEMAv1-70B | 25.9 | 83.9 |
|**Xwin-Math-70B-V1.0** |**31.8**|**87.0**|
| |
| WizardMath-13B | 15.0 | 63.7 |
| MAmmoTH-13B | 12.3 | 56.2 |
| MetaMath-13B | 22.7 | 70.9 |
| LEMAv1-13B | 13.6 | 65.0 |
|**Xwin-Math-13B-V1.0** | 21.7 | 76.2 |
| |
| WizardMath-7B | 10.9 | 55.0 |
| MAmmoTH-7B | 9.6 | 50.2 |
| MetaMath-7B | 20.1 | 66.6 |
| LEMAv1-7B | 10.0 | 54.7 |
|**Xwin-Math-7B-V1.0** | 17.4 | 66.6 |

We obtain these results using our flexible evaluation strategy. Due to differences in environment and hardware, the numbers may differ from the reported results, but we ensure that the evaluation is as accurate and fair as possible.

### Xwin-Math performance on other math benchmarks.

Our 70B model shows strong mathematical synthesis capabilities among all open-source models. Also note that our model even approaches or surpasses the performance of GPT-35-Turbo on some benchmarks.

| **Model** | SVAMP | ASDiv | NumGlue | Algebra | MAWPS | **Average** |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| GPT-35-Turbo (8-shot)| 80.6 | 84.1 | 81.8 | 90.5 | 91.7 | 85.7 |
| |
| WizardMath-70B | 80.2 | 75.8 | 71.4 | 64.0 | 74.9 | 73.3 |
| MAmmoTH-70B | 71.2 | 73.9 | 62.7 | 58.1 | 72.2 | 67.6 |
| MetaMath-70B | 85.8 | 81.1 | 77.5 | 79.7 | 81.4 | 81.1 |
| LEMAv1-70B-MATH * | 81.6 | 77.1 | 72.1 | 69.4 | 81.8 | 76.5 |
|**Xwin-Math-70B-V1.0** | 84.0 | 84.1 | 81.3 | 78.4 | 90.8 | 83.7 |

\* LEMAv1 has two models, and we report the better LEMAv1-70B-MATH model in these benchmarks.

73 |
+
## π¨ Evaluation
|
74 |
+
In order to evaluate a model's mathematical capabilities more flexibly and ensure a fair comparison of results, particularly for the MATH benchmark, we have developed a new evaluation tool. We have also assessed the pass@1 results of recent models on MATH and GSM8K benchmarks, which provides more accurate results.
|
75 |
+
|
76 |
+
We hope this toolkit can benefit open-source community by providing more accurate insights and conclusions. For a deeper understanding of our evaluation tool and methods, please visit [here](https://github.com/Xwin-LM/Xwin-LM/tree/main/Xwin-Math/eval)
|
77 |
+
|
78 |
+
* "Report" refers to the accuracy stated in the original papers.
|
79 |
+
* "Repro" indicates the results is reproduced by generating responses and evaluating them using the respective open-source models and scripts.
|
80 |
+
* "Strict" and "Flex" denote the results we achieved by employing our two strategies to extract answer and evaluate the same responses as "Repro".
|
81 |
+
|
| Model | MATH <br> (Report) | MATH <br> (Repro) | MATH <br> (Strict) | MATH <br> (Flex) | GSM8K <br> (Report) | GSM8K <br> (Repro) | GSM8K <br> (Strict) | GSM8K <br> (Flex) |
|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| GPT-35-Turbo (8-shot)| 34.1 | - | 23.8 | 37.1 | 80.8 | - | 77.9 | 81.0 |
| |
| WizardMath-70B | 22.7 | 23.0 | 23.9 | 23.9 | 81.6 | 81.4 | 81.1 | 81.1 |
| MAmmoTH-70B | 21.1 | 18.0 | 20.0 | 20.8 | 72.4 | 72.6 | 72.6 | 72.6 |
| MetaMath-70B | 26.6 | 25.9 | 26.3 | 26.5 | 82.3 | 82.3 | 82.0 | 82.0 |
|**Xwin-Math-70B-V1.0** | - | - |**31.8**|**31.8**| - | - |**87.0**|**87.0**|
| |
| WizardMath-13B | 14.0 | 14.2 | 14.9 | 15.0 | 63.9 | 63.9 | 63.7 | 63.7 |
| MAmmoTH-13B | 12.9 | 10.8 | 11.8 | 12.3 | 56.3 | 56.2 | 56.1 | 56.2 |
| MetaMath-13B | 22.4 | 22.5 | 22.6 | 22.7 | 72.3 | 71.0 | 70.9 | 70.9 |
|**Xwin-Math-13B-V1.0** | - | - | 21.6 | 21.7 | - | - | 76.2 | 76.2 |
| |
| WizardMath-7B | 10.7 | 10.3 | 10.9 | 10.9 | 54.9 | 55.2 | 55.0 | 55.0 |
| MAmmoTH-7B | 10.4 | 8.6 | 9.1 | 9.6 | 50.5 | 50.2 | 50.2 | 50.2 |
| MetaMath-7B | 19.8 | 19.6 | 19.9 | 20.1 | 66.5 | 66.6 | 66.6 | 66.6 |
|**Xwin-Math-7B-V1.0** | - | - | 17.3 | 17.4 | - | - | 66.6 | 66.6 |

### Installation

Before you start, please install the requirements.

```bash
pip install -r requirements.txt
```

We tested our results using `python 3.8` and `cuda 11.8`. We recommend using Docker.
```bash
docker run --gpus all -it --rm --ipc=host superbench/dev:cuda11.8
```

### Generate

To generate the model's responses, you can use the `generate.py` script. Please be aware that generating responses is separate from verifying their correctness; the answers are checked in a later step.

For the generation process, we use the Vicuna-v1.1 system prompt with chain-of-thought and format instructions. We also employ a greedy decoding strategy and set the maximum sequence length to 2048.
```
"A chat between a curious user and an artificial intelligence assistant. The assistant gives helpful, detailed, and polite answers to the user's questions. USER: {instruction} Give your solution in detail. In the end, write your final answer in the format of 'The answer is: <ANSWER>.'. ASSISTANT: "
```

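To make the template concrete, here is a minimal, illustrative sketch of filling in `{instruction}` and decoding greedily with vLLM directly. The model path and question are placeholders; in practice, `generate.py` (shown below) handles this for you.

```python
# Illustrative sketch only; model path and question are placeholders.
from vllm import LLM, SamplingParams

PROMPT_TEMPLATE = (
    "A chat between a curious user and an artificial intelligence assistant. "
    "The assistant gives helpful, detailed, and polite answers to the user's questions. "
    "USER: {instruction} Give your solution in detail. In the end, write your final answer "
    "in the format of 'The answer is: <ANSWER>.'. ASSISTANT: "
)

llm = LLM(model="path/to/your/model", tensor_parallel_size=1)
# Greedy decoding (temperature 0) with the 2048-token limit described above.
params = SamplingParams(temperature=0.0, max_tokens=2048)

question = "If 3x + 5 = 20, what is x?"
outputs = llm.generate([PROMPT_TEMPLATE.format(instruction=question)], params)
print(outputs[0].outputs[0].text)
```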
Here is a simple example of generating responses using [vLLM](https://docs.vllm.ai/en/latest/):
```bash
cd eval

python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4
```
By default, the results are written to `eval/response`, using the prompt in `eval/prompt/xwin_math.json`. If you wish to change the output path or use a different prompt:
```bash
python generate.py --dataset_path dataset/gsm8k.json --model_path path/to/your/model --tensor_parallel_size 4 --output_path /your/path --prompt_path /your/path
```


We provide some datasets (in `eval/dataset`):
- `gsm8k.json`: GSM8K.
- `math.json`: MATH.
- `combination.json`: A combination of many benchmarks, which can be used to evaluate the OOD capability of the model.

If you want to use your own dataset, please format it like this:

```jsonc
[
    {
        "question": "Janet\u2019s ducks lay 16 eggs per day. She eats three for breakfast every morning and bakes muffins for her friends every day with four. She sells the remainder at the farmers' market daily for $2 per fresh duck egg. How much in dollars does she make every day at the farmers' market?",
        "answer": "18",
        "type": "GSM8K",
        "subtype": "",
        "level": 0
    },
    // ... more data items
]
```

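For reference, here is a small, illustrative helper (not part of the repo) that checks a custom dataset file for the fields shown above before passing it to `generate.py`; the file name is hypothetical.

```python
# Hypothetical validation helper for the dataset format above (plain JSON, no comments).
import json

REQUIRED_FIELDS = {"question", "answer", "type", "subtype", "level"}

def validate_dataset(path: str) -> None:
    """Raise ValueError if the file does not match the expected format."""
    with open(path, encoding="utf-8") as f:
        data = json.load(f)
    if not isinstance(data, list):
        raise ValueError("Dataset must be a JSON list of items.")
    for i, item in enumerate(data):
        missing = REQUIRED_FIELDS - item.keys()
        if missing:
            raise ValueError(f"Item {i} is missing fields: {sorted(missing)}")
        if not isinstance(item["answer"], str):
            raise ValueError(f"Item {i}: 'answer' should be a string.")

validate_dataset("dataset/my_dataset.json")
```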
### Evaluate

To verify the accuracy of the answers after generation, you can use the `check.py` script.

Here is a simple example:
```bash
cd eval

python eval.py /path/to/model/response
```
The result will be saved in `eval/evaluation`.

If you do not want to save the results or want to change the save path:
```bash
python eval.py --data_path /path/to/model/response --save_path /path/to/save --save_result True
```

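The "Strict" and "Flex" strategies described in the Evaluation section above differ mainly in how the final answer is extracted from a model response. The sketch below is a simplified, hypothetical illustration of that idea; it is not the actual logic used by the evaluation scripts.

```python
# Simplified illustration of strict vs. flexible answer extraction; not the repo's implementation.
import re
from typing import Optional

def extract_answer_strict(response: str) -> Optional[str]:
    """Only accept an answer written in the instructed 'The answer is: <ANSWER>.' format."""
    match = re.search(r"The answer is: (.+?)\.?\s*$", response.strip())
    return match.group(1).strip() if match else None

def extract_answer_flex(response: str) -> Optional[str]:
    """Fall back to the last number in the response if the format is not followed."""
    strict = extract_answer_strict(response)
    if strict is not None:
        return strict
    numbers = re.findall(r"-?\d+(?:\.\d+)?", response)
    return numbers[-1] if numbers else None

print(extract_answer_flex("She sells 16 - 3 - 4 = 9 eggs for 9 * 2 = 18 dollars. The answer is: 18."))  # 18
print(extract_answer_flex("So she earns $18 at the market."))  # 18 (recovered by the flexible strategy)
```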
Once you run the script, the terminal will display the output as a table. This table will show the number of instances for each benchmark and the corresponding accuracy. Here is a hypothetical example of what the output might look like:

||Type|Subtype|Level|Correct|Incorrect|Total|Accuracy|
|---|---|---|---|---|---|---|---|
|0|MAWPS|addsub|0|359|33|392|0.915816|
|1|MAWPS|multiarith|0|586|14|600|0.976667|
|...|

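Such a summary is essentially a grouped accuracy count over per-question results. Purely as a hypothetical illustration (not the repo's implementation), a table like this could be assembled with pandas:

```python
# Hypothetical illustration; the record fields mirror the example table above.
import pandas as pd

# One record per evaluated question, e.g. produced while checking responses.
records = [
    {"type": "MAWPS", "subtype": "addsub", "level": 0, "correct": True},
    {"type": "MAWPS", "subtype": "multiarith", "level": 0, "correct": False},
    # ...
]

df = pd.DataFrame(records)
summary = (
    df.groupby(["type", "subtype", "level"])["correct"]
    .agg(Correct="sum", Total="count")
    .reset_index()
)
summary["Incorrect"] = summary["Total"] - summary["Correct"]
summary["Accuracy"] = summary["Correct"] / summary["Total"]
print(summary)
```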

## Citation
Please consider citing our work if you use the data or code in this repo.
```
@software{xwin-math,
  title = {Xwin-Math},
  author = {Xwin-Math Team},
  url = {https://github.com/Xwin-LM/Xwin-LM/Xwin-Math},
  version = {pre-release},
  year = {2023},
  month = {11},
}
```

## Acknowledgements

Thanks to [Llama 2](https://ai.meta.com/llama/), [FastChat](https://github.com/lm-sys/FastChat), and [vLLM](https://github.com/vllm-project/vllm).