---
license: cc-by-sa-4.0
---

# Compare-Answer Model

Welcome to the repository for the Compare-Answer Model, a tool for accurate and efficient comparison of mathematical answers. Given a question, a reference answer (and optionally a worked analysis), and a student's solution steps, the model judges whether the student's final answer is correct, across a wide range of mathematical problems.

## Features

- **High Accuracy**: Utilizes state-of-the-art technology to ensure high reliability in answer comparison.
- **Broad Compatibility**: Supports a variety of mathematical problem types and formats.
- **Easy Integration**: Designed for easy integration with existing systems and workflows.

## Installation

To get started with the Compare-Answer Model, clone this repository (or download the weights) and load the model with Hugging Face Transformers.

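If the weights are hosted on the Hugging Face Hub, `huggingface_hub.snapshot_download` is one way to fetch them. A minimal sketch follows; the repository id below is a placeholder, not this model's actual id:

```python
# Sketch: fetch the model weights from the Hugging Face Hub.
# "your-org/compare-answer-model" is a hypothetical repo id -- replace it
# with this model's actual repository name.
from huggingface_hub import snapshot_download

model_path = snapshot_download("your-org/compare-answer-model")
```

The resulting `model_path` is what the Quick Start below passes to `from_pretrained`.
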
## Quick Start

To use the model, build a prompt from the question, the student's predicted answer, and the reference answer, then generate the model's judgment:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

device = "cuda"  # the device to run generation on
model_path = "path/to/compare-answer-model"  # local path or Hub repo id (see Installation)

model = AutoModelForCausalLM.from_pretrained(
    model_path, torch_dtype="auto", device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_path)

# Chat template expected by the model; the task prompt is inserted at "{}".
chat_prompt = """<|im_start|>system
You are a helpful assistant.<|im_end|>
<|im_start|>human
{}<|im_end|>
<|im_start|>gpt
"""

# Task prompt (kept in Chinese, as the model expects). Roughly: "You are a math
# teacher. A student submitted their solution steps for a problem. Referring to
# the question, the analysis, and the answer, judge whether the result of the
# student's solution steps is correct. Ignore mistakes within the steps and
# look only at the final answer. The answer may appear in the analysis or in
# the answer field."
basic_prompt = """## 任务描述\n\n你是一个数学老师,学生提交了题目的解题步骤,你需要参考`题干`,`解析`和`答案`,判断`学生解题步骤`的结果是否正确。忽略`学生解题步骤`中的错误,只关注最后的答案。答案可能出现在`解析`中,也可能出现在`答案`中。\n\n## 输入内容\n\n题干:\n\n```\n{{question}}\n```\n\n解析:\n\n```\n{{analysis}}\n\n```\n\n答案:\n\n```\n{{answer}}\n```\n\n学生解题步骤:\n\n```\n{{pred_step}}\n```\n\n输出:"""
base_prompt = chat_prompt.format(basic_prompt)

def build_user_query(question, pred_answer, answer, base_prompt):
    input_text = base_prompt.replace("{{question}}", question)
    input_text = input_text.replace("{{pred_step}}", pred_answer)
    input_text = input_text.replace("{{answer}}", answer)
    # Analysis defaults to blank; substitute the analysis text here if you have one.
    input_text = input_text.replace("{{analysis}}", "")
    return input_text

prompt = build_user_query("1+1=", "3", "2", base_prompt)

model_inputs = tokenizer([prompt], return_tensors="pt").to(device)

generated_ids = model.generate(
    model_inputs.input_ids, do_sample=False, max_new_tokens=16, eos_token_id=100005
)
# Strip the prompt tokens so only the newly generated verdict remains.
generated_ids = [
    output_ids[len(input_ids):]
    for input_ids, output_ids in zip(model_inputs.input_ids, generated_ids)
]

response = tokenizer.batch_decode(generated_ids, skip_special_tokens=False)[0]
```
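
The decoded `response` holds the model's verdict for the comparison. For scoring many predictions, it is convenient to wrap the steps above in a helper. This is a minimal sketch reusing the objects defined in the Quick Start; the exact wording of the verdict is not documented here, so inspect a few real outputs before parsing them programmatically:

```python
def compare_answer(question, pred_answer, answer):
    """Return the model's raw verdict string for one comparison."""
    prompt = build_user_query(question, pred_answer, answer, base_prompt)
    inputs = tokenizer([prompt], return_tensors="pt").to(device)
    output_ids = model.generate(
        inputs.input_ids, do_sample=False, max_new_tokens=16, eos_token_id=100005
    )
    # Keep only the newly generated tokens, then decode them.
    new_tokens = output_ids[0][inputs.input_ids.shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True).strip()

print(compare_answer("1+1=", "3", "2"))
```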

## Documentation

For more detailed information about the model's API and functionality, please contact us.

## Contributing

Contributions to the Compare-Answer Model are welcome! If you have suggestions or improvements, please fork the repository and submit a pull request.

## License

This project is licensed under CC BY-SA 4.0, per the `license` field in the metadata above; see the LICENSE.md file for details.

## Acknowledgements

Thanks to all contributors who have helped develop this model. Special thanks to MathEval for providing the datasets and challenges that inspired this project.

## Contact

For any inquiries, please reach out via email at liutianqiao1@tal.com or open an issue in this repository.

Thank you for using or contributing to the Compare-Answer Model!
 