tommymarto committed
Commit: acb77f2 (1 parent: 11d48d4)
Update README.md

README.md (updated content):
---
library_name: transformers
license: mit
language:
- de
---

# MCQStudentBert Model Card

MCQStudentBertCat and MCQStudentBertSum are BERT-based models fine-tuned from MCQBert on student interactions (question and answer text pairs) to predict student answers to new questions within Intelligent Tutoring Systems (ITS). Built on [MCQBert](https://huggingface.co/epfl-ml4ed/MCQBert), MCQStudentBert understands and processes educational language in German, especially in grammar teaching, where sentences intentionally contain mistakes. The model processes the text of both the question and the answer, along with past student interactions captured as a student embedding, to predict whether the student will choose that answer in an MCQ setting.

It is trained on a single objective: given a question-answer pair and a student interaction embedding vector, predict whether the student chose that answer.

MCQStudentBertCat concatenates the student embedding with the question-answer representation just before the classifier layers, while MCQStudentBertSum sums the student embedding and the question-answer embedding at the input of the BERT model; a sketch of both strategies follows.
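To make the difference concrete, here is a minimal sketch of the two integration strategies. This is an illustration, not the repository's actual code: the class names are made up, and the 768-dimensional BERT hidden size and 4096-dimensional student embedding are assumptions matching dbmdz/bert-base-german-uncased and Mistral-7B respectively.

```python
import torch
import torch.nn as nn

class CatSketch(nn.Module):
    """'Cat' strategy: concatenate the student embedding with the
    question-answer [CLS] vector before the classifier layers."""
    def __init__(self, bert, hidden=768, student_dim=4096):
        super().__init__()
        self.bert = bert
        self.classifier = nn.Linear(hidden + student_dim, 1)

    def forward(self, input_ids, student_emb):
        cls = self.bert(input_ids=input_ids).last_hidden_state[:, 0]  # [batch, hidden]
        x = torch.cat([cls, student_emb.expand(cls.size(0), -1)], dim=-1)
        return self.classifier(x).squeeze(-1)  # one logit per question-answer pair

class SumSketch(nn.Module):
    """'Sum' strategy: project the student embedding to BERT's hidden size
    and add it to the token embeddings at the model input."""
    def __init__(self, bert, hidden=768, student_dim=4096):
        super().__init__()
        self.bert = bert
        self.proj = nn.Linear(student_dim, hidden)
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, input_ids, student_emb):
        tok = self.bert.embeddings.word_embeddings(input_ids)  # [batch, seq, hidden]
        tok = tok + self.proj(student_emb)                     # broadcast over batch and seq
        cls = self.bert(inputs_embeds=tok).last_hidden_state[:, 0]
        return self.classifier(cls).squeeze(-1)
```

In both variants, the sigmoid of the output logit is the predicted probability that the student picks the answer.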
### Model Sources

- **Repository:** [https://github.com/epfl-ml4ed/answer-forecasting](https://github.com/epfl-ml4ed/answer-forecasting)
- **Paper:** [https://arxiv.org/abs/2405.20079](https://arxiv.org/abs/2405.20079)

### Direct Use

MCQStudentBert is primarily intended to predict what a student will answer to a given question in an Intelligent Tutoring System (ITS). Given a question-answer pair and an interaction embedding vector, it performs binary classification to decide whether the student will choose that answer.
## Bias, Risks, and Limitations

While MCQStudentBert is effective, it has some limitations:

- It is primarily trained on German-language MCQs and may not generalize well to other languages or subjects without further fine-tuning.
- The model may not capture all nuances of student learning behavior, particularly in diverse educational contexts.

Privacy: no personally identifiable information was used in any training phase.

## How to Use MCQStudentBert
```python
import torch
import pandas as pd
from transformers import AutoModelForCausalLM, AutoModel, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
token = my_hf_token  # your Hugging Face access token

# load Mistral 7B Instruct to be used as the embedding model
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", token=token)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16, token=token).to(device)

# load MCQStudentBert and the German BERT tokenizer it builds on
model_bert = AutoModel.from_pretrained("epfl-ml4ed/MCQStudentBertCat", trust_remote_code=True, token=token).to(device)
tokenizer_bert = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-uncased")

with torch.no_grad():
    # past interactions: one {"question", "choice"} entry per MCQ the student answered
    interactions = pd.DataFrame([
        {"question": question_text, "choice": student_answer},
        ...
    ])
    joined_interactions = f"{tokenizer.sep_token}".join(interactions.apply(lambda x: f"Q: {x['question']}{tokenizer.sep_token}A: {x['choice']}", axis=1).values)

    # student embedding: mean of Mistral's last hidden states over the interaction history
    embeddings = model(
        **tokenizer(joined_interactions, return_tensors="pt", truncation=True, max_length=4096).to(device),
        output_hidden_states=True
    ).hidden_states[-1].squeeze(0).mean(0)

    # use MCQStudentBert for Student Answer Forecasting:
    # True if the student is predicted to choose the answer in last_question
    output = torch.nn.functional.sigmoid(
        model_bert(
            tokenizer_bert(last_question, return_tensors="pt").input_ids.to(device),
            embeddings.to(torch.float32)
        ).cpu()
    ).item() > 0.5

print(output)
```
## Training Details
The model was trained on 110k student interaction sequences for 3 epochs with a batch size of 16. The optimizer is AdamW with a learning rate of 1.75e-5, \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.999\\), and a weight decay of 0.01.
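For reference, that configuration corresponds to roughly the following training setup (a hypothetical sketch, not the repository's training script; `model_bert`, `train_loader`, and the loss choice are placeholder assumptions, while the hyperparameters are those reported above):

```python
import torch

# hyperparameters as reported above
optimizer = torch.optim.AdamW(
    model_bert.parameters(),
    lr=1.75e-5,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
criterion = torch.nn.BCEWithLogitsLoss()  # binary target: answer chosen or not

for epoch in range(3):
    for input_ids, student_emb, labels in train_loader:  # batch size 16
        optimizer.zero_grad()
        logits = model_bert(input_ids, student_emb)
        loss = criterion(logits.squeeze(-1), labels.float())
        loss.backward()
        optimizer.step()
```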
## Citation

If you find this model useful in your work, please cite our paper:

```
@misc{gado2024student,
  title={Student Answer Forecasting: Transformer-Driven Answer Choice Prediction for Language Learning},
  author={Elena Grazia Gado and Tommaso Martorella and Luca Zunino and Paola Mejia-Domenzain and Vinitra Swamy and Jibril Frej and Tanja Käser},
  year={2024},
  eprint={2405.20079},
  archivePrefix={arXiv},
}
```

```
Gado, E., Martorella, T., Zunino, L., Mejia-Domenzain, P., Swamy, V., Frej, J., & Käser, T. (2024).
Student Answer Forecasting: Transformer-Driven Answer Choice Prediction for Language Learning.
In: Proceedings of the Conference on Educational Data Mining (EDM 2024).
```