---
library_name: transformers
license: mit
language:
- de
---
# MCQStudentBert Model Card
MCQStudentBertCat and MCQStudentBertSum are BERT-based models fine-tuned from [MCQBert](https://huggingface.co/epfl-ml4ed/MCQBert) on student interactions (question-answer text pairs) to predict student answers to new questions within Intelligent Tutoring Systems (ITS). Building on MCQBert, MCQStudentBert understands and processes educational language in German, especially in grammar teaching, where sentences contain mistakes. The model processes the text of the question and the answer, along with past student interactions encoded as a student embedding, to predict whether the student will choose that answer in an MCQ setting.
It is trained on a single objective: given a question-answer pair and a student interaction embedding vector, predict whether the student chose that answer.
MCQStudentBertCat uses a concatenation strategy to integrate the student embedding before the classifier layers, while MCQStudentBertSum sums the student embedding and the question-answer embedding at the input of the BERT model.
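A minimal conceptual sketch of the two integration strategies follows; the class names, dimensions, and projection layer are illustrative assumptions, not the released architecture:
```python
import torch
import torch.nn as nn

class MCQStudentBertCatSketch(nn.Module):
    """Concatenation variant (sketch): the student embedding is concatenated
    to the [CLS] representation right before the classifier."""
    def __init__(self, bert, hidden=768, student_dim=4096):
        super().__init__()
        self.bert = bert
        self.classifier = nn.Linear(hidden + student_dim, 1)

    def forward(self, input_ids, student_emb):  # student_emb: [B, student_dim]
        cls = self.bert(input_ids).last_hidden_state[:, 0]              # [B, hidden]
        return self.classifier(torch.cat([cls, student_emb], dim=-1))  # [B, 1] logit

class MCQStudentBertSumSketch(nn.Module):
    """Summation variant (sketch): the student embedding is projected to
    BERT's hidden size and added to the token embeddings at the input."""
    def __init__(self, bert, hidden=768, student_dim=4096):
        super().__init__()
        self.bert = bert
        self.project = nn.Linear(student_dim, hidden)
        self.classifier = nn.Linear(hidden, 1)

    def forward(self, input_ids, student_emb):  # student_emb: [B, student_dim]
        tok = self.bert.embeddings.word_embeddings(input_ids)  # [B, T, hidden]
        tok = tok + self.project(student_emb).unsqueeze(1)     # broadcast over tokens
        cls = self.bert(inputs_embeds=tok).last_hidden_state[:, 0]
        return self.classifier(cls)                            # [B, 1] logit
```
In the concatenation variant the student embedding only influences the final classification, while in the summation variant the full encoder can attend to it at every layer.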
### Model Sources
- **Repository:** [https://github.com/epfl-ml4ed/answer-forecasting](https://github.com/epfl-ml4ed/answer-forecasting)
- **Paper:** [https://arxiv.org/abs/2405.20079](https://arxiv.org/abs/2405.20079)
### Direct Use
MCQStudentBert is primarily intended to predict what a student will answer to a given question in Intelligent Tutoring Systems (ITS). Given a question-answer pair and an interaction embedding vector, it performs binary classification to decide whether the student will choose that answer (see the usage example below).
## Bias, Risks, and Limitations
While MCQStudentBert is effective, it has some limitations:
- It is primarily trained on German-language MCQs and may not generalize well to other languages or subjects without further fine-tuning.
- The model may not capture all nuances of student learning behavior, particularly in diverse educational contexts.
- Privacy: no personally identifiable information was used in any training phase.
## How to Use MCQStudentBert
```python
import torch
import pandas as pd
from transformers import AutoModelForCausalLM, AutoModel, AutoTokenizer

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
token = "hf_..."  # your Hugging Face access token

# Load Mistral 7B Instruct, used as the student-embedding model
tokenizer = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", token=token)
model = AutoModelForCausalLM.from_pretrained("mistralai/Mistral-7B-Instruct-v0.1", torch_dtype=torch.float16, token=token).to(device)

# Load MCQStudentBert and its German BERT tokenizer
model_bert = AutoModel.from_pretrained("epfl-ml4ed/MCQStudentBertSum", trust_remote_code=True, token=token).to(device)
tokenizer_bert = AutoTokenizer.from_pretrained("dbmdz/bert-base-german-uncased")

with torch.no_grad():
    # Collect the student's past interactions (question_text and student_answer
    # are placeholders for your own data) and build the student embedding.
    interactions = pd.DataFrame([
        {"question": question_text, "choice": student_answer},
        # ... one row per past interaction
    ])
    joined_interactions = f"{tokenizer.sep_token}".join(
        interactions.apply(lambda x: f"Q: {x['question']}{tokenizer.sep_token}A: {x['choice']}", axis=1).values
    )
    # Mean-pool Mistral's last hidden layer over all tokens to get one embedding vector.
    embeddings = model(
        **tokenizer(joined_interactions, return_tensors="pt", truncation=True, max_length=4096).to(device),
        output_hidden_states=True
    ).hidden_states[-1].squeeze(0).mean(0)

    # Use MCQStudentBert for student answer forecasting: sigmoid(logit) > 0.5
    # means the model predicts the student will choose this answer.
    # last_question is a placeholder for the new question-answer text.
    output = torch.nn.functional.sigmoid(
        model_bert(
            tokenizer_bert(last_question, return_tensors="pt").input_ids.to(device),
            embeddings.to(torch.float32)
        ).cpu()
    ).item() > 0.5
    print(output)
```
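Building on the objects loaded above, here is a short sketch for comparing several candidate answers to one question; the `forecast_choice` helper and the `Q: ... A: ...` pairing format are illustrative assumptions, not a documented API:
```python
def forecast_choice(question, options, student_embedding):
    """Hypothetical helper: score each candidate answer and return the option
    the model predicts the student is most likely to pick."""
    scores = []
    with torch.no_grad():
        for option in options:
            # Pair the question with each option, mirroring the interaction format above.
            text = f"Q: {question}{tokenizer_bert.sep_token}A: {option}"
            logit = model_bert(
                tokenizer_bert(text, return_tensors="pt").input_ids.to(device),
                student_embedding.to(torch.float32),
            )
            scores.append(torch.sigmoid(logit).cpu().item())
    best = max(range(len(options)), key=lambda i: scores[i])
    return options[best], scores
```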
## Training Details
The model was trained on 110k student interaction sequences for 3 epochs with a batch size of 16. The optimizer is AdamW with a learning rate of \\(1.75 \times 10^{-5}\\), \\(\beta_{1} = 0.9\\), \\(\beta_{2} = 0.999\\), and a weight decay of 0.01.
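For reference, these hyperparameters map to the following PyTorch optimizer configuration (a sketch; `model_bert` refers to the model loaded in the example above):
```python
from torch.optim import AdamW

# Reported training settings from the section above.
optimizer = AdamW(
    model_bert.parameters(),
    lr=1.75e-5,
    betas=(0.9, 0.999),
    weight_decay=0.01,
)
```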
## Citation
If you find this useful in your work, please cite our paper:
```
@misc{gado2024student,
    title={Student Answer Forecasting: Transformer-Driven Answer Choice Prediction for Language Learning},
    author={Elena Grazia Gado and Tommaso Martorella and Luca Zunino and Paola Mejia-Domenzain and Vinitra Swamy and Jibril Frej and Tanja Käser},
    year={2024},
    eprint={2405.20079},
    archivePrefix={arXiv},
}
```
```
Gado, E., Martorella, T., Zunino, L., Mejia-Domenzain, P., Swamy, V., Frej, J., & Käser, T. (2024).
Student Answer Forecasting: Transformer-Driven Answer Choice Prediction for Language Learning.
In: Proceedings of the Conference on Educational Data Mining (EDM 2024).
```