|
--- |
|
license: cc-by-nc-4.0 |
|
task_categories: |
|
- text-generation |
|
language: |
|
- en |
|
tags: |
|
- math |
|
pretty_name: bridge |
|
size_categories: |
|
- n<1K |
|
--- |
|
TL;DR: This is a real-world math tutoring dataset from the NAACL 2024 paper "Bridging the Novice-Expert Gap via Models of Decision-Making: A Case Study on Remediating Math Mistakes".
|
The dataset targets scenarios in which a student makes a math mistake. Each example contains:
|
|
|
- `c_h` is the conversation history |
|
- `c_r` is the original tutor's response |
|
- `c_r_` is the experienced teacher's response |
|
|
|
Each example also includes additional metadata from our Bridge method:
|
- `e` is the student error type that the experienced teacher identified |
|
- `z_what` is the strategy that the experienced teacher wants to use in their response |
|
- `z_why` is the intention that the experienced teacher wants to achieve in their response |
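
A minimal loading sketch with the `datasets` library is shown below. The repo ID and split name are assumptions for illustration; substitute the identifier this card is hosted under.

```python
from datasets import load_dataset

# NOTE: "rose-e-wang/bridge" and the "train" split are placeholders --
# substitute the repo ID and split of this dataset card.
dataset = load_dataset("rose-e-wang/bridge", split="train")

example = dataset[0]
print(example["c_h"])     # conversation history
print(example["c_r"])     # original (novice) tutor's response
print(example["c_r_"])    # experienced teacher's response

# Bridge metadata: the expert's decisions
print(example["e"])       # student error type
print(example["z_what"])  # remediation strategy
print(example["z_why"])   # intention
```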
|
|
|
With this metadata, you can replicate our model of the teacher's internal decision-making process:
|
|
|
![Main Figure](fig1.png) |
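
As a rough sketch (not the exact prompt template from the paper), the expert decisions can be combined with the conversation history when prompting an LLM for a remediation response:

```python
def build_prompt(example: dict) -> str:
    """Illustrative only: compose an LLM prompt from a Bridge example's expert decisions."""
    # c_h may be a list of turns or a single string depending on the release;
    # adjust the formatting below as needed.
    history = example["c_h"]
    if isinstance(history, list):
        history = "\n".join(str(turn) for turn in history)
    return (
        "You are an experienced math teacher responding to a student's mistake.\n\n"
        f"Conversation history:\n{history}\n\n"
        f"(A) Student error type: {example['e']}\n"
        f"(B) Remediation strategy: {example['z_what']}\n"
        f"(C) Intention: {example['z_why']}\n\n"
        "Teacher response:"
    )

# e.g., prompt = build_prompt(dataset[0]) with the dataset loaded as shown above
```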
|
|
|
# 🌉 Bridging the Novice-Expert Gap via Models of Decision-Making
|
|
|
[Paper Link](https://arxiv.org/abs/2310.10648), [Code Link](https://github.com/rosewang2008/bridge/) |
|
|
|
**NAACL 2024** |
|
|
|
**Title:** Bridging the Novice-Expert Gap via Models of Decision-Making: A Case Study on Remediating Math Mistakes |
|
|
|
**Authors:** Rose E. Wang, Qingyang Zhang, Carly Robinson, Susanna Loeb, Dorottya Demszky |
|
|
|
**Main Idea:** We contribute Bridge 🌉, a method that uses cognitive task analysis to translate an expert's implicit thought process into an explicit decision-making model.
|
|
|
Scaling high-quality tutoring remains a major challenge in education. |
|
Due to growing demand, many platforms employ novice tutors who, unlike experienced educators, struggle to address student mistakes and thus fail to seize prime learning opportunities. |
|
Our work explores the potential of large language models (LLMs) to close the novice-expert knowledge gap in remediating math mistakes. |
|
**Bridge 🌉 leverages cognitive task analysis to model an expert's internal decision-making in remediation: Experts internally identify (A) the student's error, (B) a remediation strategy, and (C) their intention before generating a response.**
|
We construct a dataset of 700 real tutoring conversations, annotated by experts with their decisions. |
|
We evaluate state-of-the-art LLMs on our dataset and find that the expert's decision-making model is critical for LLMs to close the gap: |
|
responses from GPT4 with expert decisions (e.g., "simplify the problem") are preferred 76% more often than responses without them.
|
Additionally, context-sensitive decisions are critical to closing pedagogical gaps: |
|
random decisions reduce GPT4's response quality by 97% relative to expert decisions.
|
Our work shows the potential of embedding expert thought processes in LLM generations to enhance their capability to bridge novice-expert knowledge gaps. |
|
|
|
|
|
For more information about how the dataset was curated, please check out our codebase (https://github.com/rosewang2008/bridge/) and paper (https://arxiv.org/pdf/2310.10648).