---

# Dataset of Du et al. (2022)

## Abstract
> Understanding causality has vital importance for various Natural Language Processing (NLP) applications. Beyond the labeled instances, conceptual explanations of the causality can provide deep understanding of the causal fact to facilitate the causal reasoning process. However, such explanation information still remains absent in existing causal reasoning resources. In this paper, we fill this gap by presenting a human-annotated explainable CAusal REasoning dataset (e-CARE), which contains over 20K causal reasoning questions, together with natural language formed explanations of the causal questions. Experimental results show that generating valid explanations for causal facts still remains especially challenging for the state-of-the-art models, and the explanation information can be helpful for promoting the accuracy and stability of causal reasoning models.
## Notes
Please note that the original dataset has been modified so that the variable names match those in the COPA dataset (Roemmele et al., 2011). In addition, only the training and development sets are [publicly available](https://github.com/waste-wood/e-care).
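
Because the fields follow the COPA naming scheme, records can be consumed with the standard Hugging Face `datasets` API. The snippet below is a minimal sketch, not a confirmed detail of this card: the repo ID is a placeholder, and the split name and COPA-style fields (`premise`, `choice1`, `choice2`, `question`, `label`) are assumptions based on the note above.

```python
from datasets import load_dataset

# Placeholder repo ID -- substitute the actual path of this dataset on the Hub.
ds = load_dataset("your-namespace/e-care")

# Per the note above, only train and dev splits are public; "validation" as the
# dev split name is an assumption.
print(ds)

# Assumed COPA-style schema: label 0 selects choice1, label 1 selects choice2.
ex = ds["train"][0]
answer = ex["choice1"] if ex["label"] == 0 else ex["choice2"]
print(f"{ex['premise']} ({ex['question']}) -> {answer}")
```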
## References