### DynaSent

We re-annotate N=480 instances six times (for six demographic groups), comprising 240 instances labeled as positive and 240 labeled as negative in the DynaSent Round 2 **test** set (see [[2]](#2)). This amounts to 2,880 annotations in total.
To annotate rationales, we formulate the task as marking 'supporting evidence' for the label, following how the task is defined in prior work, and ask annotators to mark all the words in the sentence that they think show evidence for their chosen label.

#### Our annotations

| Label | Count |
| --- | --- |
| negative | 1555 |
| positive | 1435 |
| no sentiment | 470 |
| **Total** | **3460** |

### SST2

We re-annotate N=263 instances six times (for six demographic groups): all the positive and negative instances from the Zuco* dataset of Hollenstein et al. (2018). These comprise a **mixture of train, validation and test** set instances from SST-2, *which should be removed from the original SST data before training any model* (see the sketch below).

*The Zuco data contains eye-tracking data for 400 instances from SST. By annotating some of these with rationales, we add an extra layer of information for future research.
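
For example, a model trained on SST-2 and evaluated on our annotations should first have the overlapping instances filtered out. A minimal sketch, assuming pandas and placeholder file names (the `sst2_id` and `sst2_split` columns are described under Dataset Structure below):

```python
import pandas as pd

# Placeholder paths: substitute the processed SST annotation file and
# your local copy of the SST-2 training data.
annotations = pd.read_csv("sst_annotations_processed.csv")
sst2_train = pd.read_csv("sst2_train.csv")

# sst2_id is the instance's index in SST-2 (-1 = unmatched);
# sst2_split says which SST-2 set the instance came from.
leaked_ids = set(
    annotations.loc[
        (annotations["sst2_split"] == "train") & (annotations["sst2_id"] != -1),
        "sst2_id",
    ]
)

# Assumption: sst2_train is ordered so that its row index equals the SST-2 id.
sst2_train_clean = sst2_train[~sst2_train.index.isin(leaked_ids)]
```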

#### Our annotations

| Label | Count |
| --- | --- |
| positive | 1027 |
| negative | 900 |
| no sentiment | 163 |
| **Total** | **2090** |

### CoS-E

We use the simplified version of CoS-E released by [[6]](#6).

We re-annotate N=500 instances from the CoS-E **test** set six times (for six demographic groups) and ask annotators to first select the answer to the question that they find most correct and sensible, and then mark the words that justify that answer. Following [[7]](#7), we instruct annotators: '[…] if you think that removing it will decrease your confidence toward your chosen label, please mark it.'

#### Our annotations

Total: 3760 (CoS-E is a question-answering task, so there is no sentiment-label breakdown).

### Dataset Sources

<!-- Provide the basic links for the dataset. -->

In our paper, we present a collection of three existing datasets (SST2, DynaSent and CoS-E) with demographics-augmented annotations to enable profiling of models, i.e., quantifying their alignment (or agreement) with rationales provided by different socio-demographic groups. Such profiling enables us to ask whose right reasons models are being right for, and it fosters future research on performance equality/robustness.

For each dataset, we provide the data under a single **'test'** split, as its original intended use was to test the quality and alignment of post-hoc explainability methods.
If you use the data under a different split, please state this clearly to ease the reproducibility of your work.
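
A minimal loading sketch, assuming the Hugging Face `datasets` library; the repository id below is a placeholder, not the actual one:

```python
from datasets import load_dataset

# Placeholder repository id -- replace with the actual dataset id or local files.
ds = load_dataset("example-org/demographic-rationales", split="test")

# All data ships under the single 'test' split. Before analysis, drop
# annotators who failed the attention check (see the attentioncheck
# column in the Dataset Structure table below).
ds = ds.filter(lambda row: row["attentioncheck"] == "PASSED")
```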
## Dataset Structure

| Column | Description |
| --- | --- |
| QID | The ID of the question (i.e. the annotation element/sentence) in the Qualtrics survey. Every second question asked for the classification, and the question in between asked for the rationale of that classification to be marked. The two questions and answers for the same sentence are merged into one row, so the QIDs look as if every second one is skipped. |
| text_id | A numerical ID given to each unique text/sentence for easy sorting before comparing annotations across groups. |
| sentence | The text/sentence that is annotated, in its original formatting. |
| label | The (new) label given by the respective annotator/participant from Prolific. |
| label_index | The numerical format of the (new) label. |
| original_label | The label from the original dataset (CoS-E/DynaSent/SST). |
| rationale | The tokens marked as rationales by our annotators. |
| rationale_index | The indices of the tokens marked as rationales. In the processed files the index starts at 0; in the unprocessed files ("_all.csv", "_before_exclussions.csv") it starts at 1 (see the sketch below). |
| rationale_binary | A binary version of the rationales, where a token marked as part of the rationale = 1 and a token not marked = 0. |
| age | The reported age of the annotator/participant (i.e. their survey response). This may differ from the age interval the participant was recruited by (see recruitment_age). |
| recruitment_age | The age interval specified for the Prolific job used to recruit the participant. A mismatch between this and the participant's reported age in our survey may mean a number of things: Prolific's information is wrong or outdated, the participant made a mistake when answering the question, or the participant was inattentive. |
| ethnicity | The reported ethnicity of the annotator/participant. This may differ from the ethnicity the participant was recruited by (see recruitment_ethnicity). |
| recruitment_ethnicity | The ethnicity specified for the Prolific job used to recruit the participant. Sometimes there is a mismatch between the information Prolific has on participants (which we use for recruitment) and what the participants report when asked again in the survey/task. This seems especially prevalent with some ethnicities, likely because participants may in reality identify with more than one ethnic group. |
| gender | The reported gender of the annotator/participant. |
| english_proficiency | The reported English-speaking ability (a proxy for English proficiency) of the annotator/participant. Options were "Not well", "Well" or "Very well". |
| attentioncheck | All participants were given a simple attention check question at the very end of the Qualtrics survey (i.e. after annotation), which they either PASSED or FAILED. Participants who failed the check were still paid for their work, but their responses should be excluded from the analysis. |
| group_id | An ID describing the socio-demographic subgroup a participant belongs to and was recruited by (see the aggregation sketch below). |
| originaldata_id | The ID given to the text/sentence in the original dataset. In the case of SST data, this refers to IDs within the Zuco dataset, the subset of SST used in our study. |
| annotator_ID | An anonymised annotator ID to enable analyses such as annotator (dis)agreement. |
| sst2_id | The processed SST annotations contain an extra column with the index of the text in the SST-2 dataset. -1 means that we were unable to match the text to an instance in SST-2. |
| sst2_split | The processed SST annotations contain an extra column referring to the set in which the instance appears within SST-2. Some instances are part of the train set and should therefore be removed before training a model on SST-2 and testing on our annotations. |
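
Note the indexing difference for `rationale_index` across file versions. A small sketch of how `rationale_binary` relates to `rationale_index`, assuming whitespace tokenisation of `sentence` (the actual tokenisation used in the files may differ):

```python
def rationale_to_binary(sentence: str, rationale_index: list[int], one_based: bool = False) -> list[int]:
    """Return a 0/1 mark per token: 1 if the token is part of the rationale.

    Use one_based=True for the unprocessed files ("_all.csv",
    "_before_exclussions.csv"), where indices start at 1.
    """
    offset = 1 if one_based else 0
    tokens = sentence.split()  # assumption: simple whitespace tokens
    marked = {i - offset for i in rationale_index}
    return [1 if i in marked else 0 for i in range(len(tokens))]

# Example: tokens 0 and 3 marked as rationale.
assert rationale_to_binary("not a great movie", [0, 3]) == [1, 0, 0, 1]
```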
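
The `group_id` and `annotator_ID` columns support group-level analyses, e.g. aggregating each group's rationales per sentence. A sketch using a token-level majority vote (the aggregation rule is our illustration, not prescribed by the dataset; the path is a placeholder):

```python
import ast
import pandas as pd

df = pd.read_csv("annotations_processed.csv")  # placeholder path

def majority_rationale(cells: pd.Series) -> list[int]:
    # Assumption: CSV cells hold stringified lists like "[0, 1, 1, 0]".
    votes = [ast.literal_eval(c) if isinstance(c, str) else c for c in cells]
    n = len(votes)
    # A token counts as rationale if more than half of the group's annotators marked it.
    return [1 if sum(tok) > n / 2 else 0 for tok in zip(*votes)]

# One aggregated binary rationale per (sentence, socio-demographic group).
group_rationales = df.groupby(["text_id", "group_id"])["rationale_binary"].apply(majority_rationale)
```
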
## Dataset Creation