---
task_categories:
- text-classification
language:
- en
task_ids:
- sentiment-classification
- hate-speech-detection
size_categories:
- 1K<n<10K
---
(to be updated...)
## Description
Tweet Annotation Sensitivity Experiment 1
We drew a stratified sample of 20 tweets that were pre-annotated as Hate Speech / Offensive Language / Neither in a study by Davidson et al. (2017). The sample was stratified by majority-voted class and level of annotator disagreement.
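A minimal sketch of such a stratified draw in pandas is shown below. It is only an illustration: the file name and the columns `majority_label` and `disagreement` are hypothetical stand-ins for the Davidson et al. (2017) annotations, not the study's actual code.

```python
import pandas as pd

# Hypothetical input: one row per tweet from Davidson et al. (2017) with a
# majority-voted class and a disagreement level; all names are assumptions.
davidson = pd.read_csv("davidson2017_labels.csv")

# Draw roughly equal numbers of tweets per (class, disagreement) stratum
# until the target of 20 tweets is reached.
strata = davidson.groupby(["majority_label", "disagreement"], group_keys=False)
n_per_stratum = max(1, 20 // strata.ngroups)
sample = strata.apply(
    lambda g: g.sample(n=min(n_per_stratum, len(g)), random_state=42)
)
```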
We then recruited 1,000 Prolific workers to annotate each of the 20 tweets. Annotators were randomly assigned to one of six experimental conditions; in all conditions, they were asked to label each tweet as Hate Speech, Offensive Language, or Neither.
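The assignment step could look like the following sketch, assuming balanced random assignment (the card does not specify the exact randomization scheme, and the annotator ids are hypothetical):

```python
import random

# Balanced assignment of 1,000 hypothetical annotators to conditions 1..6
# via a shuffled round-robin; the actual scheme used is not documented here.
random.seed(42)
worker_ids = [f"worker_{i:04d}" for i in range(1000)]
random.shuffle(worker_ids)
condition = {w: i % 6 + 1 for i, w in enumerate(worker_ids)}
```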
In addition, we collected a variety of demographic variables (e.g., age and gender) and paradata (e.g., duration of the whole task and duration per screen).
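As a hedged example, the data could be loaded and inspected with the Hugging Face `datasets` library. The repository id and the column names below are placeholders, since this card does not document the hosting location or the schema.

```python
from collections import Counter
from datasets import load_dataset

# "USER/tweet-annotation-sensitivity-1" is a placeholder repository id,
# and the column names below are assumptions about the schema.
ds = load_dataset("USER/tweet-annotation-sensitivity-1", split="train")

print(Counter(ds["label"]))      # distribution of assigned labels
print(Counter(ds["condition"]))  # annotators per experimental condition
```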
## Citation

If you find the dataset useful, please cite:
    @InProceedings{beck2022,
      author    = "Beck, Jacob and Eckman, Stephanie and Chew, Rob and Kreuter, Frauke",
      editor    = "Chen, Jessie Y. C. and Fragomeni, Gino and Degen, Helmut and Ntoa, Stavroula",
      title     = "Improving Labeling Through Social Science Insights: Results and Research Agenda",
      booktitle = "HCI International 2022 -- Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence",
      year      = "2022",
      publisher = "Springer Nature Switzerland",
      address   = "Cham",
      pages     = "245--261",
      isbn      = "978-3-031-21707-4"
    }