---
language:
- en
- ha
- yo
- ig
- pcm
pipeline_tag: fill-mask
---
# NaijaXLM-T-base
This is an XLM-RoBERTa-base model further pretrained on 2.2 billion Nigerian tweets, described and evaluated in the [reference paper](https://aclanthology.org/2024.acl-long.488/). The model was developed by [@pvcastro](https://huggingface.co/pvcastro) and [@manueltonneau](https://huggingface.co/manueltonneau).
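As a minimal usage sketch, the model can be queried through the Hugging Face `transformers` fill-mask pipeline (matching this card's `fill-mask` pipeline tag). The repository id below is assumed from the card title and may differ from the actual Hugging Face id; the Pidgin prompt is purely illustrative.

```python
from transformers import pipeline

# NOTE: model id assumed from the card title; substitute the actual repository id.
fill_mask = pipeline("fill-mask", model="worldbank/NaijaXLM-T-base")

# XLM-RoBERTa models use "<mask>" as the mask token.
for prediction in fill_mask("Lagos na di biggest city for <mask>."):
    print(prediction["token_str"], round(prediction["score"], 3))
```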
## Model Details
### Model Description
- **Model type:** xlm-roberta
- **Language(s) (NLP):** (Nigerian) English, Nigerian Pidgin, Hausa, Yoruba, Igbo
- **Finetuned from model:** xlm-roberta-base
### Model Sources
- **Repository:** https://github.com/worldbank/NaijaHate
- **Paper:** https://aclanthology.org/2024.acl-long.488/
## Training Details
### Training Data
The model was further pretrained on 2.2 billion tweets posted between March 2007 and July 2023, drawn from the timelines of 2.8 million Twitter users with a profile location in Nigeria.
### Training Procedure
We performed adaptive fine-tuning (i.e., domain-adaptive pretraining) of XLM-R on the Nigerian Twitter dataset.
We kept the same vocabulary as XLM-R and trained the model until convergence for a total of one epoch, using 1% of the dataset as the validation set. Training was conducted in a distributed environment for approximately 10 days, on 4 nodes with 4 RTX 8000 GPUs each and a total batch size of 576.
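A condensed sketch of this continued-pretraining setup with the Hugging Face `Trainer` is shown below. Only the one-epoch schedule, the 1% validation split, and the total batch size of 576 come from this card; the corpus file, sequence length, masking ratio, and per-device batch size (36 per GPU across 16 GPUs) are assumptions filled in for illustration.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForMaskedLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

# The vocabulary is kept identical to XLM-R, so its tokenizer is reused as-is.
tokenizer = AutoTokenizer.from_pretrained("xlm-roberta-base")
model = AutoModelForMaskedLM.from_pretrained("xlm-roberta-base")

# Hypothetical corpus file: the Nigerian tweet dataset itself is not distributed.
raw = load_dataset("text", data_files={"train": "nigerian_tweets.txt"})["train"]
splits = raw.train_test_split(test_size=0.01)  # 1% held out for validation

def tokenize(batch):
    # max_length is an assumption; the card does not state the sequence length.
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = splits.map(tokenize, batched=True, remove_columns=["text"])

# Masking ratio assumed to be the library default of 15%.
collator = DataCollatorForLanguageModeling(tokenizer, mlm_probability=0.15)

args = TrainingArguments(
    output_dir="naijaxlm-t-base",
    num_train_epochs=1,              # trained until convergence for one epoch
    per_device_train_batch_size=36,  # 36 x 16 GPUs = 576 total batch size
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    data_collator=collator,
)
trainer.train()
trainer.evaluate()
```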
## Evaluation
The model was evaluated on hate speech detection on Nigerian Twitter; evaluation protocols and results are reported in the [reference paper](https://aclanthology.org/2024.acl-long.488/).
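Since the paper's downstream task is hate speech detection, a hedged fine-tuning sketch follows. The model id, CSV files, label count, and epoch count are all assumptions for illustration; neither the Hugging Face repository id nor the NaijaHate splits are specified on this card.

```python
from datasets import load_dataset
from transformers import (
    AutoModelForSequenceClassification,
    AutoTokenizer,
    Trainer,
    TrainingArguments,
)

model_id = "worldbank/NaijaXLM-T-base"  # assumed id; substitute the actual one
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=2)

# Hypothetical CSV files with "text" and "label" columns; the annotated
# NaijaHate data is described in the paper, not shipped with this card.
data = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=128)

tokenized = data.map(tokenize, batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="naijaxlm-t-hsd", num_train_epochs=3),
    train_dataset=tokenized["train"],
    eval_dataset=tokenized["test"],
    tokenizer=tokenizer,  # enables dynamic padding via DataCollatorWithPadding
)
trainer.train()
print(trainer.evaluate())
```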
## BibTeX entry and citation information
Please cite the [reference paper](https://aclanthology.org/2024.acl-long.488/) if you use this model.
```bibtex
@inproceedings{tonneau-etal-2024-naijahate,
title = "{N}aija{H}ate: Evaluating Hate Speech Detection on {N}igerian {T}witter Using Representative Data",
author = "Tonneau, Manuel and
Quinta De Castro, Pedro and
Lasri, Karim and
Farouq, Ibrahim and
Subramanian, Lakshmi and
Orozco-Olvera, Victor and
Fraiberger, Samuel",
editor = "Ku, Lun-Wei and
Martins, Andre and
Srikumar, Vivek",
booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
month = aug,
year = "2024",
address = "Bangkok, Thailand",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2024.acl-long.488",
pages = "9020--9040",
abstract = "To address the global issue of online hate, hate speech detection (HSD) systems are typically developed on datasets from the United States, thereby failing to generalize to English dialects from the Majority World. Furthermore, HSD models are often evaluated on non-representative samples, raising concerns about overestimating model performance in real-world settings. In this work, we introduce NaijaHate, the first dataset annotated for HSD which contains a representative sample of Nigerian tweets. We demonstrate that HSD evaluated on biased datasets traditionally used in the literature consistently overestimates real-world performance by at least two-fold. We then propose NaijaXLM-T, a pretrained model tailored to the Nigerian Twitter context, and establish the key role played by domain-adaptive pretraining and finetuning in maximizing HSD performance. Finally, owing to the modest performance of HSD systems in real-world conditions, we find that content moderators would need to review about ten thousand Nigerian tweets flagged as hateful daily to moderate 60{\%} of all hateful content, highlighting the challenges of moderating hate speech at scale as social media usage continues to grow globally. Taken together, these results pave the way towards robust HSD systems and a better protection of social media users from hateful content in low-resource settings.",
}
``` |