theyorubayesian committed on
Commit 1d4bcb8
1 Parent(s): 2dbf6f8

Update README.md

Files changed (1)
  1. README.md +21 -13
README.md CHANGED
@@ -473,18 +473,26 @@ Note that we must pass `verification_mode="no_checks` to prevent HF from verifying
 # Citation

 ```
-@article{OladipoBQPD2023EMNLP,
-    title = "Better Quality Pre-training Data and T5 Models for African Languages",
-    author = "Oladipo, Akintunde and
-      Adeyemi, Mofetoluwa and
-      Ahia, Orevaoghene and
-      Owodunni, Abraham and
-      Ogundepo, Odunayo and
-      Adelani, David and
-      Lin, Jimmy
-      ",
-    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
-    publisher = "Association for Computational Linguistics",
-    year = "2023",
+@inproceedings{oladipo-etal-2023-better,
+    title = "Better Quality Pre-training Data and T5 Models for {A}frican Languages",
+    author = "Oladipo, Akintunde and
+      Adeyemi, Mofetoluwa and
+      Ahia, Orevaoghene and
+      Owodunni, Abraham and
+      Ogundepo, Odunayo and
+      Adelani, David and
+      Lin, Jimmy",
+    editor = "Bouamor, Houda and
+      Pino, Juan and
+      Bali, Kalika",
+    booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
+    month = dec,
+    year = "2023",
+    address = "Singapore",
+    publisher = "Association for Computational Linguistics",
+    url = "https://aclanthology.org/2023.emnlp-main.11",
+    pages = "158--168",
+    abstract = "In this study, we highlight the importance of enhancing the quality of pretraining data in multilingual language models. Existing web crawls have demonstrated quality issues, particularly in the context of low-resource languages. Consequently, we introduce a new multilingual pretraining corpus for 16 African languages, designed by carefully auditing existing pretraining corpora to understand and rectify prevalent quality issues. To compile this dataset, we undertake a rigorous examination of current data sources for thirteen languages within one of the most extensive multilingual web crawls, mC4, and extract cleaner data through meticulous auditing and improved web crawling strategies. Subsequently, we pretrain a new T5-based model on this dataset and evaluate its performance on multiple downstream tasks. Our model demonstrates better downstream effectiveness over existing pretrained models across four NLP tasks, underscoring the critical role data quality plays in pretraining language models in low-resource scenarios. Specifically, on cross-lingual QA evaluation, our new model is more than twice as effective as multilingual T5. All code, data and models are publicly available at https://github.com/castorini/AfriTeVa-keji.",
 }
+
 ```