---
annotations_creators:
- crowdsourced
language_creators:
- crowdsourced
language:
- tg
license:
- cc-by-4.0
multilinguality:
- monolingual
---

After I realised there were problems with automatic language identification (LangID) and with the poor quality of web-crawled text corpora for my language, I curated my own dataset. Essentially, I downloaded multiple versions of the Tajik subset of the Leipzig Corpora Collection, which comprises texts from diverse sources such as news, literature, and Wikipedia. I then did rigorous preprocessing with hard-coded heuristics and regexes, applying the steps below iteratively (a minimal sketch of this kind of filtering appears after the list):

- [X] deduplicating sentences
- [X] removing curse words
- [X] removing politically biased content
- [X] removing sentences containing English characters
- [X] removing words which don't exist in Tajik
- [X] removing several hundred non-Tajik sentences
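
Below is a minimal sketch of what such a regex-and-heuristics filtering pass could look like. It only covers a few of the steps (exact-match deduplication, dropping sentences with Latin characters, and a block-list of unwanted words); the file names and the block list are hypothetical placeholders, not the actual files used for this dataset.

```python
import re
from pathlib import Path

# Hypothetical paths -- adjust to your own setup.
RAW_FILE = Path("leipzig_tajik_raw.txt")      # one sentence per line
CLEAN_FILE = Path("leipzig_tajik_clean.txt")
BLOCKLIST_FILE = Path("blocklist_tg.txt")     # curse words / unwanted terms, one per line

# Any Latin letter is a strong signal the sentence is not purely Tajik.
LATIN_RE = re.compile(r"[A-Za-z]")


def load_blocklist(path: Path) -> set:
    """Load lowercased block-listed words, one per line."""
    if not path.exists():
        return set()
    return {w.strip().lower() for w in path.read_text(encoding="utf-8").splitlines() if w.strip()}


def keep_sentence(sentence: str, blocklist: set) -> bool:
    """Heuristic filter: reject sentences with Latin characters or block-listed words."""
    if LATIN_RE.search(sentence):
        return False
    words = {w.lower() for w in re.findall(r"\w+", sentence, flags=re.UNICODE)}
    return not (words & blocklist)


def main() -> None:
    blocklist = load_blocklist(BLOCKLIST_FILE)
    seen = set()   # for exact-match deduplication
    kept = []

    for line in RAW_FILE.read_text(encoding="utf-8").splitlines():
        sentence = line.strip()
        if not sentence or sentence in seen:
            continue
        seen.add(sentence)
        if keep_sentence(sentence, blocklist):
            kept.append(sentence)

    CLEAN_FILE.write_text("\n".join(kept) + "\n", encoding="utf-8")
    print(f"Kept {len(kept)} of {len(seen)} unique sentences.")


if __name__ == "__main__":
    main()
```

Running this repeatedly after updating the block list and heuristics mirrors the iterative nature of the cleaning described above.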