---
language:
- tt
- en
- ru
license: mit
tags:
- gpt3
- transformers
- mgpt
---
# ☕ Tatar mGPT 1.3B

A language model for Tatar. As the name suggests, the model has 1.3B parameters.

Tatar belongs to the Turkic language family. It is a resonant language with approximately 5.3 million speakers. Here are some facts about it:

1. Spoken primarily by the Tatars, mainly in the Republic of Tatarstan in Russia.
2. Historically, it has used both the Arabic and Cyrillic scripts, but the Latin script is also gaining popularity.
3. The Tatars have a rich history, notably their association with the Golden Horde in the 13th-15th centuries.

## Technical details

It is one of the models derived from the base [mGPT-XL (1.3B)](https://huggingface.co/ai-forever/mGPT) model (see the list below), which was originally trained on 61 languages from 25 language families using the Wikipedia and C4 corpora.

We found additional data for 23 languages, most of which are considered low-resource, and decided to further tune the base model. **Tatar mGPT 1.3B** was trained for another 5,000 steps with batch_size=4 and a context window of **2048** tokens on a single A100.

The final validation perplexity of this model is **3.69**.
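As a sanity check on that number: perplexity is the exponential of the mean per-token cross-entropy loss, so a validation perplexity of 3.69 corresponds to a loss of roughly 1.31 nats per token. A minimal sketch of the conversion:

```python
import math

# Perplexity = exp(mean cross-entropy loss per token),
# so loss = ln(perplexity).
val_perplexity = 3.69
val_loss = math.log(val_perplexity)
print(round(val_loss, 2))  # ≈ 1.31
```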

_Chart of the training loss and perplexity:_

![](https://i.imgur.com/ME2FLcM.png)
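The tags above suggest the model exposes the standard 🤗 Transformers causal-LM interface. A minimal generation sketch, assuming the repo id `ai-forever/mGPT-1.3B-tatar` (not stated explicitly in this card, but following the naming pattern of the sibling models listed below):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed repo id, mirroring the naming of the other mGPT-1.3B models.
model_name = "ai-forever/mGPT-1.3B-tatar"

tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

prompt = "Казан"  # a short Tatar prompt; replace with your own text
inputs = tokenizer(prompt, return_tensors="pt")

# Sample a short continuation within the model's 2048-token context window.
outputs = model.generate(**inputs, max_new_tokens=32, do_sample=True, top_p=0.95)
generated = tokenizer.decode(outputs[0], skip_special_tokens=True)
print(generated)
```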

## Other mGPT-1.3B models

- [🇦🇲 mGPT-1.3B Armenian](https://huggingface.co/ai-forever/mGPT-1.3B-armenian)
- [🇦🇿 mGPT-1.3B Azerbaijan](https://huggingface.co/ai-forever/mGPT-1.3B-azerbaijan)
- [🍯 mGPT-1.3B Bashkir](https://huggingface.co/ai-forever/mGPT-1.3B-bashkir)
- [🇧🇾 mGPT-1.3B Belorussian](https://huggingface.co/ai-forever/mGPT-1.3B-belorussian)
- [🇧🇬 mGPT-1.3B Bulgarian](https://huggingface.co/ai-forever/mGPT-1.3B-bulgarian)
- [🌞 mGPT-1.3B Buryat](https://huggingface.co/ai-forever/mGPT-1.3B-buryat)
- [🌳 mGPT-1.3B Chuvash](https://huggingface.co/ai-forever/mGPT-1.3B-chuvash)
- [🇬🇪 mGPT-1.3B Georgian](https://huggingface.co/ai-forever/mGPT-1.3B-georgian)
- [🌸 mGPT-1.3B Kalmyk](https://huggingface.co/ai-forever/mGPT-1.3B-kalmyk)
- [🇰🇿 mGPT-1.3B Kazakh](https://huggingface.co/ai-forever/mGPT-1.3B-kazakh)
- [🇰🇬 mGPT-1.3B Kirgiz](https://huggingface.co/ai-forever/mGPT-1.3B-kirgiz)
- [🐻 mGPT-1.3B Mari](https://huggingface.co/ai-forever/mGPT-1.3B-mari)
- [🇲🇳 mGPT-1.3B Mongol](https://huggingface.co/ai-forever/mGPT-1.3B-mongol)
- [🏆 mGPT-1.3B Ossetian](https://huggingface.co/ai-forever/mGPT-1.3B-ossetian)
- [🇮🇷 mGPT-1.3B Persian](https://huggingface.co/ai-forever/mGPT-1.3B-persian)
- [🇷🇴 mGPT-1.3B Romanian](https://huggingface.co/ai-forever/mGPT-1.3B-romanian)
- [🇹🇯 mGPT-1.3B Tajik](https://huggingface.co/ai-forever/mGPT-1.3B-tajik)
- [🇹🇲 mGPT-1.3B Turkmen](https://huggingface.co/ai-forever/mGPT-1.3B-turkmen)
- [🐎 mGPT-1.3B Tuvan](https://huggingface.co/ai-forever/mGPT-1.3B-tuvan)
- [🇺🇦 mGPT-1.3B Ukranian](https://huggingface.co/ai-forever/mGPT-1.3B-ukranian)
- [🇺🇿 mGPT-1.3B Uzbek](https://huggingface.co/ai-forever/mGPT-1.3B-uzbek)
- [💎 mGPT-1.3B Yakut](https://huggingface.co/ai-forever/mGPT-1.3B-yakut)

## Feedback

If you find a bug or have additional data to train the model on your language, please give us feedback.

The model will be improved over time. Stay tuned!