
Generative Pretrained Transformer with 1.5B parameters for Finnish.

The TurkuNLP Finnish GPT-3 models are a family of pretrained monolingual GPT-style language models based on the BLOOM architecture. Note that these are pure language models: they have not been instruction-finetuned for dialogue or question answering.

These models are intended as foundation models that can, for example, be instruction-finetuned to serve as modern chat models.

All models are trained for 300B tokens.
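
Because this is a plain causal language model built on the BLOOM architecture, it loads with the standard Hugging Face transformers classes. A minimal generation sketch (the prompt and sampling settings below are illustrative, not recommendations):

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("TurkuNLP/gpt3-finnish-xl")
model = AutoModelForCausalLM.from_pretrained("TurkuNLP/gpt3-finnish-xl")

# "Suomi on" = "Finland is"; any Finnish prompt works.
inputs = tokenizer("Suomi on", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=True, top_p=0.95)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```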

## Parameters

| Model | Layers | Dim | Heads | Params |
|-------|--------|------|-------|--------|
| Small | 12 | 768 | 12 | 186M |
| Medium | 24 | 1024 | 16 | 437M |
| Large | 24 | 1536 | 16 | 881M |
| XL | 24 | 2064 | 24 | 1.5B |
| "3B" | 32 | 2560 | 32 | 2.8B |
| "8B" | 32 | 4096 | 32 | 7.5B |
| "13B" | 40 | 5120 | 40 | 13.3B |

## Datasets

The training data combines multiple Finnish resources, sampled according to the ratios below.

### Sampling ratios

| Dataset | Chars | Ratio | Weight | Weighted ratio |
|---------|-------|-------|--------|----------------|
| Parsebank | 35.0B | 16.9% | 1.5 | 22.7% |
| mC4-Fi | 46.3B | 22.4% | 1.0 | 20.0% |
| CC-Fi | 79.6B | 38.5% | 1.0 | 34.4% |
| Fiwiki | 0.8B | 0.4% | 3.0 | 1.0% |
| Lönnrot | 0.8B | 0.4% | 3.0 | 1.0% |
| Yle | 1.6B | 0.8% | 2.0 | 1.4% |
| STT | 2.2B | 1.1% | 2.0 | 1.9% |
| ePub | 13.5B | 6.5% | 1.0 | 5.8% |
| Lehdet | 5.8B | 2.8% | 1.0 | 2.5% |
| Suomi24 | 20.6B | 9.9% | 1.0 | 8.9% |
| Reddit-Fi | 0.7B | 0.4% | 1.0 | 0.3% |
| TOTAL | 207.0B | 100.0% | N/A | 100.0% |
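
The weighted ratio column follows directly from the raw sizes and weights: each dataset's share is chars × weight, renormalized over the total. A short sketch reproducing the table:

```python
# Reproduce the "Weighted ratio" column: share_i = chars_i * weight_i,
# renormalized so the shares sum to 100%. Values are taken from the table
# above (chars in billions, then the upsampling weight).
datasets = {
    "Parsebank": (35.0, 1.5), "mC4-Fi": (46.3, 1.0), "CC-Fi": (79.6, 1.0),
    "Fiwiki": (0.8, 3.0), "Lönnrot": (0.8, 3.0), "Yle": (1.6, 2.0),
    "STT": (2.2, 2.0), "ePub": (13.5, 1.0), "Lehdet": (5.8, 1.0),
    "Suomi24": (20.6, 1.0), "Reddit-Fi": (0.7, 1.0),
}
total = sum(chars * weight for chars, weight in datasets.values())
for name, (chars, weight) in datasets.items():
    print(f"{name}: {100 * chars * weight / total:.1f}%")
```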

More documentation and a paper coming soon.
