
Nordic ELECTRA-Small

This model was pretrained on the following corpora:

The total size of the corpus after document-level deduplication and filtering was 14.82B tokens, split equally between the four languages. The model was trained for one million steps with a batch size of 256, using a WordPiece tokenizer with a vocabulary size of 96,105; all other hyperparameters were left at their default values.
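The card does not state a pipeline type, so as a minimal sketch the checkpoint can be loaded as a plain ELECTRA encoder with the Hugging Face transformers library to extract contextual token embeddings. The example sentence and the printed checks below are illustrative assumptions, not part of the original card:

```python
# Minimal sketch: load the checkpoint as a plain encoder and extract
# contextual embeddings. The example sentence is an illustrative assumption.
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("jonfd/electra-small-nordic")
model = AutoModel.from_pretrained("jonfd/electra-small-nordic")

print(tokenizer.vocab_size)  # WordPiece vocabulary; the card states 96,105

text = "Reykjavík er höfuðborg Íslands."
inputs = tokenizer(text, return_tensors="pt")
outputs = model(**inputs)

# One hidden-state vector per WordPiece token: (batch, seq_len, hidden_size)
print(outputs.last_hidden_state.shape)
```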

Acknowledgments

This research was supported with Cloud TPUs from Google's TPU Research Cloud (TRC).

This project was funded by the Language Technology Programme for Icelandic 2019–2023. The programme, which is managed and coordinated by Almannarómur, is funded by the Icelandic Ministry of Education, Science and Culture.
