This is a pruned version of the google/mt5-large model: the input and output embeddings have been pruned to support a greatly reduced vocabulary.
The chosen vocabulary contains 30K Norwegian, English, and special tokens, roughly 12% of the original vocabulary size. This reduces the model size by roughly 37%.
The model still works reasonably well on closely related languages such as German and Danish, but languages with very different vocabularies, such as Arabic, are no longer a good fit.
This model is intended as a starting point for fine-tuning mT5 for Norwegian applications.
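As a minimal loading sketch (the repo id below is a placeholder, not necessarily this model's actual Hub id), the pruned vocabulary can be checked after loading with the standard transformers Auto classes:

```python
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

# Placeholder repo id: substitute the actual Hub id of this model.
model_id = "<this-repo-id>"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSeq2SeqLM.from_pretrained(model_id)

# The pruned vocabulary should be ~30K tokens instead of mT5's ~250K,
# and the embedding matrix should have matching dimensions.
print(len(tokenizer))
print(model.get_input_embeddings().weight.shape)
```

From here, the model can be fine-tuned like any other mT5 checkpoint, e.g. with the Trainer API or a custom training loop.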