
# mlm-spanish-roberta-base

This model follows the RoBERTa-base architecture and was trained from scratch on 3.6 GB of raw Spanish text for 10 epochs, using 4 Tesla V100 GPUs. The resulting model has roughly 126M parameters.
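
As a quick sanity check, the checkpoint can be loaded with the 🤗 `transformers` fill-mask pipeline. The snippet below is a minimal sketch: the example sentence is illustrative only, and it assumes the standard RoBERTa-style `<mask>` token.

```python
from transformers import pipeline

# Load this checkpoint as a masked-language-modeling (fill-mask) pipeline
fill_mask = pipeline("fill-mask", model="MMG/mlm-spanish-roberta-base")

# Illustrative Spanish sentence with a single <mask> token
for prediction in fill_mask("La capital de España es <mask>."):
    # Each prediction carries the filled-in token and its probability
    print(prediction["token_str"], round(prediction["score"], 4))
```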

To assess the quality of the resulting model, we evaluated it on the GLUES benchmark for Spanish NLU. The results are as follows:

| Task                    | Score (metric)            |
|-------------------------|---------------------------|
| XNLI                    | 71.99 (accuracy)          |
| Paraphrasing            | 74.85 (accuracy)          |
| NER                     | 85.34 (F1)                |
| POS                     | 97.49 (accuracy)          |
| Dependency Parsing      | 85.14 / 81.08 (UAS / LAS) |
| Document Classification | 93.00 (accuracy)          |
