Update README.md
---
widget:
- text: "MMG se dedica a la <mask> artificial."
---

# mlm-spanish-roberta-base
This model uses the RoBERTa base architecture and was trained from scratch on 3.6 GB of raw Spanish text for 10 epochs, using 4 Tesla V100 GPUs.
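The model can be queried with the `fill-mask` pipeline from `transformers`. A minimal sketch, assuming the checkpoint is published under the hub ID `MMG/mlm-spanish-roberta-base` (inferred from the repository name; adjust if it differs):

```python
from transformers import pipeline

# Hub ID assumed from the repository name; adjust if the checkpoint lives elsewhere.
fill_mask = pipeline("fill-mask", model="MMG/mlm-spanish-roberta-base")

# The same sentence used in the widget above.
for pred in fill_mask("MMG se dedica a la <mask> artificial."):
    print(f"{pred['token_str']:>15}  {pred['score']:.4f}")
```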
To assess the quality of the resulting model, we evaluated it on the [GLUES](https://github.com/dccuchile/GLUES) benchmark for Spanish NLU. The results are as follows:
| Task                    | Score (metric)        |
|:-----------------------:|:---------------------:|
| XNLI                    | 71.99 (accuracy)      |
| Paraphrasing            | 74.85 (accuracy)      |
| NER                     | 85.34 (F1)            |
| POS                     | 97.49 (accuracy)      |
| Dependency Parsing      | 85.14/81.08 (UAS/LAS) |
| Document Classification | 93.00 (accuracy)      |
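For more control over decoding, the checkpoint can also be loaded directly with `AutoTokenizer` and `AutoModelForMaskedLM`; a sketch under the same hub-ID assumption as above:

```python
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

model_name = "MMG/mlm-spanish-roberta-base"  # assumed hub ID, as above
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForMaskedLM.from_pretrained(model_name)

inputs = tokenizer("MMG se dedica a la <mask> artificial.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Locate the <mask> position and take the five most likely fills.
mask_pos = (inputs["input_ids"][0] == tokenizer.mask_token_id).nonzero(as_tuple=True)[0]
top_ids = logits[0, mask_pos].topk(5).indices[0]
print([tokenizer.decode(int(i)).strip() for i in top_ids])
```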