aristoBERTo is a pre-trained model for ancient Greek, a low resource language.

Applied to the processing of ancient Greek, aristoBERTo outperforms xlm-roberta-base and mdeberta in most downstream tasks, such as the labeling of POS, MORPH, DEP, and LEMMA.

aristoBERTo is provided by the [Diogenet project](https://diogenet.ucsd.edu) of the University of California, San Diego.
## Intended uses
This model was created for fine-tuning with spaCy, using the ancient Greek Universal Dependency datasets and a NER corpus produced by the Diogenet project.
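As a minimal sketch, a spaCy v3 training config could wire this model in as the transformer backbone. The Hub identifier `Jacobo/aristoBERTo` below is an assumption about where the weights are published; substitute the actual model name from this repository:

```ini
# Hypothetical fragment of a spaCy v3 config.cfg; the model
# name "Jacobo/aristoBERTo" is an assumed Hub identifier.
[components.transformer]
factory = "transformer"

[components.transformer.model]
@architectures = "spacy-transformers.TransformerModel.v3"
name = "Jacobo/aristoBERTo"
tokenizer_config = {"use_fast": true}
```

Downstream components such as the tagger, morphologizer, parser, and lemmatizer can then draw on this transformer via a shared listener, as in the standard spaCy accuracy pipelines.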
It achieves the following results on the evaluation set: