
Catalan BERTa (roberta-large-ca-v2) large model


Model description

The roberta-large-ca-v2 is a transformer-based masked language model for the Catalan language. It is based on the RoBERTa large model and has been trained on a medium-sized corpus collected from publicly available corpora and crawlers.

Intended uses and limitations

The roberta-large-ca-v2 model is ready to use only for masked language modeling, i.e., the Fill Mask task (try the inference API or read the next section). However, it is intended to be fine-tuned on non-generative downstream tasks such as Question Answering, Text Classification, or Named Entity Recognition.

How to use

Here is how to use this model:

```python
from transformers import AutoModelForMaskedLM, AutoTokenizer, FillMaskPipeline
from pprint import pprint

# Load the tokenizer and the model with its masked-language-modeling head
tokenizer_hf = AutoTokenizer.from_pretrained('projecte-aina/roberta-large-ca-v2')
model = AutoModelForMaskedLM.from_pretrained('projecte-aina/roberta-large-ca-v2')
model.eval()

# Fill the <mask> token with the model's top predictions
pipeline = FillMaskPipeline(model, tokenizer_hf)
text = "Em dic <mask>."
res_hf = pipeline(text)
pprint([r['token_str'] for r in res_hf])
```
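
Beyond fill-mask inference, the checkpoint can be loaded with a task-specific head for fine-tuning. The sketch below shows text classification; the dataset identifier, column names, and hyperparameters are illustrative assumptions, not the settings used in our experiments:

```python
from datasets import load_dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_id = "projecte-aina/roberta-large-ca-v2"
tokenizer = AutoTokenizer.from_pretrained(model_id)
# num_labels must match your dataset's label set
model = AutoModelForSequenceClassification.from_pretrained(model_id, num_labels=30)

# Placeholder dataset: substitute your own corpus with "text"/"label" columns
dataset = load_dataset("projecte-aina/tecla")

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=512)

dataset = dataset.map(tokenize, batched=True)

# Placeholder hyperparameters; tune for your task
args = TrainingArguments(
    output_dir="roberta-large-ca-v2-tc",
    learning_rate=2e-5,
    per_device_train_batch_size=8,
    num_train_epochs=3,
)

Trainer(
    model=model,
    args=args,
    tokenizer=tokenizer,  # enables dynamic padding of batches
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
).train()
```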

Limitations and bias

At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

Training

Training data

The training corpus consists of several corpora gathered from web crawling and public corpora.

| Corpus | Size (GB) |
|---|---|
| Catalan Crawling | 13.00 |
| Wikipedia | 1.10 |
| DOGC | 0.78 |
| Catalan Open Subtitles | 0.02 |
| Catalan OSCAR | 4.00 |
| CaWaC | 3.60 |
| Cat. General Crawling | 2.50 |
| Cat. Government Crawling | 0.24 |
| ACN | 0.42 |
| Padicat | 0.63 |
| RacoCatalà | 8.10 |
| Nació Digital | 0.42 |
| Vilaweb | 0.06 |
| Tweets | 0.02 |

Training procedure

The training corpus was tokenized using a byte-level version of Byte-Pair Encoding (BPE), as used in the original RoBERTa model, with a vocabulary size of 50,262 tokens. Pretraining consists of masked language modeling following the approach of the original RoBERTa large model, with the same hyperparameters as in the original work. Training lasted a total of 96 hours on 32 NVIDIA V100 GPUs with 16 GB of memory each.
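
As a quick sanity check of the tokenization described above, the vocabulary size can be verified directly from the published tokenizer (a minimal snippet; the example sentence is arbitrary):

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("projecte-aina/roberta-large-ca-v2")

# Byte-level BPE vocabulary; should report 50,262 entries
print(len(tokenizer))

# Inspect how an arbitrary Catalan sentence is segmented into subwords
print(tokenizer.tokenize("El català és una llengua romànica."))
```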

Evaluation

CLUB benchmark

The roberta-large-ca-v2 model has been fine-tuned on the downstream tasks of the Catalan Language Understanding Evaluation benchmark (CLUB), which was created along with the model.

It contains the following tasks and their related datasets:

  1. Named Entity Recognition (NER)

    NER (AnCora): named entities extracted from the original AnCora version, filtering out some unconventional ones (such as book titles) and transcribing them into the standard CoNLL-IOB format.

  2. Part-of-Speech Tagging (POS)

    POS (AnCora): from the Universal Dependencies treebank of the well-known AnCora corpus.

  3. Text Classification (TC)

    TeCla: consisting of 137k news pieces from the Catalan News Agency (ACN) corpus, with 30 labels.

  4. Textual Entailment (TE)

    TE-ca: consisting of 21,163 pairs of premises and hypotheses, annotated with their inference relation (entailment, contradiction, or neutral), extracted from the Catalan Textual Corpus.

  5. Semantic Textual Similarity (STS)

    STS-ca: consisting of more than 3,000 sentence pairs annotated with their semantic similarity, extracted from the Catalan Textual Corpus.

  6. Question Answering (QA):

    VilaQuAD: contains 6,282 pairs of questions and answers, sourced from 2,095 Catalan-language articles from VilaWeb newswire text.

    ViquiQuAD: consisting of more than 15,000 questions sourced from the Catalan Wikipedia, randomly chosen from a set of 596 articles originally written in Catalan.

    CatalanQA: an aggregation of the two previous datasets (VilaQuAD and ViquiQuAD), with 21,427 Q/A pairs balanced by question type and one question and one answer per context, although a given context may appear multiple times.

    XQuAD-ca: the Catalan translation of XQuAD, a multilingual collection of manual translations of 1,190 question-answer pairs from English Wikipedia used only as a test set.

Here are the train/dev/test splits of the datasets:

| Task (Dataset) | Total | Train | Dev | Test |
|---|---|---|---|---|
| NER (AnCora) | 13,581 | 10,628 | 1,427 | 1,526 |
| POS (AnCora) | 16,678 | 13,123 | 1,709 | 1,846 |
| STS (STS-ca) | 3,073 | 2,073 | 500 | 500 |
| TC (TeCla) | 137,775 | 110,203 | 13,786 | 13,786 |
| TE (TE-ca) | 21,163 | 16,930 | 2,116 | 2,117 |
| QA (VilaQuAD) | 6,282 | 3,882 | 1,200 | 1,200 |
| QA (ViquiQuAD) | 14,239 | 11,255 | 1,492 | 1,429 |
| QA (CatalanQA) | 21,427 | 17,135 | 2,157 | 2,135 |

Evaluation results

| Model | NER (F1) | POS (F1) | STS-ca (Comb) | TeCla (Acc.) | TE-ca (Acc.) | VilaQuAD (F1/EM) | ViquiQuAD (F1/EM) | CatalanQA (F1/EM) | XQuAD-ca¹ (F1/EM) |
|---|---|---|---|---|---|---|---|---|---|
| RoBERTa-large-ca-v2 | 89.82 | 99.02 | 83.41 | 75.46 | 83.61 | 89.34/75.50 | 89.20/75.77 | 90.72/79.06 | 73.79/55.34 |
| RoBERTa-base-ca-v2 | 89.29 | 98.96 | 79.07 | 74.26 | 83.14 | 87.74/72.58 | 88.72/75.91 | 89.50/76.63 | 73.64/55.42 |
| BERTa | 89.76 | 98.96 | 80.19 | 73.65 | 79.26 | 85.93/70.58 | 87.12/73.11 | 89.17/77.14 | 69.20/51.47 |
| mBERT | 86.87 | 98.83 | 74.26 | 69.90 | 74.63 | 82.78/67.33 | 86.89/73.53 | 86.90/74.19 | 68.79/50.80 |
| XLM-RoBERTa | 86.31 | 98.89 | 61.61 | 70.14 | 33.30 | 86.29/71.83 | 86.88/73.11 | 88.17/75.93 | 72.55/54.16 |

¹ Trained on CatalanQA, tested on XQuAD-ca.
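
As an illustration, a checkpoint fine-tuned on CatalanQA can be queried with the standard question-answering pipeline. The model identifier below is an assumption made for this example; check the projecte-aina organization on the Hugging Face Hub for the exact name:

```python
from transformers import pipeline

# Hypothetical identifier for a roberta-large-ca-v2 checkpoint fine-tuned on
# CatalanQA; verify the exact name on the Hub before running
qa = pipeline(
    "question-answering",
    model="projecte-aina/roberta-large-ca-v2-qa-catalanqa",
)

result = qa(
    question="On visc?",
    context="Em dic Maria i visc a Barcelona.",
)
print(result["answer"], result["score"])
```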

Additional information

Author

Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

Contact information

For further information, send an email to aina@bsc.es

Copyright

Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center

Licensing information

Apache License, Version 2.0

Funding

This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.

Citation information

If you use any of these resources (datasets or models) in your work, please cite our latest paper:

```bibtex
@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}
```

Disclaimer


The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models themselves, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including those regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
