Catalan BERTa-v2 (roberta-base-ca-v2) finetuned for TeCla-based Text Classification.

Model description

roberta-base-ca-v2-cased-tc is a Text Classification (TC) model for the Catalan language, fine-tuned from roberta-base-ca-v2, a RoBERTa base model pre-trained on a medium-sized corpus collected from publicly available corpora and crawlers (see the roberta-base-ca-v2 model card for details).

The previous version of this model, which was trained on the old TeCla dataset (v1), can still be accessed through the "v1" tag.

Intended uses and limitations

The roberta-base-ca-v2-cased-tc model can be used to classify texts. It is limited by its training dataset and may not generalize well to all use cases.

How to use

Here is how to use this model:

from transformers import pipeline
from pprint import pprint

# Load the fine-tuned model through the text-classification pipeline
nlp = pipeline("text-classification", model="projecte-aina/roberta-base-ca-v2-cased-tc")

# "Delays on four Rodalies lines due to a breakdown between Sants and Plaça de Catalunya."
example = "Retards a quatre línies de Rodalies per una avaria entre Sants i plaça de Catalunya."

tc_results = nlp(example)
pprint(tc_results)
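
If you prefer to skip the pipeline abstraction, the same prediction can be made with the generic Transformers classes. This is a minimal sketch using the standard AutoTokenizer/AutoModelForSequenceClassification API, not taken from the official scripts:

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model_id = "projecte-aina/roberta-base-ca-v2-cased-tc"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForSequenceClassification.from_pretrained(model_id)

# Tokenize the example and run a forward pass without gradients
inputs = tokenizer(example, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Map the highest-scoring class index back to its label name
pred_id = logits.argmax(dim=-1).item()
print(model.config.id2label[pred_id])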

Limitations and bias

At the time of submission, no measures have been taken to estimate the bias embedded in the model. However, we are well aware that our models may be biased since the corpora have been collected using crawling techniques on multiple web sources. We intend to conduct research in these areas in the future, and if completed, this model card will be updated.

Training

Training data

We used the TC dataset in Catalan called TeCla for training and evaluation. Although TeCla includes both a coarse-grained ('label1') and a fine-grained ('label2') categorization, only the latter, with 53 classes, was used for training.
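
Assuming TeCla is published on the Hugging Face Hub under the projecte-aina namespace (check the dataset card for the exact identifier), it can be inspected with the datasets library:

from datasets import load_dataset

# Assumed Hub identifier for TeCla; verify against the dataset card
tecla = load_dataset("projecte-aina/tecla")

# Inspect the label schema; training used only the fine-grained
# 'label2' categorization (53 classes)
print(tecla["train"].features)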

Training procedure

The model was trained with a batch size of 16 and a learning rate of 5e-5 for 5 epochs. We selected the best checkpoint according to the downstream task metric on the development set, and then evaluated it on the test set.
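
The exact fine-tuning script lives in the official GitHub repository; a TrainingArguments configuration consistent with the hyperparameters above might look like this (a sketch, not the original setup):

from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="roberta-base-ca-v2-cased-tc",
    per_device_train_batch_size=16,   # batch size of 16
    learning_rate=5e-5,               # learning rate of 5e-5
    num_train_epochs=5,               # 5 epochs
    evaluation_strategy="epoch",      # evaluate on the dev set each epoch
    save_strategy="epoch",
    load_best_model_at_end=True,      # keep the best checkpoint...
    metric_for_best_model="f1",       # ...according to the downstream metric
)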

Evaluation

Variable and metrics

This model was fine-tuned maximizing the weighted F1 score.
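
With the Trainer API, the weighted F1 objective can be expressed as a compute_metrics callback; a minimal sketch using scikit-learn:

import numpy as np
from sklearn.metrics import f1_score

def compute_metrics(eval_pred):
    # eval_pred bundles model logits and gold labels for the eval set
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)
    return {"f1": f1_score(labels, preds, average="weighted")}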

Evaluation results

We evaluated roberta-base-ca-v2-cased-tc on the TeCla test set against standard multilingual and monolingual baselines. The results for 'label1' categories were obtained by mapping each fine-grained prediction ('label2') to its corresponding coarse-grained category ('label1'); a sketch of this mapping follows the table.

Model                | TeCla - label1 (Accuracy) | TeCla - label2 (Accuracy)
roberta-base-ca-v2   | 96.31                     | 80.34
roberta-large-ca-v2  | 96.51                     | 80.68
mBERT                | 95.72                     | 78.47
XLM-RoBERTa          | 95.66                     | 78.01
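
The coarse-grained scores can be reproduced by collapsing each fine-grained prediction into its parent category before computing accuracy. The mapping itself is defined by the TeCla dataset; the dictionary below is a hypothetical fragment for illustration only:

# Hypothetical fine-to-coarse fragment; the real mapping comes from TeCla
fine_to_coarse = {
    "futbol": "esports",
    "cinema": "cultura",
}

def coarse_accuracy(fine_preds, coarse_gold):
    # Collapse fine-grained ('label2') predictions to coarse ('label1') ones
    coarse_preds = [fine_to_coarse[p] for p in fine_preds]
    correct = sum(p == g for p, g in zip(coarse_preds, coarse_gold))
    return correct / len(coarse_gold)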

For more details, check the fine-tuning and evaluation scripts in the official GitHub repository.

Additional information

Author

Text Mining Unit (TeMU) at the Barcelona Supercomputing Center (bsc-temu@bsc.es)

Contact information

For further information, send an email to aina@bsc.es

Copyright

Copyright (c) 2022 Text Mining Unit at Barcelona Supercomputing Center

Licensing information

Apache License, Version 2.0

Funding

This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.

Citation Information

If you use any of these resources (datasets or models) in your work, please cite our latest paper:

@inproceedings{armengol-estape-etal-2021-multilingual,
    title = "Are Multilingual Models the Best Choice for Moderately Under-resourced Languages? {A} Comprehensive Assessment for {C}atalan",
    author = "Armengol-Estap{\'e}, Jordi  and
      Carrino, Casimiro Pio  and
      Rodriguez-Penagos, Carlos  and
      de Gibert Bonet, Ona  and
      Armentano-Oller, Carme  and
      Gonzalez-Agirre, Aitor  and
      Melero, Maite  and
      Villegas, Marta",
    booktitle = "Findings of the Association for Computational Linguistics: ACL-IJCNLP 2021",
    month = aug,
    year = "2021",
    address = "Online",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.findings-acl.437",
    doi = "10.18653/v1/2021.findings-acl.437",
    pages = "4933--4946",
}

Disclaimer

The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have bias and/or any other undesirable distortions.

When third parties deploy or provide systems and/or services to other parties using any of these models (or using systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including those regarding the use of Artificial Intelligence.

In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
