---
license: mit
datasets:
- brwac
- carolina-c4ai/corpus-carolina
language:
- pt
---
# DeBERTinha XSmall (aka "debertinha-ptbr-xsmall")
## NOTE
We have received feedback that the model can give poor results on unbalanced datasets. A more robust training setup, e.g. scaling the loss by class weights and adding weight decay (1e-3 to 1e-5), seems to fix this.
Please refer to [this notebook](https://colab.research.google.com/drive/1mYsAk6RgzWsSGmRzcE4mV-UqM9V7_Jes?usp=sharing) to see how performance
on unbalanced datasets can be improved.
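As a minimal sketch of the idea above (the label counts here are hypothetical, and the exact recipe in the notebook may differ), class-weighted loss scaling can be done with `torch.nn.CrossEntropyLoss(weight=...)`, and weight decay is a parameter of `torch.optim.AdamW`:

```python
import torch
from torch import nn

# Hypothetical label counts for an unbalanced binary dataset
label_counts = torch.tensor([900.0, 100.0])

# Inverse-frequency class weights: rare classes contribute more to the loss
weights = label_counts.sum() / (len(label_counts) * label_counts)
loss_fn = nn.CrossEntropyLoss(weight=weights)

# Weight decay in the suggested 1e-3 to 1e-5 range goes on the optimizer, e.g.:
# optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5, weight_decay=1e-4)

# Toy logits/labels to show the weighted loss in action
logits = torch.tensor([[2.0, -1.0], [0.5, 0.3]])
labels = torch.tensor([0, 1])
loss = loss_fn(logits, labels)
```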
If you have any problems using the model, please contact us.
Thanks!
## Introduction
DeBERTinha is a pretrained DeBERTa-V3 XSmall model adapted for Brazilian Portuguese.
## Available models
| Model | Arch. | #Params |
| ---------------------------------------- | ---------- | ------- |
| `sagui-nlp/debertinha-ptbr-xsmall` | DeBERTa-V3-Xsmall | 40M |
## Usage
```python
from transformers import AutoModel, AutoModelForPreTraining, AutoTokenizer

model = AutoModelForPreTraining.from_pretrained('sagui-nlp/debertinha-ptbr-xsmall')
tokenizer = AutoTokenizer.from_pretrained('sagui-nlp/debertinha-ptbr-xsmall')
```
### For embeddings
```python
import torch

model = AutoModel.from_pretrained('sagui-nlp/debertinha-ptbr-xsmall')
input_ids = tokenizer.encode('Tinha uma pedra no meio do caminho.', return_tensors='pt')
with torch.no_grad():
    outs = model(input_ids)
    encoded = outs.last_hidden_state[0, 0]  # take the [CLS] special token representation
```
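Mean pooling over the real (non-padding) tokens is a common alternative to the [CLS] representation. A minimal sketch, using a dummy tensor in place of the model's `last_hidden_state` so it runs without loading the model (shapes are illustrative):

```python
import torch

# Dummy stand-ins for model/tokenizer outputs: batch of 1, 5 tokens, hidden size 4
last_hidden_state = torch.randn(1, 5, 4)
attention_mask = torch.tensor([[1, 1, 1, 0, 0]])  # last two positions are padding

# Zero out padding positions, then average over the remaining real tokens
mask = attention_mask.unsqueeze(-1).float()     # (1, 5, 1)
summed = (last_hidden_state * mask).sum(dim=1)  # (1, 4)
mean_pooled = summed / mask.sum(dim=1)          # (1, 4)
```

With the real model, `last_hidden_state` comes from `outs.last_hidden_state` and `attention_mask` from the tokenizer's encoding.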
## Citation
If you use our work, please cite:
```
@misc{campiotti2023debertinha,
title={DeBERTinha: A Multistep Approach to Adapt DebertaV3 XSmall for Brazilian Portuguese Natural Language Processing Task},
author={Israel Campiotti and Matheus Rodrigues and Yuri Albuquerque and Rafael Azevedo and Alyson Andrade},
year={2023},
eprint={2309.16844},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```