---
license: apache-2.0
---
## Projecte Aina’s Italian-Catalan machine translation model
## Table of Contents
- [Model Description](#model-description)
- [Intended Uses and Limitations](#intended-uses-and-limitations)
- [How to Use](#how-to-use)
- [Training](#training)
- [Training data](#training-data)
- [Training procedure](#training-procedure)
- [Data Preparation](#data-preparation)
- [Tokenization](#tokenization)
- [Hyperparameters](#hyperparameters)
- [Evaluation](#evaluation)
  - [Variables and Metrics](#variables-and-metrics)
- [Evaluation Results](#evaluation-results)
- [Additional Information](#additional-information)
- [Author](#author)
- [Contact Information](#contact-information)
- [Copyright](#copyright)
- [Licensing Information](#licensing-information)
- [Funding](#funding)
- [Disclaimer](#disclaimer)
## Model description
This model was trained from scratch using the [Fairseq toolkit](https://fairseq.readthedocs.io/en/latest/) on a combination of Catalan-Italian datasets, which after filtering and cleaning comprised 9,482,927 sentence pairs. The model was evaluated on the Flores and NTREX evaluation datasets.
## Intended uses and limitations
You can use this model for machine translation from Italian to Catalan.
## How to use
### Usage
Required libraries:
```bash
pip install ctranslate2 pyonmttok
```
Translate a sentence using Python:
```python
import ctranslate2
import pyonmttok
from huggingface_hub import snapshot_download

# Download the CTranslate2 model and the SentencePiece tokenizer model
model_dir = snapshot_download(repo_id="projecte-aina/mt-aina-it-ca", revision="main")

# Tokenize the Italian source sentence with the bundled SentencePiece model
tokenizer = pyonmttok.Tokenizer(mode="none", sp_model_path=model_dir + "/spm.model")
tokens, _ = tokenizer.tokenize("Benvenuto al progetto Aina!")

# Translate and print the detokenized best hypothesis
translator = ctranslate2.Translator(model_dir)
results = translator.translate_batch([tokens])
print(tokenizer.detokenize(results[0].hypotheses[0]))
```
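Since `translate_batch` accepts a list of token sequences, several sentences can be translated in one call. A minimal sketch reusing the objects above (the Italian inputs are just illustrative):
```python
# Translate several Italian sentences in a single call (illustrative inputs)
sentences = ["Buongiorno a tutti.", "Come va il progetto?"]
batch = [tokenizer.tokenize(s)[0] for s in sentences]
for result in translator.translate_batch(batch):
    print(tokenizer.detokenize(result.hypotheses[0]))
```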
## Training
### Training data
The model was trained on a combination of the following datasets:
| Dataset           | Sentences      | Sentences after Cleaning |
|-------------------|----------------|--------------------------|
| CCMatrix v1       | 11,444,720     | 7,757,357                |
| MultiCCAligned v1 | 1,379,251      | 1,010,921                |
| WikiMatrix        | 316,208        | 271,587                  |
| GNOME             | 8,571          | 1,198                    |
| KDE4              | 163,907       | 115,027                  |
| QED               | 64,630         | 52,616                   |
| TED2020 v1        | 50,897         | 43,280                   |
| OpenSubtitles     | 391,293        | 225,732                  |
| GlobalVoices      | 6,318          | 5,209                    |
| **Total**         | **13,825,795** | **9,482,927**            |
### Training procedure
#### Data preparation
All datasets are deduplicated and filtered to remove any sentence pairs with a cosine similarity of less than 0.75. This is done using sentence embeddings calculated with [LaBSE](https://huggingface.co/sentence-transformers/LaBSE). The filtered datasets are then concatenated to form a final corpus of 9,482,927 sentence pairs. Before training, the punctuation is normalized using a modified version of the join-single-file.py script from [SoftCatalà](https://github.com/Softcatala/nmt-models/blob/master/data-processing-tools/join-single-file.py).
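As an illustration of this filtering step, the hedged sketch below uses the `sentence-transformers` package to score sentence pairs with LaBSE embeddings; the 0.75 threshold is the one stated above, while the function name and the toy data are our own assumptions:
```python
from sentence_transformers import SentenceTransformer
import numpy as np

# LaBSE maps sentences from different languages into a shared embedding space
model = SentenceTransformer("sentence-transformers/LaBSE")

def filter_pairs(pairs, threshold=0.75):
    """Keep only (it, ca) pairs whose LaBSE cosine similarity is >= threshold."""
    it_emb = model.encode([it for it, _ in pairs], normalize_embeddings=True)
    ca_emb = model.encode([ca for _, ca in pairs], normalize_embeddings=True)
    # With normalized embeddings, cosine similarity reduces to a dot product
    sims = np.sum(it_emb * ca_emb, axis=1)
    return [pair for pair, sim in zip(pairs, sims) if sim >= threshold]

pairs = [("Buongiorno a tutti.", "Bon dia a tothom."),
         ("Questa frase non corrisponde.", "El gat dorm al sofà.")]
print(filter_pairs(pairs))
```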
#### Tokenization
All data is tokenized using [SentencePiece](https://github.com/google/sentencepiece), with a 50,000-token SentencePiece model learned from the combination of all filtered training data. This SentencePiece model is included with the release.
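For reference, a vocabulary like this could be trained as in the hedged sketch below; the file names and all training options other than the vocabulary size are illustrative assumptions, not the exact settings used:
```python
import sentencepiece as spm

# Train a joint SentencePiece model on the concatenated corpus
# (file name and model prefix are hypothetical)
spm.SentencePieceTrainer.train(
    input="train.it-ca.txt",   # one sentence per line, both languages mixed
    model_prefix="spm",        # produces spm.model and spm.vocab
    vocab_size=50000,          # vocabulary size stated in this card
)

# Load the resulting model and segment a sentence into subword tokens
sp = spm.SentencePieceProcessor(model_file="spm.model")
print(sp.encode("Benvenuto al progetto Aina!", out_type=str))
```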
#### Hyperparameters
The model is based on the Transformer-XLarge proposed by [Subramanian et al.](https://aclanthology.org/2021.wmt-1.18.pdf).
The following hyperparameters were set on the Fairseq toolkit:
| Hyperparameter | Value |
|------------------------------------|----------------------------------|
| Architecture | transformer_vaswani_wmt_en_de_big |
| Embedding size | 1024 |
| Feedforward size | 4096 |
| Number of heads | 16 |
| Encoder layers | 24 |
| Decoder layers | 6 |
| Normalize before attention | True |
| --share-decoder-input-output-embed | True |
| --share-all-embeddings | True |
| Effective batch size               | 48,000                            |
| Optimizer | adam |
| Adam betas | (0.9, 0.980) |
| Clip norm | 0.0 |
| Learning rate | 5e-4 |
| LR scheduler                       | inverse sqrt                      |
| Warmup updates | 8000 |
| Dropout | 0.1 |
| Label smoothing | 0.1 |
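The inverse sqrt scheduler with warmup, as implemented in Fairseq, increases the learning rate linearly for the warmup updates and then decays it proportionally to the inverse square root of the update number. A small sketch of that formula (assuming a warmup that starts from zero):
```python
def inverse_sqrt_lr(step, peak_lr=5e-4, warmup_updates=8000):
    """Fairseq-style inverse_sqrt schedule: linear warmup, then 1/sqrt(step) decay."""
    if step < warmup_updates:
        return peak_lr * step / warmup_updates
    return peak_lr * (warmup_updates / step) ** 0.5

# Learning rate at a few points of training (peaks at update 8000)
for step in (1000, 8000, 19000):
    print(step, inverse_sqrt_lr(step))
```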
The model was trained for a total of 19,000 updates. Weights were saved every 1,000 updates, and the reported results are the average of the last 4 checkpoints.
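Checkpoint averaging simply takes the element-wise mean of the saved weights (Fairseq ships a `scripts/average_checkpoints.py` utility for this). A minimal PyTorch sketch of the idea, with hypothetical file names:
```python
import torch

# Hypothetical paths to the last 4 saved checkpoints
paths = [f"checkpoints/checkpoint_{i}.pt" for i in (16000, 17000, 18000, 19000)]
states = [torch.load(p, map_location="cpu")["model"] for p in paths]

# Element-wise average of every parameter tensor across checkpoints
averaged = {
    name: sum(state[name].float() for state in states) / len(states)
    for name in states[0]
}
torch.save({"model": averaged}, "checkpoints/checkpoint_avg.pt")
```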
## Evaluation
### Variables and metrics
We use the BLEU score for evaluation on the [Flores-101](https://github.com/facebookresearch/flores) test sets.
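BLEU can be computed, for example, with the `sacrebleu` package; a hedged sketch with toy hypothesis and reference lists:
```python
import sacrebleu

# Hypothetical system outputs and reference translations (one string per sentence)
hypotheses = ["Benvingut al projecte Aina!"]
references = [["Benvingut al projecte Aina!"]]  # one list per reference set

bleu = sacrebleu.corpus_bleu(hypotheses, references)
print(bleu.score)
```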
### Evaluation results
Below are the evaluation results on machine translation from Italian to Catalan, compared to [Softcatalà](https://www.softcatala.org/) and [Google Translate](https://translate.google.es/?hl=es):
| Test set           | SoftCatalà | Google Translate | mt-aina-it-ca |
|--------------------|------------|------------------|---------------|
| Flores 101 dev     | 25.4       | **30.4**         | 27.4          |
| Flores 101 devtest | 26.6       | **31.2**         | 27.9          |
| Average            | 26.0       | **30.8**         | 27.7          |
## Additional information
### Author
Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center (langtech@bsc.es)
### Contact information
For further information, send an email to <aina@bsc.es>
### Copyright
Copyright Language Technologies Unit at Barcelona Supercomputing Center (2023)
### Licensing information
This work is licensed under the [Apache License, Version 2.0](https://www.apache.org/licenses/LICENSE-2.0).
### Funding
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Disclaimer
<details>
<summary>Click to expand</summary>
The models published in this repository are intended for a generalist purpose and are available to third parties. These models may have biases and/or other undesirable distortions.
When third parties deploy or provide systems and/or services to other parties using any of these models (or systems based on these models), or become users of the models, they should note that it is their responsibility to mitigate the risks arising from their use and, in any event, to comply with applicable regulations, including regulations regarding the use of Artificial Intelligence.
In no event shall the owner and creator of the models (BSC – Barcelona Supercomputing Center) be liable for any results arising from the use made by third parties of these models.
</details>