Update README.md
# Breton-French translator `m2m100_418M_br_fr`
This model is a fine-tuned version of [facebook/m2m100_418M](https://huggingface.co/facebook/m2m100_418M) on a Breton-French parallel corpus. In order to obtain the best possible results, we use all of our parallel data for training and consequently report no quantitative evaluation at this time. Empirical qualitative evidence suggests that the translations are generally adequate for short and simple examples; the behaviour of the model on long and/or complex inputs is currently unknown.

Try this model online in [Troer](https://huggingface.co/spaces/lgrobol/troer). Feedback and suggestions are welcome!
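
To try it locally with 🤗 Transformers, a minimal sketch along the following lines should work. The repository id `lgrobol/m2m100_418M_br_fr` is an assumption made for this example; substitute the actual id of this model.

```python
from transformers import M2M100ForConditionalGeneration, M2M100Tokenizer

# Hypothetical repository id, assumed for this example; adjust as needed.
model_id = "lgrobol/m2m100_418M_br_fr"

tokenizer = M2M100Tokenizer.from_pretrained(model_id)
model = M2M100ForConditionalGeneration.from_pretrained(model_id)

# M2M100 expects the source language to be set on the tokenizer and the
# target language to be forced as the first generated token.
tokenizer.src_lang = "br"
inputs = tokenizer("Demat d'an holl!", return_tensors="pt")
generated = model.generate(
    **inputs,
    forced_bos_token_id=tokenizer.get_lang_id("fr"),
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```

This mirrors the standard M2M100 translation usage documented for the base model, with Breton (`br`) as the source language and French (`fr`) as the target.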
## Model description
See the description of the [base model](https://huggingface.co/facebook/m2m100_418M).

The parallel data are obtained from the [OPUS](https://opus.nlpl.eu/) base (Tiedemann, 2012).
## Training procedure
The training hyperparameters are those suggested by Adelani et al. (2022) in their [code release](https://github.com/masakhane-io/lafand-mt), which gave their best results for machine translation of several African languages.

More specifically, we use the [example training script](https://github.com/huggingface/transformers/blob/674f750a57431222fa2832503a108df3badf1564/examples/pytorch/translation/run_translation.py) provided by 🤗 Transformers for fine-tuning mBART with the following command: