This model is licensed under the [CC BY-NC-SA 4.0 License](https://creativecommons.org/licenses/by-nc-sa/4.0/).
## Usage
To use this model for translation, prepend the target-language prefix to the source text: `>>ita<<` to translate into Italian, or `>>lld_Latn<<` to translate into Ladin (Val Badia).
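A minimal sketch of applying the prefixes, assuming a Marian-style seq2seq model served through the Hugging Face `transformers` library; the model id below is a placeholder, not this repository's actual id:

```python
def with_target_prefix(text: str, target: str) -> str:
    """Prepend the target-language token the model expects, e.g. 'ita' or 'lld_Latn'."""
    return f">>{target}<< {text}"


def translate(text: str, target: str, model_id: str = "user/model-id") -> str:
    # Hypothetical loading code (requires `pip install transformers sentencepiece`);
    # replace `model_id` with this repository's id.
    from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    inputs = tokenizer(with_target_prefix(text, target), return_tensors="pt")
    outputs = model.generate(**inputs)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Italian -> Ladin (Val Badia): the prefix selects the translation direction.
print(with_target_prefix("Buongiorno!", "lld_Latn"))
```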
## Citation

If you use this model, please cite the following paper:

```bibtex
@inproceedings{frontull-moser-2024-rule,
    title = "Rule-Based, Neural and {LLM} Back-Translation: Comparative Insights from a Variant of {L}adin",
    author = "Frontull, Samuel and
      Moser, Georg",
    editor = "Ojha, Atul Kr. and
      Liu, Chao-hong and
      Vylomova, Ekaterina and
      Pirinen, Flammie and
      Abbott, Jade and
      Washington, Jonathan and
      Oco, Nathaniel and
      Malykh, Valentin and
      Logacheva, Varvara and
      Zhao, Xiaobing",
    booktitle = "Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.loresmt-1.13",
    pages = "128--138",
    abstract = "This paper explores the impact of different back-translation approaches on machine translation for Ladin, specifically the Val Badia variant. Given the limited amount of parallel data available for this language (only 18k Ladin-Italian sentence pairs), we investigate the performance of a multilingual neural machine translation model fine-tuned for Ladin-Italian. In addition to the available authentic data, we synthesise further translations by using three different models: a fine-tuned neural model, a rule-based system developed specifically for this language pair, and a large language model. Our experiments show that all approaches achieve comparable translation quality in this low-resource scenario, yet round-trip translations highlight differences in model performance.",
}
```