---
task_categories:
- translation
language:
- it
- lld
size_categories:
- n<1K
---

# Dataset Card: Testset 2

## Overview

**Dataset Name**: Testset 2

**Source Paper**: ["Rule-Based, Neural and LLM Back-Translation: Comparative Insights from a Variant of Ladin"](https://arxiv.org/abs/2407.08819)

**Description**:
Testset 2 consists of parallel sentences in Ladin (Val Badia variant) and Italian, distributed as two aligned plain-text files. Each line in the Ladin file corresponds to the same line number in the Italian file, giving a straightforward 1-to-1 mapping between the languages.

## Dataset Structure

- **Files**:
  - `autonomia-lvb.txt`: Contains Ladin sentences, one per line.
  - `autonomia-ita.txt`: Contains the Italian translation of the corresponding Ladin sentences, one per line.

## Format

- **File Type**: Plain text
- **Encoding**: UTF-8
- **Sentence Alignment**: 1-to-1 (see the loading sketch below)
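
Because the alignment is strictly line-by-line, the test set can be loaded without any special tooling. Below is a minimal Python sketch, assuming both files have been downloaded into the working directory; the helper name `load_parallel` is illustrative and not part of the dataset.

```python
# Minimal loading sketch: pair the two aligned files line by line.
# Assumes autonomia-lvb.txt and autonomia-ita.txt are in the working directory.
from pathlib import Path


def load_parallel(lld_path: str = "autonomia-lvb.txt",
                  ita_path: str = "autonomia-ita.txt") -> list[tuple[str, str]]:
    """Read both UTF-8 files and zip them into (Ladin, Italian) sentence pairs."""
    lld_lines = Path(lld_path).read_text(encoding="utf-8").splitlines()
    ita_lines = Path(ita_path).read_text(encoding="utf-8").splitlines()
    if len(lld_lines) != len(ita_lines):
        raise ValueError("Expected the same number of lines in both files.")
    return list(zip(lld_lines, ita_lines))


if __name__ == "__main__":
    pairs = load_parallel()
    print(f"Loaded {len(pairs)} sentence pairs")
    for lld, ita in pairs[:3]:
        print(f"lld: {lld}\nita: {ita}\n")
```
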
## Citation

If you use this dataset, please cite the following paper:

```bibtex
@inproceedings{frontull-moser-2024-rule,
    title = "Rule-Based, Neural and {LLM} Back-Translation: Comparative Insights from a Variant of {L}adin",
    author = "Frontull, Samuel and
      Moser, Georg",
    editor = "Ojha, Atul Kr. and
      Liu, Chao-hong and
      Vylomova, Ekaterina and
      Pirinen, Flammie and
      Abbott, Jade and
      Washington, Jonathan and
      Oco, Nathaniel and
      Malykh, Valentin and
      Logacheva, Varvara and
      Zhao, Xiaobing",
    booktitle = "Proceedings of the Seventh Workshop on Technologies for Machine Translation of Low-Resource Languages (LoResMT 2024)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.loresmt-1.13",
    pages = "128--138",
    abstract = "This paper explores the impact of different back-translation approaches on machine translation for Ladin, specifically the Val Badia variant. Given the limited amount of parallel data available for this language (only 18k Ladin-Italian sentence pairs), we investigate the performance of a multilingual neural machine translation model fine-tuned for Ladin-Italian. In addition to the available authentic data, we synthesise further translations by using three different models: a fine-tuned neural model, a rule-based system developed specifically for this language pair, and a large language model. Our experiments show that all approaches achieve comparable translation quality in this low-resource scenario, yet round-trip translations highlight differences in model performance.",
}
```