MT Sentinel Metrics

Machine Translation (MT) metrics designed explicitly to scrutinize the accuracy, robustness, and fairness of the MT meta-evaluation process.
This repository contains the SENTINEL_REF metric model trained on Direct Assessments (DA) annotations. For details on how to use our sentinel metric models, check our GitHub repository.

After installing our repository package, you can use this model within Python as follows:
```python
from sentinel_metric import download_model, load_from_checkpoint

# Download the model checkpoint from the Hugging Face Hub and load it
model_path = download_model("sapienzanlp/sentinel-ref-da")
model = load_from_checkpoint(model_path)

# SENTINEL_REF takes only the reference translation as input
data = [
    {"ref": "There's no place like home."},
    {"ref": "Toto, I've a feeling we're not in Kansas anymore."},
]

output = model.predict(data, batch_size=8, gpus=1)
```
Output:

```python
# Segment-level scores
>>> output.scores
[0.26086926460266113, 0.17275342345237732]

# System-level score
>>> output.system_score
0.21681134402751923
```
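In this example, the system-level score matches the arithmetic mean of the segment-level scores (up to floating-point precision in the last digit), which can be checked with a couple of lines of Python:

```python
# Segment-level scores from the example output above
scores = [0.26086926460266113, 0.17275342345237732]

# The system-level score is the arithmetic mean of the segment scores
system_score = sum(scores) / len(scores)
print(system_score)  # ~0.2168, matching output.system_score above
```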
This work was published at ACL 2024 (Main Conference). If you use any part of it, please consider citing our paper as follows:
```bibtex
@inproceedings{perrella-etal-2024-guardians,
    title = "Guardians of the Machine Translation Meta-Evaluation: Sentinel Metrics Fall In!",
    author = "Perrella, Stefano and Proietti, Lorenzo and Scir{\`e}, Alessandro and Barba, Edoardo and Navigli, Roberto",
    editor = "Ku, Lun-Wei and Martins, Andre and Srikumar, Vivek",
    booktitle = "Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)",
    month = aug,
    year = "2024",
    address = "Bangkok, Thailand",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2024.acl-long.856",
    pages = "16216--16244",
}
```
This work is licensed under the Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License (CC BY-NC-SA 4.0).