- Model: OPUS-MT
- Tested on: Tatoeba
- Metrics:
  - bleu (TensorFlow),
  - sacrebleu (GitHub: mjpost),
  - google_bleu (NLTK),
  - rouge (google-research),
  - meteor (NLTK),
  - ter (University of Maryland)
- Retrieved from: Hugging Face metrics
- Script used for translation and testing: https://gitlab.com/hmtkvs/machine_translation/-/tree/production-stable
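The scores below were produced with the Hugging Face metric implementations via the script linked above. As an illustration of what the bleu metric measures, here is a minimal pure-Python sketch of corpus-level BLEU-4 (clipped n-gram precision plus brevity penalty) for the single-reference case — an approximation for reading the numbers, not the implementation actually used:

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    """Count the n-grams of order n in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def corpus_bleu(predictions, references, max_n=4):
    """Corpus-level BLEU with clipped precision and brevity penalty.

    Assumes one reference string per prediction and whitespace tokenization.
    """
    matches = [0] * max_n   # clipped n-gram matches per order
    totals = [0] * max_n    # candidate n-gram counts per order
    pred_len = ref_len = 0
    for pred, ref in zip(predictions, references):
        p, r = pred.split(), ref.split()
        pred_len += len(p)
        ref_len += len(r)
        for n in range(1, max_n + 1):
            pc, rc = ngram_counts(p, n), ngram_counts(r, n)
            matches[n - 1] += sum(min(c, rc[g]) for g, c in pc.items())
            totals[n - 1] += max(len(p) - n + 1, 0)
    if min(matches) == 0:  # any order with zero matches drives BLEU to 0
        return 0.0
    log_precision = sum(math.log(m / t) for m, t in zip(matches, totals)) / max_n
    brevity = 1.0 if pred_len > ref_len else math.exp(1 - ref_len / max(pred_len, 1))
    return brevity * math.exp(log_precision)
```

Note that bleu, sacrebleu, and google_bleu differ mainly in tokenization and smoothing, which is why the three scores for the same output diverge in the tables below.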
Results
mtdata-OPUS Tatoeba (length=14178, single reference)
bleu : 0.5228
sacrebleu : 0.5652
google_bleu : 0.5454
rouge-mid : precision=0.7792, recall=0.7899, f_measure=0.7796
meteor : 0.7557
ter : score=0.3003, num_edits=24654, ref_length=82079.0
OPUS Tatoeba (length=5000, multiple references)
bleu : 0.5165
sacrebleu : 0.7098
google_bleu : 0.5397
rouge-mid : precision=0.9965, recall=0.5021, f_measure=0.6665
meteor : 0.3344
ter : score=0.6703, num_edits=38883, ref_length=58000.0
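The TER score is the reported edit count divided by the reference length (e.g. 24654 / 82079 ≈ 0.3003 and 38883 / 58000 ≈ 0.6703 above). Proper TER also counts block shifts as single edits; the sketch below approximates it with plain word-level Levenshtein distance, so `word_edit_distance` and `ter_like` are hypothetical helpers, not the University of Maryland implementation:

```python
def word_edit_distance(hyp, ref):
    """Word-level Levenshtein distance (insertions, deletions, substitutions).

    Approximation only: real TER additionally allows block shifts at cost 1.
    """
    h, r = hyp.split(), ref.split()
    dp = list(range(len(r) + 1))  # distances against an empty hypothesis
    for i, h_tok in enumerate(h, 1):
        prev, dp[0] = dp[0], i
        for j, r_tok in enumerate(r, 1):
            cur = min(dp[j] + 1,                # delete h_tok
                      dp[j - 1] + 1,            # insert r_tok
                      prev + (h_tok != r_tok))  # substitute (free on match)
            prev, dp[j] = dp[j], cur
    return dp[len(r)]

def ter_like(hypotheses, references):
    """score = total edits / total reference length, matching the fields above."""
    edits = sum(word_edit_distance(h, r) for h, r in zip(hypotheses, references))
    ref_len = sum(len(r.split()) for r in references)
    return edits / ref_len, edits, ref_len
```

Since TER counts errors, lower is better — the opposite reading from the BLEU/ROUGE/METEOR rows.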