TITLE = '<h1 align="center" id="space-title">Open Dutch LLM Evaluation Leaderboard</h1>'
INTRO_TEXT = f"""## About
This is a leaderboard for Dutch benchmarks for large language models.
This is a fork of the [Open Multilingual LLM Evaluation Leaderboard](https://huggingface.co/spaces/uonlp/open_multilingual_llm_leaderboard), but restricted to only Dutch models and augmented with additional model results.
We test the models on the following benchmarks (**Dutch versions only!**), which were translated into Dutch automatically by the original authors of the Open Multilingual LLM Evaluation Leaderboard with `gpt-35-turbo`.
I did not verify their translations and I do not maintain the datasets; I only run the benchmarks and add the results to this space. For questions regarding the test sets, or to run them yourself, see [the original GitHub repository](https://github.com/laiviet/lm-evaluation-harness).
<p align="center">
<a href="https://arxiv.org/abs/1803.05457" target="_blank">AI2 Reasoning Challenge </a> (25-shot) |
<a href="https://arxiv.org/abs/1905.07830" target="_blank">HellaSwag</a> (10-shot) |
<a href="https://arxiv.org/abs/2009.03300" target="_blank">MMLU</a> (5-shot) |
<a href="https://arxiv.org/abs/2109.07958" target="_blank">TruthfulQA</a> (0-shot)
</p>
"""
DISCLAIMER = """## Disclaimer
I did not verify the (translation) quality of the benchmarks. If you encounter issues with the benchmark contents, please contact the original authors.
I am aware that benchmarking models on *translated* data is not ideal. However, for Dutch there are no other options for generative models at the moment. Because the benchmarks were translated automatically, translationese effects may occur: the translations may not be fluent Dutch, or may still contain artifacts of the English source (word order, literal translations, certain vocabulary items). That may give an unfair advantage to the non-Dutch models: Dutch is closely related to English, so models trained mostly on English may handle automatically translated Dutch that retains English properties reasonably well. Had the benchmarks been translated manually or, even better, created from scratch in Dutch, those non-Dutch models might have a harder time. Or maybe not; we cannot know for sure until we have high-quality, manually crafted benchmarks for Dutch.
Another shortcoming is that we do not calculate significance scores or confidence intervals. When results are close together in the leaderboard, I therefore urge caution when interpreting the model ranks.
If you have any suggestions for other Dutch benchmarks, please [let me know](https://twitter.com/BramVanroy) so I can add them!
"""
CREDIT = f"""## Credit
This leaderboard has borrowed heavily from the following sources:
- Datasets (AI2_ARC, HellaSwag, MMLU, TruthfulQA)
- Evaluation code (EleutherAI's lm_evaluation_harness repo)
- Leaderboard code (HuggingFaceH4's open_llm_leaderboard repo)
- The multilingual version of the leaderboard (uonlp's open_multilingual_llm_leaderboard repo)
"""
CITATION = """## Citation
If you use or cite the Dutch benchmark results or this specific leaderboard page, please cite the following paper:
Vanroy, B. (2023). *Language Resources for Dutch Large Language Modelling*. [https://arxiv.org/abs/2312.12852](https://arxiv.org/abs/2312.12852)
```bibtex
@article{vanroy2023language,
    title={Language Resources for {Dutch} Large Language Modelling},
    author={Vanroy, Bram},
    journal={arXiv preprint arXiv:2312.12852},
    year={2023}
}
```
If you use the multilingual benchmarks, please cite the following paper:
```bibtex
@misc{lai2023openllmbenchmark,
    title={Open Multilingual {LLM} Evaluation Leaderboard},
    author={Viet Lai and Nghia Trung Ngo and Amir Pouran Ben Veyseh and Franck Dernoncourt and Thien Huu Nguyen},
    year={2023}
}
```
"""