---
license: other
license_name: creative-commons-by-nc
task_categories:
- question-answering
language:
- zh
tags:
- traditional chinese
- finance
- medical
- taiwan
- benchmark
- zh-tw
- zh-hant
pretty_name: tmmlu++
size_categories:
- 100K<n<1M
---
# TMMLU+: Large-scale Traditional Chinese Massive Multitask Language Understanding
We present TMMLU+, a Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering benchmark covering 66 subjects, from elementary to professional level.

The TMMLU+ dataset is six times larger than its predecessor, TMMLU, and its subjects are more evenly balanced. We include benchmark results on TMMLU+ for closed-source models and 20 open-weight Chinese large language models with 1.8B to 72B parameters. The results show that models tuned for Traditional Chinese still lag behind major models trained on Simplified Chinese.
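The dataset ships as CSV and can be loaded with the 🤗 `datasets` library. Below is a minimal sketch, assuming the data is hosted under a repo ID like `ikala/tmmlu2` with one config per subject and columns `question`, `A`-`D`, and `answer`; `engineering_math` is just an example subject name, so check the dataset files for the exact repo, config, and split names:

```python
from datasets import load_dataset

# Assumed repo and subject IDs -- replace with the actual names from the dataset page.
REPO_ID = "ikala/tmmlu2"
SUBJECT = "engineering_math"

# Each subject is assumed to be a separate config whose rows hold a question,
# four options (columns A-D), and the gold answer letter.
data = load_dataset(REPO_ID, SUBJECT)

# Peek at a few rows, assuming a `test` split exists.
for row in data["test"].select(range(3)):
    print(row["question"])
    for letter in "ABCD":
        print(f"  {letter}. {row[letter]}")
    print("answer:", row["answer"])
```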
## Benchmark on direct prompting
Model | STEM | Social Science | Humanities | Other | Average |
---|---|---|---|---|---|
Qwen/Qwen-72B | 61.12 | 71.65 | 63.00 | 61.31 | 64.27 |
Qwen/Qwen-14B | 46.94 | 56.69 | 49.43 | 48.81 | 50.47 |
Gemini-pro | 45.38 | 57.29 | 48.80 | 48.21 | 49.92 |
01-ai/Yi-34B-Chat | 40.24 | 56.77 | 53.99 | 47.58 | 49.64 |
Qwen/Qwen-14B-Chat | 43.86 | 53.29 | 44.78 | 45.13 | 46.77 |
01-ai/Yi-6B-Chat | 39.62 | 50.24 | 44.44 | 44.26 | 44.64 |
Claude-1.3 | 42.65 | 49.33 | 42.16 | 44.14 | 44.57 |
gpt-3.5-turbo-0613 | 41.56 | 46.72 | 36.73 | 42.03 | 41.76 |
CausalLM/14B | 39.83 | 44.50 | 39.61 | 41.97 | 41.48 |
Skywork/Skywork-13B-base | 36.93 | 47.27 | 41.04 | 40.10 | 41.33 |
Qwen/Qwen-7B | 37.53 | 45.48 | 38.09 | 38.96 | 40.01 |
Qwen/Qwen-7B-Chat | 33.32 | 44.64 | 40.27 | 39.89 | 39.53 |
vivo-ai/BlueLM-7B-Base | 33.94 | 41.52 | 37.38 | 38.74 | 37.90 |
baichuan-inc/Baichuan2-13B-Chat | 29.64 | 43.73 | 37.36 | 39.88 | 37.65 |
Qwen/Qwen-1_8B | 32.65 | 38.95 | 38.34 | 35.27 | 36.30 |
Claude-2 | 39.65 | 39.09 | 28.59 | 37.47 | 36.20 |
THUDM/chatglm3-6b | 31.05 | 39.31 | 35.64 | 35.60 | 35.40 |
deepseek-ai/deepseek-llm-7b-chat | 29.82 | 42.29 | 34.24 | 34.31 | 35.17 |
CausalLM/7B | 31.03 | 38.17 | 35.87 | 35.39 | 35.11 |
Azure99/blossom-v3_1-mistral-7b | 32.80 | 36.91 | 32.36 | 34.53 | 34.15 |
Qwen/Qwen-1_8B-Chat | 26.60 | 36.36 | 31.81 | 31.96 | 31.68 |
TigerResearch/tigerbot-13b-chat-v3 | 24.73 | 29.63 | 25.72 | 27.22 | 26.82 |
hongyin/mistral-7b-80k | 24.26 | 23.76 | 22.56 | 24.57 | 23.79 |
yentinglin/Taiwan-LLM-13B-v2.0-chat | 18.53 | 27.65 | 17.77 | 21.49 | 21.36 |
LinkSoul/Chinese-Llama-2-7b | 16.55 | 18.39 | 12.97 | 16.13 | 16.01 |
yentinglin/Taiwan-LLM-7B-v2.1-chat | 14.99 | 16.23 | 15.00 | 16.22 | 15.61 |
FlagAlpha/Atom-7B | 5.60 | 13.57 | 7.71 | 11.84 | 9.68 |
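For reference, here is a minimal sketch of what a direct-prompting evaluation over this dataset looks like: each question and its four options are rendered into a single zero-shot prompt, the model's first A-D letter is taken as its choice, and accuracy is averaged per subject. The prompt template and the `ask_model` callable are illustrative assumptions, not the exact harness behind the table above:

```python
import re

def build_prompt(row):
    """Render a multiple-choice row as a zero-shot direct prompt."""
    options = "\n".join(f"{c}. {row[c]}" for c in "ABCD")
    return f"問題：{row['question']}\n{options}\n答案："

def first_choice(text):
    """Take the first A-D letter in the completion as the model's choice."""
    match = re.search(r"[ABCD]", text)
    return match.group(0) if match else None

def accuracy(rows, ask_model):
    """Score one subject; `ask_model` is any callable mapping prompt -> completion."""
    correct = sum(first_choice(ask_model(build_prompt(r))) == r["answer"] for r in rows)
    return correct / len(rows)
```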