
TMMLU+: Large-Scale Traditional Chinese Massive Multitask Language Understanding

Join us to work on multimodal LLMs: https://ikala.ai/recruit/

We present TMMLU+, a Traditional Chinese massive multitask language understanding dataset. TMMLU+ is a multiple-choice question-answering benchmark covering 66 subjects, ranging from elementary-school to professional level.

TMMLU+ is six times larger than its predecessor, TMMLU, and has a more balanced subject distribution. We include benchmark results on TMMLU+ for closed-source models and for 20 open-weight Chinese large language models with 1.8B to 72B parameters. The results show that models targeting Traditional Chinese still lag behind the major models trained on Simplified Chinese.

from datasets import load_dataset

# One configuration per subject; 66 subjects in total.
task_list = [
    'engineering_math', 'dentistry', 'traditional_chinese_medicine_clinical_medicine', 'clinical_psychology', 'technical', 'culinary_skills', 'mechanical', 'logic_reasoning', 'real_estate',
    'general_principles_of_law', 'finance_banking', 'anti_money_laundering', 'ttqav2', 'marketing_management', 'business_management', 'organic_chemistry', 'advance_chemistry',
    'physics', 'secondary_physics', 'human_behavior', 'national_protection', 'jce_humanities', 'politic_science', 'agriculture', 'official_document_management',
    'financial_analysis', 'pharmacy', 'educational_psychology', 'statistics_and_machine_learning', 'management_accounting', 'introduction_to_law', 'computer_science', 'veterinary_pathology',
    'accounting', 'fire_science', 'optometry', 'insurance_studies', 'pharmacology', 'taxation', 'trust_practice', 'geography_of_taiwan', 'physical_education', 'auditing', 'administrative_law',
    'education_(profession_level)', 'economics', 'veterinary_pharmacology', 'nautical_science', 'occupational_therapy_for_psychological_disorders',
    'basic_medical_science', 'macroeconomics', 'trade', 'chinese_language_and_literature', 'tve_design', 'junior_science_exam', 'junior_math_exam', 'junior_chinese_exam',
    'junior_social_studies', 'tve_mathematics', 'tve_chinese_language', 'tve_natural_sciences', 'junior_chemistry', 'music', 'education', 'three_principles_of_people',
    'taiwanese_hokkien'
]

for task in task_list:
    # Load each subject once; the 'train' split holds the few-shot dev examples.
    ds = load_dataset('ikala/tmmluplus', task)
    dev = ds['train']
    val = ds['validation']
    test = ds['test']

Each subject's splits are standard datasets.Dataset objects (the number of rows varies by subject):

print(test)
>> Dataset({
    features: ['question', 'A', 'B', 'C', 'D', 'answer'],
    num_rows: 11
})
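
A single row is a plain dict containing the question text, the four options, and the answer key. For example, this accounting-style question appears in the dataset preview (the preview does not indicate which subject configuration it belongs to):

{'question': '萬萬公司 X6 年與損益相關之資訊如下,推銷費用$20,000,兌換淨利$40,000,停業單位損失$30,000,銷貨收入$280,000,銷貨成本$200,000,試問萬萬公司本期淨利為何 (忽略所得稅影響)?',
 'A': '$70,000',
 'B': '$10,000',
 'C': '$100,000',
 'D': '$80,000',
 'answer': 'A'}

(In English: Wanwan Co. reports selling expenses of $20,000, a net exchange gain of $40,000, a loss from discontinued operations of $30,000, sales revenue of $280,000, and cost of goods sold of $200,000; ignoring income tax, net income is 280,000 - 200,000 - 20,000 + 40,000 - 30,000 = $70,000, so the answer is A.)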

Statistics for the four categories: STEM, Social Sciences, Humanities, and Other

Category Test Dev Validation
STEM 3458 70 385
Social Sciences 5958 90 665
Humanities 1763 35 197
Other (Business, Health, Misc.) 8939 135 995
Total 20118 330 2242
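
The split totals above can be reproduced by summing num_rows over the 66 subject configurations, using the task_list defined earlier. A minimal sanity-check sketch (recall that the 'train' split holds the Dev examples):

from collections import Counter
from datasets import load_dataset

counts = Counter()
for task in task_list:
    ds = load_dataset('ikala/tmmluplus', task)
    counts['dev'] += ds['train'].num_rows        # Dev column
    counts['validation'] += ds['validation'].num_rows
    counts['test'] += ds['test'].num_rows

print(counts)  # expected: test=20118, validation=2242, dev=330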

Benchmark results with 0-shot direct prompting (the Average column is the unweighted mean of the four category scores)

Model STEM Social Sciences Humanities Other Average
Gemini-1.5-pro 66.18 70.29 61.84 60.30 64.65
Qwen/Qwen-72B 61.12 71.65 63.00 61.31 64.27
gpt-4-0613 60.36 67.36 56.03 57.62 60.34
Qwen-max 59.92 66.95 57.43 56.48 60.20
Qwen/Qwen-72B-Chat 55.15 66.20 55.65 57.19 58.55
Qwen/Qwen-14B 46.94 56.69 49.43 48.81 50.47
Gemini-pro 45.38 57.29 48.80 48.21 49.92
01-ai/Yi-34B-Chat 40.24 56.77 53.99 47.58 49.64
Gemini-1.5-flash 53.47 53.42 42.99 46.56 49.11
Reka Flash 45.26 52.91 46.31 43.76 47.06
Qwen/Qwen-14B-Chat 43.86 53.29 44.78 45.13 46.77
Qwen/Qwen1.5-14B-Chat 39.65 52.76 43.90 44.95 45.31
01-ai/Yi-6B-Chat 39.62 50.24 44.44 44.26 44.64
Claude-1.3 42.65 49.33 42.16 44.14 44.57
MediaTek-Research/Breeze-7B-Instruct-v0_1 36.46 48.38 45.11 40.75 42.67
gpt-3.5-turbo-0613 41.56 46.72 36.73 42.03 41.76
CausalLM/14B 39.83 44.50 39.61 41.97 41.48
Skywork/Skywork-13B-base 36.93 47.27 41.04 40.10 41.33
Claude-3-opus 42.95 45.49 35.79 40.24 41.12
Qwen/Qwen-7B 37.53 45.48 38.09 38.96 40.01
meta-llama/Llama-3-70b-chat-hf 34.44 47.02 37.50 39.51 39.62
Qwen/Qwen-7B-Chat 33.32 44.64 40.27 39.89 39.53
vivo-ai/BlueLM-7B-Base 33.94 41.52 37.38 38.74 37.90
baichuan-inc/Baichuan2-13B-Chat 29.64 43.73 37.36 39.88 37.65
Qwen/Qwen-1_8B 32.65 38.95 38.34 35.27 36.30
Claude-2 39.65 39.09 28.59 37.47 36.20
THUDM/chatglm3-6b 31.05 39.31 35.64 35.60 35.40
deepseek-ai/deepseek-llm-7b-chat 29.82 42.29 34.24 34.31 35.17
CausalLM/7B 31.03 38.17 35.87 35.39 35.11
Azure99/blossom-v3_1-mistral-7b 32.80 36.91 32.36 34.53 34.15
google/gemma-7b-it 31.89 35.70 34.00 33.79 33.84
Reka Edge 30.02 39.40 31.84 32.36 33.41
microsoft/Orca-2-13b 24.69 39.18 33.60 31.99 32.37
Qwen/Qwen-1_8B-Chat 26.60 36.36 31.81 31.96 31.68
meta-llama/Llama-3-8b-chat-hf 31.52 34.19 28.91 31.79 31.60
TigerResearch/tigerbot-13b-chat-v3 24.73 29.63 25.72 27.22 26.82
hongyin/mistral-7b-80k 24.26 23.76 22.56 24.57 23.79
deepseek-ai/deepseek-llm-67b-chat 19.10 26.06 21.51 21.77 22.11
yentinglin/Taiwan-LLM-13B-v2.0-chat 18.53 27.65 17.77 21.49 21.36
GeneZC/MiniChat-3B 17.66 23.35 22.71 20.34 21.02
LinkSoul/Chinese-Llama-2-7b 16.55 18.39 12.97 16.13 16.01
yentinglin/Taiwan-LLM-7B-v2.1-chat 14.99 16.23 15.00 16.22 15.61
Claude-instant-1 12.52 17.13 15.10 13.57 14.58
FlagAlpha/Atom-7B 5.60 13.57 7.71 11.84 9.68

Results obtained via ievals (settings: 0-shot direct answering)
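
For reference, random guessing on these four-option questions scores 25%. The sketch below illustrates what a 0-shot direct-answering loop could look like; the Chinese prompt template and the ask_model callable are assumptions for illustration, not the exact ievals implementation:

def evaluate(test_split, ask_model):
    # ask_model: hypothetical callable mapping a prompt string to the model's reply text
    correct = 0
    for row in test_split:
        prompt = (
            f"問題:{row['question']}\n"
            f"A. {row['A']}\nB. {row['B']}\nC. {row['C']}\nD. {row['D']}\n"
            "答案:"
        )
        reply = ask_model(prompt)
        # Take the first A/B/C/D character in the reply as the predicted choice
        pred = next((ch for ch in reply if ch in 'ABCD'), None)
        correct += (pred == row['answer'])
    return correct / len(test_split)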

Citation

@article{ikala2024improved,
  title={An Improved Traditional Chinese Evaluation Suite for Foundation Model},
  author={Tam, Zhi-Rui and Pai, Ya-Ting and Lee, Yen-Wei and Cheng, Sega and Shuai, Hong-Han},
  journal={arXiv preprint arXiv:2403.01858},
  year={2024}
}