YC-Chen committed on
Commit e938331
1 Parent(s): a039286

Update README.md

Files changed (1)
README.md +1 -1
README.md CHANGED
@@ -10,7 +10,7 @@ resulting in a doubling of the original tokenizer's inference speed.
 To the best of our knowledge, this is the first work on vocabulary expansion in TC.
 This model uses 250GB of TC data for continued pre-training and further uses over 1M instances for fine-tuning.
 Breeze-7B-Instruct-v0.1 performs well on both EN and TC benchmarks.
-This model outperforms Taiwan-LLM-7B-v2.1-chat, Taiwan-LLM-13B-v2.0-chat, and Yi-6B-Chat on the most TC benchmarks we tested
+This model outperforms Taiwan-LLM-7B-v2.1-chat, Taiwan-LLM-13B-v2.0-chat, and Yi-6B-Chat on all TC benchmarks
 and is comparable with Mistral-7B-Instruct-v0.1 on MMLU and MT-Bench in English.

 *A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*
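For the hunk header's claim that vocabulary expansion doubled the original tokenizer's inference speed, here is a minimal sketch (not part of this commit) of how one could compare token counts on Traditional Chinese text; the repo ids are assumptions inferred from the model names in the diff:

```python
# A minimal sketch, not part of this commit: it illustrates the tokenizer
# claim in the hunk header, i.e. that an expanded Traditional-Chinese (TC)
# vocabulary encodes TC text in fewer tokens than the base Mistral tokenizer,
# which is where the decoding speedup comes from.
# The repo ids below are assumptions inferred from the model names above.
from transformers import AutoTokenizer

text = "語言模型在繁體中文上的表現"  # sample Traditional Chinese sentence

base = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")
breeze = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-v0_1")

# Fewer tokens per sentence means proportionally fewer decoding steps.
print("Mistral-7B tokens:", len(base.encode(text)))
print("Breeze-7B tokens: ", len(breeze.encode(text)))
```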