Update README.md
README.md
CHANGED
@@ -12,14 +12,14 @@ Breeze-7B is a language model that builds upon the foundation of [Mistral-7B-v0.
 
 [Breeze-7B-Base-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) introduces an expanded vocabulary with an additional 30,000 Traditional Chinese tokens and
 is pre-trained on a substantial dataset of 250GB of Traditional Chinese content.
-With the expanded vocabulary, the base model operates at twice the inference speed for Traditional Chinese characters compared to Mistral-7B. [See [Inference Performance](
+With the expanded vocabulary, the base model operates at twice the inference speed for Traditional Chinese characters compared to Mistral-7B. [See [Inference Performance](#inference-performance).]
 This achievement marks a significant milestone as it is the first instance of vocabulary expansion in a model tailored for Traditional Chinese.
 
 [Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) derives from the base model Breeze-7B-Base-v0.1
 and has undergone supervised fine-tuning with over 1 million instances to
 sharpen its capabilities. This fine-tuned model demonstrates impressive performance in benchmarks for both English and Traditional Chinese, surpassing the results of
 Taiwan-LLM-7B-v2.1-chat, Taiwan-LLM-13B-v2.0-chat, and Qwen-7B-chat in Traditional Chinese assessments. It also excels in some benchmarks against Yi-6B-Chat.
-In English evaluations, Breeze-7B-Instruct-v0.1 shows comparable results to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See [Chat Model Performance](
+In English evaluations, Breeze-7B-Instruct-v0.1 shows comparable results to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See [Chat Model Performance](#chat-model-performance).]
 
 
 [Breeze-7B-Instruct-64k-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) is an extension of Breeze-7B-Instruct-v0.1
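
A note on the inference-speed claim in the edited passage: the speedup comes from the expanded vocabulary encoding Traditional Chinese text in fewer tokens, so each sentence takes fewer autoregressive decoding steps. The following minimal sketch (not part of this diff; it assumes the `transformers` library and the public model IDs linked above, and the sample sentence is an arbitrary illustration) compares token counts between the two tokenizers:

```python
# Minimal sketch: count how many tokens each tokenizer needs for the
# same Traditional Chinese sentence. Fewer tokens per sentence means
# fewer decoding steps, hence faster generation.
from transformers import AutoTokenizer

breeze = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Base-v0.1")
mistral = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

text = "今天天氣真好，我們去公園散步吧。"  # arbitrary Traditional Chinese sample

# Breeze's extra 30,000 Traditional Chinese tokens should produce a
# noticeably shorter sequence than Mistral's original vocabulary does.
print("Breeze tokens: ", len(breeze.tokenize(text)))
print("Mistral tokens:", len(mistral.tokenize(text)))
```

If the Breeze count is roughly half the Mistral count on typical Traditional Chinese text, that is consistent with the "twice the inference speed" figure the README cites.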