Update README.md

# Model Card for Breeze-7B-Instruct-v0.1

Breeze-7B is a language model that builds upon the foundation of Mistral-7B, specifically enhanced for Traditional Chinese.

[Breeze-7B-Base-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) introduces an expanded vocabulary with an additional 30,000 Traditional Chinese tokens and is pre-trained on a substantial dataset of 250GB of Traditional Chinese content. With the expanded vocabulary, the base model operates at twice the inference speed for Traditional Chinese characters compared to Mistral-7B. [See Inference Performance.] This marks a significant milestone as the first instance of vocabulary expansion in a model tailored for Traditional Chinese.
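
The efficiency claim can be sanity-checked directly from the tokenizers. Below is a minimal sketch, assuming both repositories are accessible from the Hub; the sample sentence is an arbitrary illustration, not a benchmark.

```python
from transformers import AutoTokenizer

# Tokenizer of the expanded-vocabulary base model vs. the original Mistral-7B tokenizer.
breeze_tok = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Base-v0.1")
mistral_tok = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")

# Arbitrary Traditional Chinese sentence, used only for illustration
# ("Artificial intelligence is changing the way we live and work.").
text = "人工智慧正在改變我們的生活與工作方式。"

breeze_len = len(breeze_tok(text, add_special_tokens=False)["input_ids"])
mistral_len = len(mistral_tok(text, add_special_tokens=False)["input_ids"])

# Fewer tokens for the same text means fewer decoding steps at generation time.
print(f"Breeze-7B tokens:  {breeze_len}")
print(f"Mistral-7B tokens: {mistral_len}")
```

Fewer tokens per Traditional Chinese character is what drives the quoted speedup, since decoding cost grows with the number of generated tokens.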

[Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) derives from the base model [Breeze-7B-Base-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Base-v0.1) and has undergone supervised fine-tuning on over 1 million instances to sharpen its capabilities. This fine-tuned model demonstrates impressive performance in benchmarks for both English and Traditional Chinese, surpassing Taiwan-LLM-7B-v2.1-chat, Taiwan-LLM-13B-v2.0-chat, and Qwen-7B-chat in Traditional Chinese assessments. It also excels in some benchmarks against Yi-6B-Chat. In English evaluations, Breeze-7B-Instruct-v0.1 shows comparable results to Mistral-7B-Instruct-v0.1 on the MMLU and MT-Bench benchmarks. [See Chat Model Performance.]
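
For orientation, here is a minimal generation sketch with `transformers`. It assumes the instruct repository ships a chat template usable with `apply_chat_template`; the prompt, dtype, and decoding settings are illustrative, not an official recipe.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MediaTek-Research/Breeze-7B-Instruct-v0.1"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # assumes a GPU with bfloat16 support
    device_map="auto",
)

# One-turn chat; the question asks for a short introduction to Taipei 101.
messages = [{"role": "user", "content": "請用繁體中文簡短介紹台北101。"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=False)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```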

[Breeze-7B-Instruct-64k-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) is an extension of [Breeze-7B-Instruct-v0.1](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-v0.1) that enables a 64k context length, which is equivalent to 88k Traditional Chinese characters. With minimal sacrifice in performance on the regular benchmarks, Breeze-7B-Instruct-64k-v0.1 can solve tasks such as question answering and summarization on document-level inputs. [See Long-context Performance.]
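
Because the usable input is bounded by the 64k-token window rather than directly by character count, it can help to check a document's token length before sending it for question answering or summarization. A minimal sketch; the file name, answer budget, and the assumption of a 64*1024-token window are all illustrative.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-64k-v0.1")

CONTEXT_WINDOW = 64 * 1024  # assumed nominal 64k-token window
ANSWER_BUDGET = 1024        # illustrative room reserved for the generated answer

# Hypothetical document-level input, e.g. a long report to summarize.
with open("report.txt", encoding="utf-8") as f:
    document = f.read()

n_tokens = len(tokenizer(document, add_special_tokens=False)["input_ids"])

if n_tokens <= CONTEXT_WINDOW - ANSWER_BUDGET:
    print(f"{n_tokens} tokens: fits in the context window with room for the answer.")
else:
    print(f"{n_tokens} tokens: too long; truncate or split the document first.")
```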

*A project by the members (in alphabetical order): Chan-Jan Hsu 許湛然, Chang-Le Liu 劉昶樂, Feng-Ting Liao 廖峰挺, Po-Chun Hsu 許博竣, Yi-Chang Chen 陳宜昌, and the supervisor Da-Shan Shiu 許大山.*

All inferences run on 2 RTX A6000 GPUs (using `vllm`, with a tensor-parallel size of 2).

| Taiwan-LLM-13B-v2.0-base | 36.80 | 2.2k |
| Yi-34B | 43.71 | 4.5k |
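
A sketch of this serving setup is shown below, assuming `vllm` is installed and two GPUs are visible; the model choice, prompt, and sampling parameters are illustrative and not the exact configuration behind the reported numbers.

```python
from vllm import LLM, SamplingParams

# Two-way tensor parallelism across the two GPUs, matching the setup above.
llm = LLM(
    model="MediaTek-Research/Breeze-7B-Base-v0.1",
    tensor_parallel_size=2,
)

# Greedy decoding of a fixed number of tokens; the prompt is an arbitrary
# Traditional Chinese opener ("Taiwan's night markets are famous for").
sampling = SamplingParams(temperature=0.0, max_tokens=128)
outputs = llm.generate(["台灣的夜市以"], sampling)

for request_output in outputs:
    print(request_output.outputs[0].text)
```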

## Long-context Performance

## Examples