Update README.md
README.md
@@ -17,7 +17,7 @@ It is suitable for use if you have substantial fine-tuning data to tune it for y
 [Breeze-7B-Instruct-64k](https://huggingface.co/MediaTek-Research/Breeze-7B-Instruct-64k-v0.1) is a slightly modified version of
 Breeze-7B-Instruct to enable a 64k-token context length. Roughly speaking, that is equivalent to 88k Traditional Chinese characters.

-The current release version of Breeze is v0.1.
+The current release version of Breeze-7B is v0.1.

 Practicality-wise:
 - Breeze expands the original vocabulary with additional 30,000 Traditional Chinese tokens. With the expanded vocabulary, everything else being equal, Breeze operates at twice the inference speed for Traditional Chinese to Mistral-7B and Llama 7B. [See [Inference Performance](#inference-performance).]
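For context on the vocabulary-expansion bullet in the diff above, here is a minimal sketch of how the token-count difference behind the speed claim can be observed with the `transformers` library. The Breeze model ID comes from the URL in the diff; the Mistral model ID and the sample sentence are assumptions for illustration, not part of this change.

```python
# Sketch: tokenize the same Traditional Chinese text with both tokenizers
# and compare token counts. Fewer tokens per sentence means fewer decoding
# steps, which is where the roughly 2x inference-speed claim for
# Traditional Chinese comes from.
from transformers import AutoTokenizer

breeze = AutoTokenizer.from_pretrained("MediaTek-Research/Breeze-7B-Instruct-64k-v0.1")
mistral = AutoTokenizer.from_pretrained("mistralai/Mistral-7B-v0.1")  # assumed baseline ID

text = "今天天氣真好，我們去公園散步吧。"  # illustrative Traditional Chinese sentence

n_breeze = len(breeze.tokenize(text))
n_mistral = len(mistral.tokenize(text))
print(f"Breeze tokens: {n_breeze}, Mistral tokens: {n_mistral}")
```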