Update README.md
README.md CHANGED
@@ -41,8 +41,7 @@ This repository provides large language models developed by [LLM-jp](https://llm
 |**Pre-trained models**|
 | [llm-jp-13b-v1.0](https://huggingface.co/llm-jp/llm-jp-13b-v1.0) |
 | [llm-jp-1.3b-v1.0](https://huggingface.co/llm-jp/llm-jp-1.3b-v1.0) |
-Checkpoints format:
-
+Checkpoints format: Hugging Face Transformers (Megatron-DeepSpeed format models are available [here](https://huggingface.co/llm-jp/llm-jp-13b-v1.0-mdsfmt))
 
 ## Required Libraries and Their Versions
 
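As a rough sketch of what the Hugging Face Transformers checkpoint format noted above implies for users (this snippet is not taken from the README; the `torch_dtype` and `device_map` settings are illustrative assumptions):

```python
# Rough sketch: loading the Hugging Face Transformers checkpoint named above.
# The dtype and device settings are illustrative assumptions, not from the README.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "llm-jp/llm-jp-13b-v1.0"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,  # assumption: half precision to keep the 13B weights manageable
    device_map="auto",          # assumption: requires the `accelerate` package
)
```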
@@ -95,8 +94,8 @@ print(tokenizer.decode(output))
 
 ## Tokenizer
 The tokenizer of this model is based on [huggingface/tokenizers](https://github.com/huggingface/tokenizers) Unigram byte-fallback model.
-The
-Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-ja-tokenizer` for
+The vocabulary entries were converted from [`llm-jp-tokenizer v2.1 (50k)`](https://github.com/llm-jp/llm-jp-tokenizer/releases/tag/v2.1).
+Please refer to the [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-jp-tokenizer` for details on the vocabulary construction procedure.
 - **Model:** Hugging Face Fast Tokenizer using Unigram byte-fallback model which requires `tokenizers>=0.14.0`
 - **Training algorithm:** SentencePiece Unigram byte-fallback
 - **Training data:** A subset of the datasets for model pre-training
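As a rough illustration of the Unigram byte-fallback behavior described in this hunk (the code and example string below are assumptions, not part of the README; the `tokenizers>=0.14.0` requirement comes from the bullet above):

```python
# Rough sketch: round-tripping text through the Unigram byte-fallback tokenizer.
# Requires tokenizers>=0.14.0, as stated in the bullet list above.
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("llm-jp/llm-jp-13b-v1.0")

text = "自然言語処理"  # arbitrary Japanese example; pieces missing from the vocabulary fall back to byte tokens
ids = tokenizer.encode(text)
print(ids)
print(tokenizer.decode(ids))  # mirrors the `tokenizer.decode(output)` call in the README's usage snippet
```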
@@ -107,7 +106,7 @@ Please refer to [README.md](https://github.com/llm-jp/llm-jp-tokenizer) of `llm-
 
 ### Pre-training
 
-The models have been pre-trained
+The models have been pre-trained using a blend of the following datasets.
 
 | Language | Dataset | Tokens|
 |:---:|:---:|:---:|
@@ -117,7 +116,8 @@ The models have been pre-trained on approximately 287.5B tokens, sourced from a
 ||[The Pile](https://huggingface.co/datasets/EleutherAI/pile)|135B
 |Codes|[The Stack](https://huggingface.co/datasets/bigcode/the-stack)|10B
 
-
+The pre-training was conducted continuously over a total of 10 folds of non-overlapping data, each consisting of approximately 27-28B tokens.
+We finalized the pre-training with an additional 27B tokens of (potentially) high-quality data obtained from the same source datasets listed above that were used for the 10-fold data.
 
 ### Instruction tuning
 
@@ -151,4 +151,4 @@ llm-jp(at)nii.ac.jp
 ## Model Card Authors
 *The names are listed in alphabetical order.*
 
-
+Hirokazu Kiyomaru, Hiroshi Matsuda, Jun Suzuki, Namgi Han, Saku Sugawara, Shota Sasaki, Shuhei Kurita, Taishi Nakamura, Takumi Okamoto.