Update README.md

Updated model info

README.md

# 🚀 Falcon2-11B

**Falcon2-11B is an 11B-parameter causal decoder-only model built by [TII](https://www.tii.ae) and trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. The model is made available under the TII Falcon License 2.0, a permissive Apache 2.0-based software license which includes an acceptable use policy that promotes the responsible use of AI.**

*Paper coming soon 😊.*

    pipeline = transformers.pipeline(
        model=model,
        tokenizer=tokenizer,
        torch_dtype=torch.bfloat16,
    )
    sequences = pipeline(
        "Can you explain the concepts of Quantum Computing?",
### Model Description

- **Developed by:** [https://www.tii.ae](https://www.tii.ae)
- **Model type:** Causal decoder-only
- **Language(s) (NLP):** English, German, Spanish, French, Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish
- **License:** TII Falcon License 2.0

### Training Data

Falcon2-11B was trained on over 5,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb), a high-quality filtered and deduplicated web dataset, which we enhanced with curated corpora. Training followed a four-stage strategy: the first three stages focused on increasing the context length, from 2048 to 4096 and finally to 8192 tokens, while the last stage aimed to further enhance performance using only high-quality data.

Overall, the data sources included RefinedWeb-English, RefinedWeb-Europe (cs, de, es, fr, it, nl, pl, pt, ro, sv), high-quality technical data, code data, and conversational data extracted from public sources.

The training stages were as follows:

| **Stage** | **Context length** | **Tokens** |
|-----------|--------------------|------------|
| Stage 1   | 2048               | 4500 B     |
| Stage 2   | 4096               | 250 B      |
| Stage 3   | 8192               | 250 B      |
| Stage 4   | 8192               | 500 B      |

The data was tokenized with the Falcon-[7B](https://huggingface.co/tiiuae/falcon-7b)/[11B](https://huggingface.co/tiiuae/falcon-11B) tokenizer.
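
As a quick way to inspect that tokenizer, it can be loaded straight from the 11B repository with the standard `transformers` API (a minimal sketch; the sample sentence is arbitrary):

```python
from transformers import AutoTokenizer

# Load the Falcon tokenizer from the Hub and look at how a sample sentence is split.
tokenizer = AutoTokenizer.from_pretrained("tiiuae/falcon-11B")
ids = tokenizer("Falcon2-11B was trained on over 5,000B tokens of RefinedWeb.")["input_ids"]
print(len(ids), tokenizer.convert_ids_to_tokens(ids)[:10])
```
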

### Training Procedure

Falcon2-11B was trained on 1024 A100 40GB GPUs for the majority of the training, using a 3D parallelism strategy (TP=8, PP=1, DP=128) combined with ZeRO and FlashAttention-2.
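
The three parallelism degrees multiply to the GPU count (8 × 1 × 128 = 1024). The rank-to-coordinate mapping below is a generic illustration of such a 3D layout, not Gigatron's actual implementation:

```python
# Illustrative only: the TP/PP/DP degrees come from the card; the ordering
# (tensor-parallel fastest, then pipeline, then data-parallel) is an assumption.
TP, PP, DP = 8, 1, 128
WORLD_SIZE = TP * PP * DP
assert WORLD_SIZE == 1024

def parallel_coords(rank):
    """Map a global GPU rank to (data, pipeline, tensor) parallel coordinates."""
    tp = rank % TP
    pp = (rank // TP) % PP
    dp = rank // (TP * PP)
    return dp, pp, tp

print(parallel_coords(0))     # -> (0, 0, 0)
print(parallel_coords(1023))  # -> (127, 0, 7)
```
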

#### Training Hyperparameters

The architecture is broadly adapted from the GPT-3 paper ([Brown et al., 2020](https://arxiv.org/abs/2005.14165)), with the following differences:

* **Positional embeddings:** rotary ([Su et al., 2021](https://arxiv.org/abs/2104.09864));
* **Attention:** multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)) and FlashAttention-2 ([Dao, 2023](https://arxiv.org/abs/2307.08691));
* **Decoder-block:** parallel attention/MLP (see the sketch after this list).
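
To make the multiquery and parallel-block differences concrete, here is a toy PyTorch sketch: all query heads share a single key/value head, and the attention and MLP branches are applied in parallel on the same normalized input. It illustrates only these two ideas; rotary position embeddings, FlashAttention-2 kernels, and Falcon's exact layer sizes and normalization placement are omitted.

```python
import math
import torch
import torch.nn as nn

class MultiQueryAttention(nn.Module):
    """Toy multiquery self-attention: many query heads, one shared key/value head."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.q_proj = nn.Linear(d_model, d_model)
        self.kv_proj = nn.Linear(d_model, 2 * self.d_head)  # single K and single V head
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, x):
        b, t, _ = x.shape
        q = self.q_proj(x).view(b, t, self.n_heads, self.d_head).transpose(1, 2)
        k, v = self.kv_proj(x).split(self.d_head, dim=-1)
        k, v = k.unsqueeze(1), v.unsqueeze(1)  # broadcast the shared head over all query heads
        att = (q @ k.transpose(-2, -1)) / math.sqrt(self.d_head)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        att = att.masked_fill(causal, float("-inf")).softmax(dim=-1)
        y = (att @ v).transpose(1, 2).reshape(b, t, -1)
        return self.out_proj(y)

class ParallelBlock(nn.Module):
    """Parallel attention/MLP: both branches read the same normalized input and are summed."""
    def __init__(self, d_model, n_heads):
        super().__init__()
        self.norm = nn.LayerNorm(d_model)
        self.attn = MultiQueryAttention(d_model, n_heads)
        self.mlp = nn.Sequential(
            nn.Linear(d_model, 4 * d_model), nn.GELU(), nn.Linear(4 * d_model, d_model)
        )

    def forward(self, x):
        h = self.norm(x)
        return x + self.attn(h) + self.mlp(h)

x = torch.randn(2, 16, 256)
print(ParallelBlock(256, 8)(x).shape)  # torch.Size([2, 16, 256])
```
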

| **Hyperparameter** | **Value** | **Comment** |
|--------------------|-----------|-------------|

#### Software

Falcon2-11B was trained on a custom distributed training codebase, Gigatron. It uses a 3D parallelism approach combined with ZeRO, high-performance Triton kernels, and FlashAttention-2.

## Citation