Update license info to Apache 2.0
README.md CHANGED
@@ -12,7 +12,7 @@ license: apache-2.0
 
 # 🚀 Falcon-40B
 
-**Falcon-40B is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the
+**Falcon-40B is a 40B parameters causal decoder-only model built by [TII](https://www.tii.ae) and trained on 1,000B tokens of [RefinedWeb](https://huggingface.co/datasets/tiiuae/falcon-refinedweb) enhanced with curated corpora. It is made available under the Apache 2.0 license.**
 
 *Paper coming soon 😊.*
 
@@ -20,8 +20,8 @@ license: apache-2.0
 
 * **It is the best open-source model currently available.** Falcon-40B outperforms [LLaMA](https://github.com/facebookresearch/llama), [StableLM](https://github.com/Stability-AI/StableLM), [RedPajama](https://huggingface.co/togethercomputer/RedPajama-INCITE-Base-7B-v0.1), [MPT](https://huggingface.co/mosaicml/mpt-7b), etc. See the [OpenLLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
 * **It features an architecture optimized for inference**, with FlashAttention ([Dao et al., 2022](https://arxiv.org/abs/2205.14135)) and multiquery ([Shazeer et al., 2019](https://arxiv.org/abs/1911.02150)).
-* **It is made available under a license allowing commercial use**,
-
+* **It is made available under a permissive Apache 2.0 license allowing for commercial use**, without any royalties or restrictions.
+*
 ⚠️ **This is a raw, pretrained model, which should be further finetuned for most usecases.** If you are looking for a version better suited to taking generic instructions in a chat format, we recommend taking a look at [Falcon-40B-Instruct](https://huggingface.co/tiiuae/falcon-40b-instruct).
 
 💸 **Looking for a smaller, less expensive model?** [Falcon-7B](https://huggingface.co/tiiuae/falcon-7b) is Falcon-40B's little brother!
@@ -67,7 +67,7 @@ for seq in sequences:
 - **Developed by:** [https://www.tii.ae](https://www.tii.ae);
 - **Model type:** Causal decoder-only;
 - **Language(s) (NLP):** English, German, Spanish, French (and limited capabilities in Italian, Portuguese, Polish, Dutch, Romanian, Czech, Swedish);
-- **License:**
+- **License:** Apache 2.0 license.
 
 ### Model Source
 
@@ -223,11 +223,7 @@ Falcon-40B was trained a custom distributed training codebase, Gigatron. It uses
 
 ## License
 
-Falcon-40B is made available under the
-* You can freely use our models for research and/or personal purpose;
-* You are allowed to share and build derivatives of these models, but you are required to give attribution and to share-alike with the same license;
-* For commercial use, you are exempt from royalties payment if the attributable revenues are inferior to $1M/year, otherwise you should enter in a commercial agreement with TII.
-
+Falcon-40B is made available under the Apache 2.0 license.
 
 ## Contact
 falconllm@tii.ae
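For reference, the `@@ -67,7 +67,7 @@ for seq in sequences:` hunk falls inside the card's inference example. Below is a minimal sketch of that style of usage, assuming the standard `transformers` text-generation pipeline; the prompt and generation parameters are illustrative, not the card's exact snippet.

```python
# Illustrative sketch only: load Falcon-40B with the transformers pipeline
# and iterate over the returned sequences, as the hunk context suggests.
import torch
from transformers import AutoTokenizer, pipeline

model_id = "tiiuae/falcon-40b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
generator = pipeline(
    "text-generation",
    model=model_id,
    tokenizer=tokenizer,
    torch_dtype=torch.bfloat16,  # assumption: bf16 to keep the 40B weights manageable
    trust_remote_code=True,      # Falcon originally shipped custom modeling code
    device_map="auto",           # spread layers across available GPUs
)

sequences = generator(
    "Write a short poem about falcons.",  # hypothetical prompt
    max_new_tokens=100,
    do_sample=True,
    top_k=10,
    num_return_sequences=1,
    eos_token_id=tokenizer.eos_token_id,
)
for seq in sequences:
    print(f"Result: {seq['generated_text']}")
```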