gustavecortal committed • Commit b14c6d6 • Parent: f0d25a0

Create README.md

README.md ADDED
@@ -0,0 +1,55 @@
---
language: en
license: mit
tags:
- causal-lm
datasets:
- the_pile
---

### Quantized EleutherAI/gpt-neo-2.7B with 8-bit weights

This is a version of [EleutherAI's GPT-Neo 2.7B](https://huggingface.co/EleutherAI/gpt-neo-2.7B), with 2.7 billion parameters, modified so you can generate **and fine-tune the model in Colab or on an equivalent desktop GPU (e.g. a single 1080Ti)**. Inspired by [GPT-J 8bit](https://huggingface.co/hivemind/gpt-j-6B-8bit).
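Why 8-bit matters here: at one byte per weight, 2.7B parameters occupy roughly 2.7 GB, versus about 10.8 GB in fp32, so the weights fit in the 11 GB of a 1080Ti with room left for activations. The sketch below is a minimal illustration of the GPT-J-8bit-style recipe (weights stored as int8 with a per-row scale, dequantized on the fly in `forward`); it is not this repo's exact code, which lives in the linked notebook, and every name in it is illustrative.

```python
# Illustrative sketch of GPT-J-8bit-style weight quantization (not this
# repo's exact implementation): store weights as int8 plus a per-row fp32
# scale, and dequantize on the fly inside forward().
import torch
import torch.nn as nn
import torch.nn.functional as F

class FrozenLinear8bit(nn.Module):
    def __init__(self, linear: nn.Linear):
        super().__init__()
        w = linear.weight.data                                    # [out, in]
        scale = w.abs().max(dim=1, keepdim=True).values.clamp(min=1e-8) / 127.0
        self.register_buffer("w_int8", torch.round(w / scale).to(torch.int8))
        self.register_buffer("scale", scale)                      # [out, 1]
        self.bias = linear.bias

    def forward(self, x):
        w = self.w_int8.float() * self.scale   # dequantize just for this matmul
        return F.linear(x, w, self.bias)

def quantize_linears(module: nn.Module):
    """Recursively replace every nn.Linear with a frozen 8-bit version."""
    for name, child in module.named_children():
        if isinstance(child, nn.Linear):
            setattr(module, name, FrozenLinear8bit(child))
        else:
            quantize_linears(child)
```

Dequantizing per call trades a little compute for memory: only one layer's weights are ever materialized in floating point at a time.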

Here's how to run it: [![colab](https://camo.githubusercontent.com/84f0493939e0c4de4e6dbe113251b4bfb5353e57134ffd9fcab6b8714514d4d1/68747470733a2f2f636f6c61622e72657365617263682e676f6f676c652e636f6d2f6173736574732f636f6c61622d62616467652e737667)](https://colab.research.google.com/drive/1ft6wQU0BhqG5PRlwgaZJv2VukKKjU4Es)
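For a feel of the flow without opening the notebook, here is a hedged usage sketch. It assumes the 8-bit layer patch from the Colab has already been applied (a plain `from_pretrained` on the stock model class may not restore the quantized layers), and the repo id below is an assumption, not something this card states:

```python
# Hedged usage sketch: assumes the 8-bit patch from the linked Colab has been
# applied first, and that the repo id is correct (it is an assumption here).
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

repo = "gustavecortal/gpt-neo-2.7B-8bit"  # assumed repo id

tokenizer = AutoTokenizer.from_pretrained(repo)
model = AutoModelForCausalLM.from_pretrained(repo).cuda()

prompt = "In a shocking finding, scientists discovered"
inputs = tokenizer(prompt, return_tensors="pt").to("cuda")
with torch.no_grad():
    out = model.generate(**inputs, max_new_tokens=50, do_sample=True, top_p=0.9)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```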

## Model Description

GPT-Neo 2.7B is a transformer model designed using EleutherAI's replication of the GPT-3 architecture. GPT-Neo refers to the class of models, while 2.7B represents the number of parameters of this particular pre-trained model.
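As for fine-tuning on a single desktop GPU: in the GPT-J 8bit recipe this card takes inspiration from, the int8 weights stay frozen and only small low-rank adapters are trained, which keeps gradients and optimizer state tiny. A minimal, illustrative sketch (the class and all names are not from this repo):

```python
# LoRA-style adapter over a frozen 8-bit layer, as in the GPT-J-8bit recipe.
# Only lora_a / lora_b receive gradients; the int8 base layer stays frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Linear8bitWithAdapter(nn.Module):
    def __init__(self, frozen_linear: nn.Module, in_features: int,
                 out_features: int, rank: int = 8):
        super().__init__()
        self.frozen = frozen_linear
        for p in self.frozen.parameters():
            p.requires_grad_(False)             # never update the base weights
        self.lora_a = nn.Parameter(torch.empty(rank, in_features))
        self.lora_b = nn.Parameter(torch.zeros(out_features, rank))
        nn.init.normal_(self.lora_a, std=0.02)  # b == 0: adapter is a no-op at init

    def forward(self, x):
        # frozen int8 path + trainable low-rank correction
        return self.frozen(x) + F.linear(F.linear(x, self.lora_a), self.lora_b)
```

The optimizer is then built over just the trainable parameters, e.g. `torch.optim.Adam(p for p in model.parameters() if p.requires_grad)`.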

## Links

* [EleutherAI](https://www.eleuther.ai)
* [Hivemind](https://training-transformers-together.github.io/)
* [Gustave Cortal](https://twitter.com/gustavecortal)

## BibTeX entry and citation info

To cite this model, use
```bibtex
@software{gpt-neo,
  author       = {Black, Sid and
                  Leo, Gao and
                  Wang, Phil and
                  Leahy, Connor and
                  Biderman, Stella},
  title        = {{GPT-Neo: Large Scale Autoregressive Language
                   Modeling with Mesh-Tensorflow}},
  month        = mar,
  year         = 2021,
  note         = {{If you use this software, please cite it using
                   these metadata.}},
  publisher    = {Zenodo},
  version      = {1.0},
  doi          = {10.5281/zenodo.5297715},
  url          = {https://doi.org/10.5281/zenodo.5297715}
}

@article{gao2020pile,
  title={The Pile: An 800GB Dataset of Diverse Text for Language Modeling},
  author={Gao, Leo and Biderman, Stella and Black, Sid and Golding, Laurence and Hoppe, Travis and Foster, Charles and Phang, Jason and He, Horace and Thite, Anish and Nabeshima, Noa and others},
  journal={arXiv preprint arXiv:2101.00027},
  year={2020}
}
```