Commit 0bea04a (parent: 81906ca) by Muhammadreza: Update README.md

If you want to use 4-bit quantization, we have a PEFT adapter for you [here](https://huggingface.co/MaralGPT/MaralGPT-Mistral-7B-v-0-1). You can also find _Google Colab_ notebooks [here](https://github.com/prp-e/maralgpt).
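
As a rough sketch of what loading that 4-bit PEFT adapter might look like, assuming the adapter was trained on top of `mistralai/Mistral-7B-v0.1` (check the adapter's own config before relying on this) and that `peft` is installed alongside the libraries from the next section:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import PeftModel

# Assumption: the adapter's base model; verify against the adapter's config.
base_id = "mistralai/Mistral-7B-v0.1"

# Load the base model in 4-bit via bitsandbytes.
quant_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)
base = AutoModelForCausalLM.from_pretrained(
    base_id, quantization_config=quant_config, device_map="auto"
)

# Attach the MaralGPT PEFT adapter on top of the quantized base.
model = PeftModel.from_pretrained(base, "MaralGPT/MaralGPT-Mistral-7B-v-0-1")
tokenizer = AutoTokenizer.from_pretrained(base_id)
```
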
### Installing Libraries

```
pip install transformers accelerate bitsandbytes
```

_NOTE_: The `bitsandbytes` library is only needed for the 8-bit version; otherwise, it is not necessary.

### Inference on a big GPU
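
A minimal sketch of what inference on a large GPU might look like, assuming a model id of `MaralGPT/Maral-7B-alpha-1` (an assumption; substitute this repository's actual id) and enough VRAM to hold the full `bfloat16` weights:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumption: substitute this repository's actual model id.
model_id = "MaralGPT/Maral-7B-alpha-1"

# Load the full bfloat16 weights; a 7B model needs roughly 15 GB of VRAM.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.bfloat16, device_map="auto"
)
tokenizer = AutoTokenizer.from_pretrained(model_id)

inputs = tokenizer("Hello!", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
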
### Inference on a small GPU (Consumer Hardware/Free Colab)
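
This is where `bitsandbytes` comes in: a sketch of 8-bit loading, under the same model-id assumption as above, which roughly halves the memory footprint so the model fits on consumer cards or a free Colab T4:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

# Assumption: substitute this repository's actual model id.
model_id = "MaralGPT/Maral-7B-alpha-1"

# 8-bit quantization via bitsandbytes; requires the `bitsandbytes` package.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=BitsAndBytesConfig(load_in_8bit=True),
    device_map="auto",
)
tokenizer = AutoTokenizer.from_pretrained(model_id)
```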