Update README.md

README.md
---
# Aria-sequential_mlp-bnb_nf4
BitsAndBytes NF4 quantization of [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp); it requires about 13.8 GB of VRAM and runs on an RTX 3090.
Currently the model is not sharded into 5 GB pieces, as sharding seems to cause [problems](https://stackoverflow.com/questions/79068298/valueerror-supplied-state-dict-for-layers-does-not-contain-bitsandbytes-an) when loading serialized BNB models. This might make it impossible to load the model in free-tier Colab.
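Whether `transformers` shards a checkpoint is controlled by the `max_shard_size` argument of `save_pretrained`. The snippet below is only a sketch of how a single-file copy could be written out; the repo id and output path are placeholders, and this is not necessarily how this repository was produced.

``` python
# Sketch only: re-save the quantized checkpoint as one file by setting
# max_shard_size above the ~14 GB checkpoint size, so no 5 GB shards are written.
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained(
    "your-namespace/Aria-sequential_mlp-bnb_nf4",  # placeholder repo id
    device_map="auto",
    trust_remote_code=True,
)
model.save_pretrained(
    "Aria-sequential_mlp-bnb_nf4-single-file",  # placeholder output path
    safe_serialization=True,
    max_shard_size="30GB",  # larger than the checkpoint -> a single .safetensors file
)
```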
### Installation
```
pip install transformers==4.45.0 accelerate==0.34.1 sentencepiece==0.2.0 torchvision requests torch Pillow bitsandbytes
pip install flash-attn --no-build-isolation
```
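Optionally, a small sanity check (a sketch, assuming a CUDA build of PyTorch) that the key dependencies import cleanly before downloading the ~14 GB checkpoint:

``` python
# Sanity-check sketch: confirm the CUDA stack, bitsandbytes and flash-attn import.
import torch
import bitsandbytes  # importing is enough to surface build problems

print("CUDA available:", torch.cuda.is_available())
try:
    import flash_attn
    print("flash-attn version:", flash_attn.__version__)
except ImportError:
    print("flash-attn missing; install it with: pip install flash-attn --no-build-isolation")
```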
### Inference
Run this model with:
``` python
import requests
...
print(f'Max allocated memory: {torch.cuda.max_memory_allocated(device="cuda") / 1024 ** 3:.3f}GiB')
```
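The script above is abbreviated here. As a minimal orientation sketch (assuming the stock Aria loading pattern with `trust_remote_code=True` and an `AutoProcessor`, plus a placeholder repo id), loading the pre-quantized checkpoint looks roughly like this; no `BitsAndBytesConfig` is needed at inference time, since the quantization settings are stored with the serialized weights.

``` python
# Minimal loading sketch (placeholder repo id, assumed stock Aria loading pattern).
# The serialized NF4 weights load directly; no quantization config is required.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

repo_id = "your-namespace/Aria-sequential_mlp-bnb_nf4"  # placeholder

model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
processor = AutoProcessor.from_pretrained(repo_id, trust_remote_code=True)
```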
### Quantization
Quantization created with:
``` python
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
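# Sketch (assumption), not necessarily the exact script used for this repository:
# a typical NF4 BitsAndBytes quantization of the base Aria-sequential_mlp model.
import torch

base_model_id = "rhymes-ai/Aria-sequential_mlp"

nf4_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    base_model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
    quantization_config=nf4_config,
)
tokenizer = AutoTokenizer.from_pretrained(base_model_id, trust_remote_code=True)

# Save without 5 GB sharding (see the note at the top of this card).
model.save_pretrained("Aria-sequential_mlp-bnb_nf4", max_shard_size="30GB")
tokenizer.save_pretrained("Aria-sequential_mlp-bnb_nf4")
```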