Update README.md
README.md CHANGED

@@ -9,7 +9,7 @@ pipeline_tag: image-text-to-text
 
 # Aria-sequential_mlp-bnb_nf4
 BitsAndBytes NF4 quantization from [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp). It requires about 13.8 GB of VRAM and runs on an RTX 3090 or an RTX 4060 Ti 16 GB.
-Currently the model is not 5 GB sharded, as this seems to cause
+Currently the model is not 5 GB sharded, as this seems to [cause problems](https://stackoverflow.com/questions/79068298/valueerror-supplied-state-dict-for-layers-does-not-contain-bitsandbytes-an) when loading serialized BNB models. This might make it impossible to load the model in free-tier Colab.
 
 ### Installation
 ```
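
For context on the loading issue the new README line describes, here is a minimal loading sketch. It uses the standard `transformers` API rather than anything taken from the diffed README, and the repo id is a placeholder for this quantization:

```python
# Minimal sketch of loading the NF4-quantized checkpoint with transformers.
# Assumptions (not from the diff above): the repo id is a placeholder, and
# Aria ships custom modeling code, hence trust_remote_code=True.
import torch
from transformers import AutoModelForCausalLM, AutoProcessor

model_id = "your-namespace/Aria-sequential_mlp-bnb_nf4"  # hypothetical repo id

model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,   # compute dtype; the NF4 weights stay 4-bit
    device_map="auto",            # place layers on the available GPU(s)
    trust_remote_code=True,       # load the model's custom code from the Hub
)
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
```

The "not 5 GB sharded" remark corresponds to the `max_shard_size` argument of `save_pretrained` (whose default splits checkpoints into roughly 5 GB shards): keeping the serialized BNB checkpoint in a single file sidesteps the linked state-dict error, at the cost of having to download and hold one large file, which is plausibly why free-tier Colab may fail to load it.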