Update README.md

README.md CHANGED
```diff
@@ -8,7 +8,7 @@ pipeline_tag: image-text-to-text
 ---
 
 # Aria-sequential_mlp-bnb_nf4
 
-BitsAndBytes NF4 quantization from [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp), requires about 13.8 GB of VRAM and runs on a RTX 3090.
+BitsAndBytes NF4 quantization of [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp); it requires about 13.8 GB of VRAM and runs on an RTX 3090 or an RTX 4060 Ti 16 GB.
 Currently the model is not sharded into 5 GB files, as sharding seems to cause [problems](https://stackoverflow.com/questions/79068298/valueerror-supplied-state-dict-for-layers-does-not-contain-bitsandbytes-an) when loading serialized BNB models. This single-file layout might make it impossible to load the model in free-tier Colab.
 
 ### Installation
```
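For context, a quantization like the one this commit documents can be produced with the standard `transformers` + `bitsandbytes` path and saved unsharded. A minimal sketch; the exact settings used for this checkpoint are an assumption:

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

# NF4 4-bit config; the compute dtype is assumed to match the bfloat16
# used elsewhere in this README.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_compute_dtype=torch.bfloat16,
)

model = AutoModelForCausalLM.from_pretrained(
    "rhymes-ai/Aria-sequential_mlp",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)

# Save as a single file: a max_shard_size larger than the checkpoint
# disables the default 5 GB sharding that triggers the loading problem
# linked above.
model.save_pretrained("Aria-sequential_mlp-bnb_nf4", max_shard_size="100GB")
```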
```diff
@@ -28,7 +28,7 @@ torch.cuda.set_device(0)
 
 model_id_or_path = "thwin27/Aria-sequential_mlp-bnb_nf4"
 
-model = AutoModelForCausalLM.from_pretrained(model_id_or_path,
+model = AutoModelForCausalLM.from_pretrained(model_id_or_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
 processor = AutoProcessor.from_pretrained(model_id_or_path, trust_remote_code=True)
 
 image_path = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
```
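Because the BnB quantization config is serialized into the checkpoint, the new `from_pretrained` call needs no `BitsAndBytesConfig` of its own. One way to sanity-check the ~13.8 GB figure after loading is `get_memory_footprint()`, a standard `PreTrainedModel` method:

```python
# Rough check of the "about 13.8 GB of VRAM" claim from the description.
print(f"Model footprint: {model.get_memory_footprint() / 1024**3:.1f} GiB")
```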
```diff
@@ -50,7 +50,7 @@ inputs = processor(text=text, images=image, return_tensors="pt")
 inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)
 inputs = {k: v.to(model.device) for k, v in inputs.items()}
 
-with torch.inference_mode(), torch.
+with torch.inference_mode(), torch.amp.autocast("cuda", dtype=torch.bfloat16):
     output = model.generate(
         **inputs,
         max_new_tokens=500,
```
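The hunk ends mid-call. For completeness, a sketch of the full generation and decoding step, following the pattern from the upstream Aria model card; the stop string, sampling settings, and decode call are assumptions here, not part of this commit:

```python
with torch.inference_mode(), torch.amp.autocast("cuda", dtype=torch.bfloat16):
    output = model.generate(
        **inputs,
        max_new_tokens=500,
        stop_strings=["<|im_end|>"],   # assumed, per the upstream Aria card
        tokenizer=processor.tokenizer,
        do_sample=True,
        temperature=0.9,
    )

# Strip the prompt tokens and decode only the newly generated ones.
output_ids = output[0][inputs["input_ids"].shape[1]:]
result = processor.decode(output_ids, skip_special_tokens=True)
print(result)
```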