thwin27 committed
Commit
e6bd092
1 Parent(s): 163b4d2

Update README.md

Files changed (1)
  1. README.md +3 -3
README.md CHANGED
```diff
@@ -8,7 +8,7 @@ pipeline_tag: image-text-to-text
 ---
 
 # Aria-sequential_mlp-bnb_nf4
-BitsAndBytes NF4 quantization from [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp), requires about 13.8 GB of VRAM and runs on a RTX 3090.
+BitsAndBytes NF4 quantization from [Aria-sequential_mlp](https://huggingface.co/rhymes-ai/Aria-sequential_mlp), requires about 13.8 GB of VRAM and runs on a RTX 3090 and RTX 4060 Ti 16 GB.
 Currently the model is not 5 GB sharded, as this seems to cause [problems](https://stackoverflow.com/questions/79068298/valueerror-supplied-state-dict-for-layers-does-not-contain-bitsandbytes-an) when loading serialized BNB models. This might make it impossible to load the model in free-tier Colab.
 
 ### Installation
@@ -28,7 +28,7 @@ torch.cuda.set_device(0)
 
 model_id_or_path = "thwin27/Aria-sequential_mlp-bnb_nf4"
 
-model = AutoModelForCausalLM.from_pretrained(model_id_or_path, device_map="auto", torch_dtype=torch.bfloat16, trust_remote_code=True)
+model = AutoModelForCausalLM.from_pretrained(model_id_or_path, torch_dtype=torch.bfloat16, trust_remote_code=True)
 processor = AutoProcessor.from_pretrained(model_id_or_path, trust_remote_code=True)
 
 image_path = "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png"
@@ -50,7 +50,7 @@ inputs = processor(text=text, images=image, return_tensors="pt")
 inputs["pixel_values"] = inputs["pixel_values"].to(model.dtype)
 inputs = {k: v.to(model.device) for k, v in inputs.items()}
 
-with torch.inference_mode(), torch.cuda.amp.autocast(dtype=torch.bfloat16):
+with torch.inference_mode(), torch.amp.autocast("cuda", dtype=torch.bfloat16):
     output = model.generate(
         **inputs,
         max_new_tokens=500,
```
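
The commit itself doesn't show how the NF4 checkpoint was produced. As context, here is a minimal sketch of the BitsAndBytes NF4 load that typically produces a quantization like this one; the exact settings used for this checkpoint are an assumption, not taken from the commit.

```python
# Sketch (assumed config, not from the commit): a typical BitsAndBytes
# NF4 load of the base model with transformers.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,                      # store weights in 4-bit
    bnb_4bit_quant_type="nf4",              # NormalFloat4 quantization
    bnb_4bit_compute_dtype=torch.bfloat16,  # run matmuls in bf16, as in the README
)

model = AutoModelForCausalLM.from_pretrained(
    "rhymes-ai/Aria-sequential_mlp",
    quantization_config=bnb_config,
    torch_dtype=torch.bfloat16,
    trust_remote_code=True,
)
```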
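The README's note that the model is "not 5 GB sharded" refers to transformers' default checkpoint sharding on save. A hedged sketch of serializing the quantized model as a single unsharded file by raising `max_shard_size` above the checkpoint size (the default is `"5GB"`; directory name and size here are illustrative):

```python
# Sketch: save the quantized model unsharded; the linked issue suggests
# 5 GB-sharded serialized BNB checkpoints can fail to reload.
model.save_pretrained("Aria-sequential_mlp-bnb_nf4", max_shard_size="30GB")
```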