Quantizing moondream?
I would love to include this as a captioner and prompt tool (perhaps with some fine-tuning for prompting) in SD.Next, but we need a quantized version to keep memory usage as low as is workable. Is there anything special that needs to be done to quantize moondream2, or vision models in general?
I look forward to trying moondream2 out! Thank you for your efforts!
Sadly it's not supported by Llama.cpp yet :(
I was wondering if it's possible to use bitsandbytes with this model? What about BetterTransformer? And what about Flash Attention 2? I understand that it's not currently quantized with llama.cpp because it's so new, but if the architecture supports it, I'd like to be able to use bitsandbytes and/or BetterTransformer at least. Here's a sample script of how I use those with llava:
if chosen_model == 'llava' and chosen_quant == 'float16':
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
        resume_download=True
    ).to(device)
elif chosen_model == 'llava' and chosen_quant == '8-bit':
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
        load_in_8bit=True,
        resume_download=True
    )
elif chosen_model == 'llava' and chosen_quant == '4-bit':
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float32,
        low_cpu_mem_usage=True,
        load_in_4bit=True,
        resume_download=True
    )
elif chosen_model == 'bakllava' and chosen_quant == 'float16':
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
        resume_download=True
    ).to(device)
elif chosen_model == 'bakllava' and chosen_quant == '8-bit':
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float16,
        low_cpu_mem_usage=True,
        load_in_8bit=True,
        resume_download=True
    )
elif chosen_model == 'bakllava' and chosen_quant == '4-bit':
    model = LlavaForConditionalGeneration.from_pretrained(
        model_id,
        torch_dtype=torch.float32,
        low_cpu_mem_usage=True,
        load_in_4bit=True,
        resume_download=True
    )
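For reference, newer transformers releases prefer passing a BitsAndBytesConfig via quantization_config instead of the bare load_in_8bit / load_in_4bit flags. A minimal sketch of what that might look like for moondream2 follows; whether the model's custom remote code actually ends up quantized this way is exactly what I'm trying to find out, so treat it as untested:

    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig

    # 4-bit NF4 quantization config (requires the bitsandbytes package and a CUDA GPU)
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_compute_dtype=torch.float16,
    )

    model = AutoModelForCausalLM.from_pretrained(
        "vikhyatk/moondream2",
        revision="2024-03-05",
        trust_remote_code=True,
        quantization_config=bnb_config,  # replaces the legacy load_in_4bit=True kwarg
    )
    tokenizer = AutoTokenizer.from_pretrained("vikhyatk/moondream2", revision="2024-03-05")

Note that a bitsandbytes-quantized model should not be moved with .to(device) afterwards; it is placed on the GPU during loading.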
Did you try this code with Moondream?
I did, but I didn't see any difference in VRAM usage. It didn't throw an error, though. I tried both 4-bit and 8-bit. Do you have any experience with this sort of thing?
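One way to check whether bitsandbytes actually replaced any layers (as opposed to silently loading everything in full precision) is to inspect the module types and the reported memory footprint after loading. This is a generic diagnostic sketch using standard transformers/torch calls, not something I've verified against moondream2's custom modeling code:

    import torch

    # run this right after from_pretrained(..., load_in_8bit=True) or load_in_4bit=True
    quantized_layers = [
        name for name, module in model.named_modules()
        if type(module).__name__ in ("Linear8bitLt", "Linear4bit")  # bitsandbytes layer classes
    ]
    print(f"bitsandbytes layers found: {len(quantized_layers)}")

    # rough size check; this should drop substantially if quantization took effect
    print(f"model footprint: {model.get_memory_footprint() / 1024**2:.1f} MiB")
    if torch.cuda.is_available():
        print(f"CUDA allocated: {torch.cuda.memory_allocated() / 1024**2:.1f} MiB")

If the layer count comes back as zero, the quantization flags are being ignored for this architecture.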
Seeing how, as I understand it, this is multiple models merged together, what if you quantized them first and then merged them? Or at least quantized the biggest part?
I have llama.cpp integration implemented locally, but there's a bug somewhere in my image encoder implementation - just need to figure out how to debug it. Will look into Flash Attention and bitsandbytes shortly as well.
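On the Flash Attention side, recent transformers releases expose Flash Attention 2 through the attn_implementation argument, and BetterTransformer through optimum's to_bettertransformer(). Whether moondream2's remote code actually routes its attention through either path is untested, so this is only a sketch of the standard invocation:

    import torch
    from transformers import AutoModelForCausalLM

    # requires the flash-attn package and an Ampere-or-newer GPU
    model = AutoModelForCausalLM.from_pretrained(
        "vikhyatk/moondream2",
        revision="2024-03-05",
        trust_remote_code=True,
        torch_dtype=torch.float16,
        attn_implementation="flash_attention_2",
    ).to("cuda")

    # BetterTransformer (requires the optimum package); raises an error
    # if the architecture isn't supported
    # model = model.to_bettertransformer()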
Can you link any resources you are following? Would love to help.
I don't have experience with that, actually. Can you share the complete code that you executed?
I tried different variations of instantiating the model using the bitsandbytes configurations above:
import gc
import os
import time

import torch
from PIL import Image
from tqdm import tqdm
from transformers import AutoModelForCausalLM, AutoTokenizer

# get_best_device, extract_image_metadata, and Document are defined elsewhere in my project

class loader_moondream:
    def initialize_model_and_tokenizer(self):
        device = get_best_device()
        model = AutoModelForCausalLM.from_pretrained(
            "vikhyatk/moondream2",
            trust_remote_code=True,
            revision="2024-03-05",
            torch_dtype=torch.float16,
            low_cpu_mem_usage=True,
            resume_download=True
        ).to(device)
        tokenizer = AutoTokenizer.from_pretrained("vikhyatk/moondream2", revision="2024-03-05")
        return model, tokenizer, device

    def moondream_process_images(self):
        script_dir = os.path.dirname(__file__)
        image_dir = os.path.join(script_dir, "Docs_for_DB")
        documents = []
        allowed_extensions = ['.png', '.jpg', '.jpeg', '.bmp', '.gif', '.tif', '.tiff']
        image_files = [file for file in os.listdir(image_dir) if os.path.splitext(file)[1].lower() in allowed_extensions]
        if not image_files:
            return []

        model, tokenizer, device = self.initialize_model_and_tokenizer()
        total_start_time = time.time()

        with tqdm(total=len(image_files), unit="image") as progress_bar:
            for file_name in image_files:
                full_path = os.path.join(image_dir, file_name)
                try:
                    with Image.open(full_path) as raw_image:
                        enc_image = model.encode_image(raw_image)
                        summary = model.answer_question(enc_image, "Describe in detail what this image depicts in as much detail as possible.", tokenizer)
                        extracted_metadata = extract_image_metadata(full_path)
                        document = Document(page_content=summary, metadata=extracted_metadata)
                        documents.append(document)
                        progress_bar.update(1)
                except Exception as e:
                    print(f"{file_name}: Error processing image - {e}")

        total_end_time = time.time()
        total_time_taken = total_end_time - total_start_time
        print(f"Total image processing time: {total_time_taken:.2f} seconds")

        # free the model before returning
        del model
        del tokenizer
        if torch.cuda.is_available():
            torch.cuda.empty_cache()
        gc.collect()

        return documents
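If it helps with comparing settings, one way to see whether a 4-bit or 8-bit load is actually doing anything here is to reset and read CUDA's peak-memory counters around the captioning loop. A rough sketch, assuming a CUDA device is available (these are standard torch APIs, but I haven't run this against moondream2 specifically):

    import torch

    torch.cuda.reset_peak_memory_stats()

    loader = loader_moondream()
    docs = loader.moondream_process_images()

    peak_mib = torch.cuda.max_memory_allocated() / 1024**2
    print(f"processed {len(docs)} images, peak CUDA memory: {peak_mib:.1f} MiB")

Running this once with float16 and once with a quantized config should make it obvious whether the quantization flags are having any effect.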