zamal_

AI & ML interests

Anything that makes our lives easier.

Posts 2

πŸš€ New Model Release: zamal/Molmo-7B-GPTQ-4bit πŸš€

Hello lovely community,

The zamal/Molmo-7B-GPTQ-4bit model is now available to everyone! It has been heavily quantized, reducing its size by almost six times. It now occupies significantly less disk space and VRAM, making it well suited for deployment on resource-constrained devices without compromising performance.

Now we get:
- Efficient performance: maintains high accuracy despite aggressive quantization.
- Reduced size: the model is nearly six times smaller, optimizing storage and memory usage.
- Versatile application: ideal for integrating a powerful visual language model into various projects, particularly multimodal RAG chains.
Check it out!
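For a rough sense of where a "six times" reduction can come from, the arithmetic below estimates weight-storage footprints at different precisions. This is a back-of-the-envelope sketch: the 7B parameter count and the ~4.5 effective bits per weight for GPTQ (4-bit weights plus per-group scale/zero-point metadata) are assumptions, not measured numbers for this checkpoint.

```python
def weight_storage_gb(n_params: float, bits_per_weight: float) -> float:
    """Approximate weight storage in gigabytes (decimal GB)."""
    return n_params * bits_per_weight / 8 / 1e9

n = 7e9  # assumed parameter count for a 7B model

fp32 = weight_storage_gb(n, 32)   # full-precision baseline
fp16 = weight_storage_gb(n, 16)   # half precision
q4   = weight_storage_gb(n, 4.5)  # GPTQ 4-bit plus quantization metadata

print(f"fp32: {fp32:.1f} GB, fp16: {fp16:.1f} GB, 4-bit: {q4:.2f} GB")
print(f"reduction vs fp32: {fp32 / q4:.1f}x, vs fp16: {fp16 / q4:.1f}x")
```

Depending on which baseline you compare against and how much metadata overhead the packing adds, the ratio lands in the mid-to-high single digits, which is in the same ballpark as the "almost six times" figure above.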

Finally!
My first post for the lovely community out there!

Here's a highly quantized, fine-tuned version of Gemma focused exclusively on prompt engineering. Write as ambiguously as you want and leave the rest to this model.

zamal/gemma-7b-finetuned