
Molmo-7B-O BnB 4bit quant

Checkpoint size: 30 GB -> 7 GB

Approx. 12 GB of VRAM required for inference
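The roughly 4x reduction in checkpoint size comes from storing most weights in ~4 bits instead of 32. A minimal sketch of the arithmetic (the parameter count is an assumption for illustration; the real file is larger than the pure-weight estimate because bitsandbytes stores quantization constants and keeps some layers, such as embeddings, in higher precision):

```python
# Back-of-the-envelope checkpoint-size estimate for 4-bit quantization.
# 7.2e9 is an assumed parameter count for illustration, not the exact
# count of this model.

def checkpoint_gb(num_params: float, bytes_per_param: float) -> float:
    """Approximate size of the stored weights in gigabytes."""
    return num_params * bytes_per_param / 1e9

full_precision = checkpoint_gb(7.2e9, 4.0)  # FP32: 4 bytes/weight -> ~28.8 GB
four_bit = checkpoint_gb(7.2e9, 0.5)        # 4-bit: 0.5 bytes/weight -> ~3.6 GB
print(f"FP32: {full_precision:.1f} GB, 4-bit: {four_bit:.1f} GB")
```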

See the base model for more information:

https://huggingface.co/allenai/Molmo-7B-O-0924

Example code:

https://github.com/cyan2k/molmo-7b-bnb-4bit

Performance metrics and benchmarks against the base model will follow over the next week.

Model size: 4.35B params (Safetensors; tensor types: F32, U8)
