GGUF wen?

#3 opened by AIGUYCONTENT

many thanks

This one is highly unlikely. Even Llama 3.2 90B (vision) hasn't been adapted to GGUF. See, llama.cpp at its core no longer supports multimodality. The wrappers (like Ollama or the server version) sometimes do, but they need to adapt each model by hand. This is why, unfortunately, so many great vision models never make it to GGUF.

By the way, I'd like it to be different. Just don't hold your breath for it to change immediately.
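If you want to see what "adapt it by hand" means in practice, here's a minimal sketch of how the llama-cpp-python wrapper runs one of the vision models that *was* adapted (LLaVA 1.5 style), with a separate "mmproj" projector GGUF next to the language model. The file names are hypothetical placeholders, and none of this covers NVLM-D:

```python
# Minimal multimodal sketch via the llama-cpp-python wrapper.
# Only works for architectures whose projector was adapted by hand
# (here: LLaVA 1.5). File paths are hypothetical placeholders.
from llama_cpp import Llama
from llama_cpp.llama_chat_format import Llava15ChatHandler

# The vision encoder/projector ships as its own GGUF (the "mmproj" file).
chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")

llm = Llama(
    model_path="llava-v1.5-13b.Q4_K_M.gguf",  # quantized language model
    chat_handler=chat_handler,                 # injects the image embeddings
    n_ctx=4096,                                # room for image tokens + text
)

out = llm.create_chat_completion(
    messages=[{
        "role": "user",
        "content": [
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            {"type": "text", "text": "Describe this image."},
        ],
    }]
)
print(out["choices"][0]["message"]["content"])
```

Every new vision architecture needs its own projector conversion plus a matching handler like this, which is exactly the by-hand work that rarely happens.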

How many 4090s or 5090s do we need to run this?

You still have 2 kidneys?

Thanks. So I only have 136GB of VRAM. That means I cannot run the main model.
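Quick back-of-the-envelope check (weights only; the vision encoder, KV cache and activations come on top, so the real requirement is higher):

```python
# Rough weight-memory estimate for the ~72B-parameter language model alone.
# Weights only: the vision encoder, KV cache and activations add more on top.
params = 72e9  # approximate parameter count

for label, bytes_per_param in [("BF16/FP16", 2.0), ("FP8/INT8", 1.0), ("NF4 (4-bit)", 0.5)]:
    gib = params * bytes_per_param / 2**30
    cards_24gb = -(-gib // 24)  # ceiling division: 24 GB cards for weights alone
    print(f"{label:12s} ~{gib:6.1f} GiB  (>= {int(cards_24gb)} x 24 GB GPUs, weights only)")
```

At BF16 the weights alone are already around 134 GiB (~144 GB), so 136 GB of VRAM leaves no room for anything else.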

https://huggingface.co/models?search=nvlm

That leaves the FP8 (which I cannot run on a 3090) and the NF4, which appears to be a work in progress: https://huggingface.co/SeanScripts/NVLM-D-72B-nf4

Could you guys (or someone else) create a quant that would only need 70 GB to 100 GB of VRAM to run?
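If nobody gets around to it, transformers can also quantize on the fly with bitsandbytes, which should land around 75-80 GB of weights at 8-bit (roughly half that with 4-bit NF4). A rough sketch, assuming the checkpoint loads through AutoModel with trust_remote_code; I haven't verified it against NVLM-D's custom modeling code, so treat it as a starting point:

```python
# Sketch: load NVLM-D-72B with on-the-fly quantization via bitsandbytes,
# sharding layers across all visible GPUs. Untested against NVLM-D's custom
# modeling code; treat as a starting point rather than a recipe.
import torch
from transformers import AutoModel, AutoTokenizer, BitsAndBytesConfig

model_id = "nvidia/NVLM-D-72B"

quant_config = BitsAndBytesConfig(
    load_in_8bit=True,  # or: load_in_4bit=True, bnb_4bit_quant_type="nf4" for roughly half the memory
)

tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModel.from_pretrained(
    model_id,
    quantization_config=quant_config,
    torch_dtype=torch.bfloat16,  # dtype for the parts that stay unquantized
    device_map="auto",           # spread layers across the available GPUs
    trust_remote_code=True,      # NVLM-D ships custom modeling code
)
```

Note that bitsandbytes is CUDA-first, so this route is dicey at best on an AMD card.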

I wanted to be able to run the NVIDIA model on my AMD GPU for lols :(

At least I have mistral nemo and nemotron mini I guess lol
