GGUF version
#1 by maria-ai - opened
I can't make a GGUF version. Is it possible?
https://huggingface.co/spaces/ggml-org/gguf-my-repo
ERROR:hf-to-gguf:Model LlavaForConditionalGeneration is not supported
Hi!
Try the official repo, which has detailed instructions: https://github.com/ggerganov/llama.cpp/blob/master/examples/llava/README.md
I tried that too, but I got errors that I couldn't fix:
python examples/llava/llava-surgery-v2.py -m llava-saiga-8b
No tensors found. Is this a LLaVA model?
I see the problem:

- The official repo uses the legacy model organisation, i.e. its state dict stores the vision tower under `model.vision_tower` and the projector under `model.projector`. You can see this in the `proj_criteria` method, for example. Unfortunately, there are plenty of hard-coded names :(
- Our implementation is synchronised with 🤗 Transformers and uses a different mapping in the state dict. You can explore it with the safetensors viewer on the Hub, or list the keys locally (see the sketch after this list).
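For example, here is a minimal sketch of listing the checkpoint keys locally instead of using the web viewer. The shard filename and the prefixes in the comments are assumptions based on the usual LlavaForConditionalGeneration layout, not verified against this exact repo:

```python
# Minimal sketch: list the checkpoint keys to see how tensors are named.
# The shard filename below is hypothetical; adjust it to the actual files
# in the downloaded repo.
from safetensors import safe_open

shard = "llava-saiga-8b/model-00001-of-00004.safetensors"  # hypothetical shard name
with safe_open(shard, framework="pt", device="cpu") as f:
    for name in sorted(f.keys())[:30]:
        print(name)

# Typical HF-style prefixes (assumed, check the actual output):
#   vision_tower.vision_model.*   -> CLIP vision encoder
#   multi_modal_projector.*       -> projector
#   language_model.model.*        -> LLaMA weights
```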
Therefore, if you want to convert this model to GGUF (and probably any other LLaVA model on HF), you need to write your own llava-surgery script that separates the vision tower (CLIP), the projector, and the LM (LLaMA), and then convert each part to GGUF.
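Below is a rough, untested sketch of what such a surgery step could look like. The key prefixes (`vision_tower.*`, `multi_modal_projector.*`, `language_model.*`) and the output file names are assumptions (the names are borrowed from the official surgery script's convention); verify the real key names in the safetensors viewer first:

```python
# Rough sketch of a custom llava-surgery step for an HF-style checkpoint.
# Key prefixes and output file names are assumptions, not taken from this repo.
import glob

import torch
from safetensors.torch import load_file

model_dir = "llava-saiga-8b"  # local clone of the model repo

# Merge all safetensors shards into one state dict.
state = {}
for shard in sorted(glob.glob(f"{model_dir}/*.safetensors")):
    state.update(load_file(shard))

# Split the tensors into the three sub-models by key prefix.
clip_sd, proj_sd, llm_sd = {}, {}, {}
for name, tensor in state.items():
    if name.startswith("vision_tower."):
        clip_sd[name.removeprefix("vision_tower.")] = tensor
    elif name.startswith("multi_modal_projector."):
        proj_sd[name] = tensor
    elif name.startswith("language_model."):
        llm_sd[name.removeprefix("language_model.")] = tensor

# Save the parts separately; each one is then converted to GGUF on its own.
torch.save(proj_sd, f"{model_dir}/llava.projector")
torch.save(clip_sd, f"{model_dir}/llava.clip")
torch.save(llm_sd, f"{model_dir}/llm_only.pth")
```

After this step, you would convert the LLaMA part with the regular llama.cpp convert script and the CLIP + projector part with the image-encoder conversion described in the llava README linked above.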