fuzzy-mittenz/Sakura_Warding-Qw2.5-7B-Q4_K_M-GGUF
This model was converted to GGUF format from newsbang/Homer-v0.5-Qwen2.5-7B
using llama.cpp via ggml.ai's GGUF-my-repo space.
Refer to the original model card for more details on the model.
Model named for personal system use. After testing multiple quants, this one turned out to be the most functional for me.
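As a sketch of how a GGUF quant like this is typically run with llama.cpp's CLI (the `--hf-file` name below is an assumption derived from the repo name, not confirmed against the repo's actual file listing):

```shell
# Hypothetical invocation: the .gguf file name is assumed from the repo name.
# llama-cli can pull the file directly from the Hugging Face Hub.
llama-cli \
  --hf-repo fuzzy-mittenz/Sakura_Warding-Qw2.5-7B-Q4_K_M-GGUF \
  --hf-file sakura_warding-qw2.5-7b-q4_k_m.gguf \
  -p "Hello, how are you?"
```

`llama-server` accepts the same `--hf-repo`/`--hf-file` pair if an OpenAI-compatible HTTP endpoint is preferred over one-shot CLI generation.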