Acknowledgement

#1
by syubraj - opened

First of all, thank you for creating the GGUF file for the model I have been working on. However, I am not able to run it locally, even though I created a model by pulling the GGUF file and using Ollama.
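
For context, the setup I mean is roughly the following (a minimal sketch; romaneng2nep is just an example name for the local model):

$ cat Modelfile
FROM ./RomanEng2Nep-v2.Q4_K_S.gguf
$ ollama create romaneng2nep -f Modelfile
$ ollama run romaneng2nep muskuraudai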

The error says: "missing result_output tensor"

The model might simply be broken, or not supported by llama.cpp.

Which of the two do you think it is, and what would you suggest to get the model running? Or would you check from your side, if possible?

Works fine for me in llama.cpp (to the extent I can test it; my Nepali is notoriously bad :)

$ llama-cli -m RomanEng2Nep-v2.Q4_K_S.gguf -p muskuraudai
कुंजलिसहित [end of text]
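
If you want to verify the file contents yourself, the gguf Python package can dump the metadata and tensor list; roughly like this (a quick sketch, assuming gguf is installed via pip):

$ pip install gguf
$ gguf-dump RomanEng2Nep-v2.Q4_K_S.gguf | grep -i output

If the output tensor shows up there, the problem is more likely on the Ollama side than in the file itself.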

mradermacher changed discussion status to closed
