JSL-MedPhi2
Thanks for sharing these GGUFs! Could you please also do the MedPhi2 version, as it seems to perform better? https://huggingface.co/johnsnowlabs/JSL-MedPhi2-2.7B
A well-performing 2.7B, wow! It should be available in a few hours, if nothing goes wrong.
Unfortunately, the pre-tokenizer is not supported at the moment:
WARNING:hf-to-gguf:** chkhsh: fcace8b9cac38ce847670c970cd5892031a753a1ef381abd1d9af00f713da085
If I knew what the correct pre-tokenizer is (and if it's supported by llama.cpp), I could set it manually, but I don't know it.
Thanks anyway
If you have the time, could you please report the logs on llama.cpp's GitHub, or post the logs and steps to reproduce the error here, so that they can hopefully fix it?
Running convert-hf-to-gguf.py on the repository will reproduce it. It's not a bug in llama.cpp (or the model); it's simply that nobody has written support for this pre-tokenizer yet.
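For context on the warning above: the conversion script fingerprints a model's pre-tokenizer by hashing the token IDs it produces for a fixed probe text and looking the digest (the `chkhsh` value) up in a table of known pre-tokenizers; an unknown digest produces exactly this warning. A minimal sketch of that idea, where the probe input, table entries, and function name are illustrative assumptions, not the script's actual values:

```python
import hashlib

def chkhsh(token_ids):
    # Digest of the stringified token-ID list produced for a fixed probe text.
    return hashlib.sha256(str(token_ids).encode()).hexdigest()

# Hypothetical lookup table; the real one is maintained inside the
# conversion script and grows as pre-tokenizers get registered.
KNOWN_PRE_TOKENIZERS = {
    chkhsh([15496, 995]): "llama-bpe",  # illustrative entry only
}

# A tokenizer nobody has registered yet falls through to a warning.
digest = chkhsh([1, 2, 3])
name = KNOWN_PRE_TOKENIZERS.get(digest)
if name is None:
    print("WARNING: unrecognized chkhsh:", digest)
else:
    print("pre-tokenizer:", name)
```

So fixing this is a matter of registering the new hash (and matching pre-tokenization behavior) upstream, not of patching the model.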