The prompt template is wrong.
Looking here:
https://huggingface.co/microsoft/Phi-3.5-mini-instruct
It should be:
<|system|>
You are a helpful assistant.<|end|>
<|user|>
How to explain Internet for a medieval knight?<|end|>
<|assistant|>
Yes, but it can't be.
In tokenizer_config.json they explicitly rstrip tokens like <|system|> and <|user|>, meaning the newlines disappear.
I've raised it to Microsoft multiple times with no response, but I have to imagine it's intentional. Not sure why they post it with newlines or even include them in the chat template to begin with.
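A quick way to see that for yourself - a minimal sketch, not from the thread, assuming transformers is installed; it just round-trips the documented prompt through the tokenizer:

```python
# Round-trip the documented prompt through the Phi-3.5 tokenizer. The special tokens
# (<|system|>, <|user|>, <|assistant|>, <|end|>) are declared with rstrip=true in
# tokenizer_config.json, so the newline that follows each of them is consumed during
# encoding and does not survive the round trip.
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("microsoft/Phi-3.5-mini-instruct")

prompt = (
    "<|system|>\n"
    "You are a helpful assistant.<|end|>\n"
    "<|user|>\n"
    "How to explain Internet for a medieval knight?<|end|>\n"
    "<|assistant|>\n"
)

ids = tok(prompt, add_special_tokens=False).input_ids
print(tok.decode(ids))  # the newlines after the special tokens are gone
```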
Interesting... I'll have to test it more :)
Thanks
I do wonder if not rstrip-ing would result in better output; depends on how they trained it, I suppose.
After testing with some math problems, your prompt gives more accurate responses.
Prompt:
If my BMI is 20.5 and my height is 172cm, how much would I weigh if I gained 5% of my current weight?
Microsoft prompt answer: 63 kg
Your prompt answer: 63.12 kg
The most accurate answer is 63.68 kg.
Quite a good answer for a tiny model.
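For reference, the arithmetic behind the 63.68 kg figure (a quick check, not from the thread):

```python
# BMI = weight / height^2, so weight = BMI * height^2; then add the 5% gain.
bmi = 20.5
height_m = 1.72

current_weight = bmi * height_m ** 2   # ~60.65 kg
new_weight = current_weight * 1.05     # ~63.68 kg
print(round(current_weight, 2), round(new_weight, 2))
```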
In what percentage is water compressed at the bottom of the ocean in the Mariana Trench?
Answer: 4.99% - a very good answer; around 5% is perfect.
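As a sanity check on that figure, a back-of-envelope estimate (the constants here are rough assumptions on my part, not values from the thread):

```python
# Rough estimate of how much water is compressed at the bottom of the Mariana Trench,
# treating the compressibility of seawater as constant over the whole depth.
compressibility = 4.6e-10   # Pa^-1, approximate isothermal compressibility of seawater
pressure = 1.1e8            # Pa, approximate pressure at ~11 km depth

print(f"{compressibility * pressure:.1%}")  # on the order of 5%
```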
I may try to make another quant without the rstrip as a test to see what happens
You can try... I will test it :)
The correct prompt template should be:
{{ if .System }}<|system|>
{{ .System }}<|end|>
{{ end }}{{ if .Prompt }}<|user|>
{{ .Prompt }}<|end|>
{{ end }}<|assistant|>
{{ .Response }}<|end|>
When I try to use this model with GPT4All and llama.cpp with that prompt template, it never stops generating.
<|system|>
You are a helpful assistant. You think through questions completely and answer concisely<|end|>
<|user|>
%1<|end|>
<|assistant|>
%2<|end|>
This was my config with GPT4All, but it seems incorrect.
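One thing worth checking when generation never stops is whether <|end|> is actually registered as a stop sequence. A hedged sketch with llama-cpp-python (the model path is a placeholder, and the stop strings are my assumption, not something confirmed in the thread):

```python
# Drive the GGUF directly and stop on <|end|>. If the frontend doesn't treat <|end|>
# (or <|endoftext|>) as a stop string, the model keeps generating past its answer.
from llama_cpp import Llama

llm = Llama(model_path="Phi-3.5-mini-instruct-Q6_K_L.gguf", n_ctx=4096)  # placeholder path

prompt = (
    "<|system|>\n"
    "You are a helpful assistant. You think through questions completely and answer concisely<|end|>\n"
    "<|user|>\n"
    "How to explain Internet for a medieval knight?<|end|>\n"
    "<|assistant|>\n"
)

out = llm(prompt, max_tokens=256, temperature=0, stop=["<|end|>", "<|endoftext|>"])
print(out["choices"][0]["text"])
```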
It's interesting. If I run this model with ramalama it works fine. It's the only inference tool that I've gotten to work, even when I use your quant.
ramalama --nocontainer run 'huggingface://bartowski/Phi-3.5-mini-instruct-GGUF/Phi-3.5-mini-instruct-Q6_K_L.gguf'
It works with good outputs. It's just using llama.cpp under the hood, but even directly with llama.cpp I get bad outputs.
Actually, I just got lucky on my first prompt. It spits out garbage.
Bro, I merged your Q4_0_4_4 ARM GGUF with the tokenizer part of M$'s Phi-3-mini-4k-instruct Q4 GGUF: https://huggingface.co/vonjack/Phi-3.5-mini-instruct-GGUF
With llama-server -c 4096 and the following sampling settings, it works well (temp=0, top_k=-1, top_p=1, min_p=0.02).
Prompt template:
<|system|>
{{prompt}}<|end|>
{{history}}
<|{{char}}|>
Chat history template:
<|{{name}}|>
{{message}}<|end|>
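For anyone who wants to reproduce those settings outside the web UI, here is a hedged sketch against llama-server's /completion endpoint (the port and stop strings are assumptions on my part):

```python
# Send a Phi-3.5-formatted prompt to a local llama-server started with `-c 4096`,
# using the sampling settings quoted above (temp=0, top_k=-1, top_p=1, min_p=0.02).
import requests

prompt = (
    "<|system|>\n"
    "You are a helpful assistant.<|end|>\n"
    "<|user|>\n"
    "How to explain Internet for a medieval knight?<|end|>\n"
    "<|assistant|>\n"
)

resp = requests.post(
    "http://localhost:8080/completion",   # default llama-server port, adjust as needed
    json={
        "prompt": prompt,
        "temperature": 0,
        "top_k": -1,
        "top_p": 1.0,
        "min_p": 0.02,
        "n_predict": 256,
        "stop": ["<|end|>", "<|endoftext|>"],
    },
    timeout=120,
)
print(resp.json()["content"])
```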