Accuracy comparison with wizard-vicuna-13b and stable-vicuna-13b?
Thanks for all your contributions.
In your opinion, how does this model stack up in terms of accuracy compared to ggml versions of wizard-vicuna-13b and stable-vicuna-13b?
I ran some tests; my observations:
gpt4-x-vicuna-13B seems the worst of the three.
wizard-vicuna-13b is the best, but sometimes it doesn't give a straight answer and I have to ask again.
stable-vicuna-13b (do you mean standard/vanilla vicuna?) seems a bit better than gpt4-x-vicuna-13B but worse than wizard-vicuna-13b.
It does seem to be a bit worse at factual accuracy, but its ability to play roles and handle creative-writing tasks seems far better.
Are you using the proper prompt format? It works best with the Alpaca format, not the Vicuna one.
Of course.
How could I be using the wrong format?
Vicuna q5_1 works properly with llama.cpp.
There is a User/Assistant format for Vicuna, and an
### Instruction:
### Response:
format for Alpaca. Ours uses the latter.
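For reference, here's a minimal Python sketch of the two templates. The exact preamble wording below is the common alpaca-lora variant and is an assumption on my part; the model card is authoritative for the precise template this model was trained on.

```python
# Sketch of the two prompt templates discussed above. The preamble wording
# is an assumption (the common alpaca-lora variant); check the model card
# for the exact template the model was trained on.

def vicuna_prompt(user_message: str) -> str:
    # Vicuna-style chat format: plain USER/ASSISTANT turns.
    return f"USER: {user_message}\nASSISTANT:"

def alpaca_prompt(instruction: str) -> str:
    # Alpaca-style instruction format: ### Instruction / ### Response headers.
    return (
        "Below is an instruction that describes a task. "
        "Write a response that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

print(alpaca_prompt("List three uses of a paperclip."))
```

With llama.cpp you'd pass the resulting string via -p, or put it in a file and use -f.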
Yep ... like the alpaca or alpaca-lora format.
Nothing new :)