Missing tokenizer.model, and AutoGPTQ outputs gibberish
#1
by
Yhyu13
- opened
Hi,
Just tried out this model, and it wasn't successful.
It lacks tokenizer.model, so exllamav2 cannot load it: exllamav2 relies on sentencepiece, which requires tokenizer.model.
The next option was to load it with AutoGPTQ. After setting the model config to llama, the model output gibberish, so it seems there is still a problem with the tokenizer.
Forgive my ignorance, but how is it that some models contain tokenizer.model while others do not? Does the presence of tokenizer.model affect the GPTQ process?
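For reference, this is roughly how I check which tokenizer artifacts a downloaded repo actually ships (tokenizer.model is the sentencepiece file; tokenizer.json / tokenizer_config.json are the Hugging Face tokenizer files). A minimal sketch; the helper name is mine:

```python
import os

def tokenizer_files(model_dir):
    """Report which tokenizer artifacts exist in a local model directory."""
    candidates = ["tokenizer.model", "tokenizer.json", "tokenizer_config.json"]
    return {name: os.path.isfile(os.path.join(model_dir, name)) for name in candidates}
```

In my case tokenizer.model is the one that's missing, which is why sentencepiece-based loaders fail.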
Thanks!