Python code errors after loading llama3.1_8b_chinese_chat_q4_k_m.gguf in the latest GPT4All
#14 by jumpfox - opened
I manually downloaded llama3.1_8b_chinese_chat_q4_k_m.gguf from here into the latest version of GPT4All. After loading the model in Chats, chatting works fine, but the following code:
from gpt4all import GPT4All
#model = GPT4All("qwen2-7b-instruct-q4_0.gguf")
model = GPT4All("llama3.1_8b_chinese_chat_q4_k_m.gguf")
#print(current_date_time)
# Prompt: "Which intercontinental airliners can fly between continents, and what are each of their characteristics?"
print(model.generate("有哪些可以跨越大洲的洲际客机,各自有什么特点?"))
produces the following error:
llama_model_load: error loading model: error loading model vocabulary: unknown pre-tokenizer type: 'smaug-bpe'
llama_load_model_from_file_gpt4all: failed to load model
LLAMA ERROR: failed to load model from /Users/username/Library/Application Support/nomic.ai/GPT4All/llama3.1_8b_chinese_chat_q4_k_m.gguf
LLaMA ERROR: prompt won't work with an unloaded model!
How can I fix this?
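For reference, a minimal variant that points the Python bindings directly at the already-downloaded file instead of relying on the default lookup; this is only a sketch, assuming the model_path and allow_download parameters of the gpt4all GPT4All constructor, and it reuses the directory shown in the error message above:

from gpt4all import GPT4All

# Load the GGUF file from the folder where the GPT4All GUI stored it
# (path taken from the error log above); allow_download=False prevents
# the bindings from trying to fetch the model again.
model = GPT4All(
    "llama3.1_8b_chinese_chat_q4_k_m.gguf",
    model_path="/Users/username/Library/Application Support/nomic.ai/GPT4All",
    allow_download=False,
)
print(model.generate("有哪些可以跨越大洲的洲际客机,各自有什么特点?"))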