AttributeError: 'LlamaCppModel' object has no attribute 'model'

#1
by TeaDiffusion - opened

I'm trying to run this model in text-generation-webui (Oobabooga) v1.16 (latest). I merged the two parts into one GGUF file, but I'm getting the following error in the terminal:

12:12:56-793976 INFO Loading "Behemoth-123B_v1.1.gguf"
12:12:56-825463 INFO llama.cpp weights detected: "models/Behemoth-123B_v1.1.gguf"
llama_model_load: error loading model: invalid split file: models/Behemoth-123B_v1.1.gguf
llama_load_model_from_file: failed to load model
12:12:56-851090 ERROR Failed to load the model.
Traceback (most recent call last):
File "/home/user/Documents/software/text-generation-webui/modules/ui_model_menu.py", line 232, in load_model_wrapper
shared.model, shared.tokenizer = load_model(selected_model, loader)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/software/text-generation-webui/modules/models.py", line 93, in load_model
output = load_func_map[loader](model_name)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/software/text-generation-webui/modules/models.py", line 278, in llamacpp_loader
model, tokenizer = LlamaCppModel.from_pretrained(model_file)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/user/Documents/software/text-generation-webui/modules/llamacpp_model.py", line 85, in from_pretrained
result.model = Llama(**params)
^^^^^^^^^^^^^^^
File "/home/user/Documents/software/text-generation-webui/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/llama.py", line 369, in __init__
internals.LlamaModel(
File "/home/user/Documents/software/text-generation-webui/installer_files/env/lib/python3.11/site-packages/llama_cpp_cuda/_internals.py", line 56, in __init__
raise ValueError(f"Failed to load model from file: {path_model}")
ValueError: Failed to load model from file: models/Behemoth-123B_v1.1.gguf

Exception ignored in: <function LlamaCppModel.__del__ at 0x7f0ef906f060>
Traceback (most recent call last):
File "/home/user/Documents/software/text-generation-webui/modules/llamacpp_model.py", line 33, in __del__
del self.model
^^^^^^^^^^
AttributeError: 'LlamaCppModel' object has no attribute 'model'

Other models like Nemotron and vanilla Llama run fine, so what's the problem with this one? I'd appreciate any help.

Did you merge it using llama.cpp's llama-gguf-split?
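Split GGUF parts each carry their own header, so simply concatenating them (e.g. with `cat`) produces a file that llama.cpp rejects with exactly this "invalid split file" error. If that's what happened, re-merging with llama.cpp's `llama-gguf-split` tool should fix it. A sketch, with the split filenames assumed to follow the standard `-0000X-of-0000Y` naming (adjust to match the actual parts you downloaded):

```shell
# Merge the split GGUF into a single file using llama.cpp's tool.
# Point --merge at the FIRST split; the tool finds the rest automatically.
./llama-gguf-split --merge \
    models/Behemoth-123B_v1.1-00001-of-00002.gguf \
    models/Behemoth-123B_v1.1.gguf
```
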

You should also be able to load it without merging at all, as long as you point the loader at part 1 and all the parts are stored in the same directory (though I haven't tested that personally with oobabooga).