Unable to Load GGUF Models?
#2 opened by nadolsw
Thanks so much for making these quantized models!
I have a question: I'm attempting to load the model but am encountering the following error:
import torch
from transformers import AutoTokenizer
from ctransformers import AutoModelForCausalLM
nunerBase_model_name = "bartowski/NuExtract-v1.5-GGUF"
nunerBase_model = AutoModelForCausalLM.from_pretrained(nunerBase_model_name, model_file="NuExtract-v1.5-Q4_K_M.gguf")
Cell In[10], line 3
nunerBase_model = AutoModelForCausalLM.from_pretrained(nunerBase_model_name, model_file="NuExtract-v1.5-Q4_K_M.gguf")
File .../lib/python3.11/site-packages/ctransformers/hub.py:175 in from_pretrained
llm = LLM(
File .../lib/python3.11/site-packages/ctransformers/llm.py:253 in __init__
raise RuntimeError(
RuntimeError: Failed to create LLM 'gguf' from '.../huggingface/hub/models--bartowski--NuExtract-v1.5-GGUF/blobs/0db8d385745db71a5760feefa81dc1e99b38e3d243d0a5cced12a499e0ec6457'.
I also tried referencing the file locally, with the same result:
nunerBase_filepath = ".../quantized/nuextract-v1.5/NuExtract-v1.5-Q4_K_M.gguf"
nunerBase_model = AutoModelForCausalLM.from_pretrained(nunerBase_filepath)
Cell In[11], line 5
nunerBase_model = AutoModelForCausalLM.from_pretrained(nunerBase_filepath)
File .../lib/python3.11/site-packages/ctransformers/hub.py:175 in from_pretrained
llm = LLM(
File .../lib/python3.11/site-packages/ctransformers/llm.py:253 in __init__
raise RuntimeError(
RuntimeError: Failed to create LLM 'gguf' from '.../quantized/nuextract-v1.5/NuExtract-v1.5-Q4_K_M.gguf'.
Curious if you have any suggestions. Possibly related: I had to uninstall and reinstall ctransformers because my system has glibc 2.28 instead of the required 2.29:
pip install ctransformers --no-binary ctransformers --no-cache-dir
Just trying to figure out whether I'm doing something wrong or whether it's an issue with my package install, since I couldn't install the latest version of ctransformers.
ctransformers hasn't been updated in over a year :( You'll have better luck with llama-cpp-python:
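pip install llama-cpp-python builds llama.cpp locally by default (you'll need cmake and a C++ compiler), so it should sidestep the glibc 2.29 wheel problem you hit with ctransformers. Here's a rough, untested sketch of loading the same file; the context size, GPU layer count, and prompt are placeholders to adjust for your setup, and the paths/filenames are the ones from your post:

from llama_cpp import Llama

# Load the local GGUF file (path from your post; point it at your actual location)
llm = Llama(
    model_path=".../quantized/nuextract-v1.5/NuExtract-v1.5-Q4_K_M.gguf",
    n_ctx=4096,       # context window; raise it if your extraction inputs are longer
    n_gpu_layers=0,   # set > 0 to offload layers to GPU if built with CUDA/Metal support
)

# Newer llama-cpp-python releases can also pull the file straight from the Hub
# (requires huggingface-hub to be installed):
# llm = Llama.from_pretrained(
#     repo_id="bartowski/NuExtract-v1.5-GGUF",
#     filename="NuExtract-v1.5-Q4_K_M.gguf",
# )

# Plain completion call; NuExtract expects its own template/schema-style prompt
out = llm("<prompt goes here>", max_tokens=256)
print(out["choices"][0]["text"])

If the from_pretrained route isn't available in the version you end up with, the local model_path approach above is the safer bet.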
Gotcha, much appreciated!