Runtime error
Downloading (…)chat.ggmlv3.q4_0.bin: 100%|██████████| 3.79G/3.79G [01:57<00:00, 32.4MB/s]
gguf_init_from_file: invalid magic number 67676a74
error loading model: llama_model_loader: failed to load model from /home/user/.cache/huggingface/hub/models--TheBloke--Llama-2-7B-Chat-GGML/snapshots/76cd63c351ae389e1d4b91cab2cf470aab11864b/llama-2-7b-chat.ggmlv3.q4_0.bin
llama_load_model_from_file: failed to load model
Traceback (most recent call last):
  File "/home/user/app/app.py", line 55, in <module>
    llm2 = Llama(model_path=model_path)
  File "/home/user/.local/lib/python3.10/site-packages/llama_cpp/llama.py", line 365, in __init__
    assert self.model is not None
AssertionError