llama.cpp
#3
by goodasdgood · opened
C:\Users\ArabTech\Desktop\3\llama.cpp>llama-cli -m C:\Users\ArabTech\Desktop\3\llama-3.1-8b-lexi-uncensored-v2-q8_0.gguf -p "You are a helpful assistant" -cnv -ngl 33
warning: not compiled with GPU offload support, --gpu-layers option will be ignored
It runs without the GPU.
Why?
@goodasdgood
This is not a model issue but a llama.cpp build topic: the warning means your llama-cli binary was compiled without GPU offload support, so `-ngl` is ignored. Check the instructions here for compiling with CUDA support:
https://github.com/ggerganov/llama.cpp/blob/master/docs/build.md
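As a rough sketch of what those build docs describe (assuming an NVIDIA GPU with the CUDA Toolkit installed; see the linked page for the authoritative flags for your setup), rebuilding with CUDA enabled looks like:

```shell
# From the llama.cpp source directory, configure with CUDA support enabled.
# GGML_CUDA=ON is the CMake option named in the llama.cpp build docs.
cmake -B build -DGGML_CUDA=ON

# Build in Release mode; the resulting llama-cli will honor -ngl / --gpu-layers.
cmake --build build --config Release
```

After rebuilding, rerun the same llama-cli command and the "not compiled with GPU offload support" warning should be gone, with layers offloaded to the GPU.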
Orenguteng changed discussion status to closed