Vulkan
Trying Q4_K_M and Q4_K_S with KoboldCpp 1.56, I'm unable to offload any layers using Vulkan; the program exits with an error:
llm_load_tensors: offloaded 5/33 layers to GPU
llm_load_tensors: CPU buffer size = 7000.37 MiB
llm_load_tensors: Vulkan buffer size = 1057.73 MiB
...
GGML_ASSERT: ggml-vulkan.cpp:2738: src1 == nullptr || ggml_vk_dim01_contiguous(src1)
Could not attach to process. If your uid matches the uid of the target
process, check the setting of /proc/sys/kernel/yama/ptrace_scope, or try
again as the root user. For more details, see /etc/sysctl.d/10-ptrace.conf
ptrace: Operation not permitted.
No stack.
The program is not being run.
While I can only use Vulkan with 0 layers set in the quick launcher for Proctora-Q4_K_S/M, offloading works fine for other models: 24 layers with 8192 context on llama2-13b-psyfighter.Q4_K_M, for example, and 12 layers with 8192 context on mergemonster-13b-20231124.Q5_K_M.
The Vulkan backend doesn't support Mixtral (or Mixtral-like) models yet. Use CUDA/ROCm or OpenCL for the time being!
I discovered this page almost by accident.
I am very grateful for your work.