# Duplicate from RoversX/llama2_7b_chat_unc-GGML (commit c6fa798).
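# NVIDIA's NGC package index, used to resolve the nvidia-* CUDA packages below.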
--extra-index-url https://pypi.ngc.nvidia.com
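# NVIDIA CUDA runtime and cuBLAS libraries (pulled from the NGC index above).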
nvidia-cuda-runtime
nvidia-cublas
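# llama-cpp-python pinned to v0.1.77 via a prebuilt CPython 3.10 manylinux x86_64 wheel.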
llama-cpp-python @ https://github.com/abetlen/llama-cpp-python/releases/download/v0.1.77/llama_cpp_python-0.1.77-cp310-cp310-manylinux_2_17_x86_64.manylinux2014_x86_64.whl
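# YAML parsing and PyTorch.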
pyyaml
torch
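# Minimal usage sketch after `pip install -r requirements.txt` (the model path
# "model.bin" is a placeholder for a local GGML model file, not part of this repo):
#   from llama_cpp import Llama
#   llm = Llama(model_path="model.bin")
#   out = llm("Q: What is the capital of France? A:", max_tokens=16)
#   print(out["choices"][0]["text"])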