---
license: apache-2.0
base_model:
- Qwen/Qwen2-VL-7B-Instruct
---
## How to use it
1. Get the code (HimariO's llama.cpp fork, `qwen2-vl` branch):
```shell
git clone https://github.com/HimariO/llama.cpp.git
cd llama.cpp
git switch qwen2-vl
```
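Before building, it can be worth confirming the checkout really is on the fork's `qwen2-vl` branch. The sketch below demonstrates the check on a throwaway repository so it is self-contained; in practice, run the `symbolic-ref` line from inside the `llama.cpp` directory you just cloned.

```shell
# Sketch: verify which branch a checkout is on (demonstrated on a
# throwaway repo; in the real tree, expect "qwen2-vl").
repo=$(mktemp -d)
git -C "$repo" init -q
git -C "$repo" checkout -q -b qwen2-vl
branch=$(git -C "$repo" symbolic-ref --short HEAD)
echo "current branch: $branch"
```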
2. Edit the `Makefile` (e.g. with `nano Makefile`) to add a `llama-qwen2vl-cli` build target:
```diff
diff --git a/Makefile b/Makefile
index 8a903d7e..51403be2 100644
--- a/Makefile
+++ b/Makefile
@@ -1485,6 +1485,14 @@ libllava.a: examples/llava/llava.cpp \
$(OBJ_ALL)
$(CXX) $(CXXFLAGS) -static -fPIC -c $< -o $@ -Wno-cast-qual
+llama-qwen2vl-cli: examples/llava/qwen2vl-cli.cpp \
+ examples/llava/llava.cpp \
+ examples/llava/llava.h \
+ examples/llava/clip.cpp \
+ examples/llava/clip.h \
+ $(OBJ_ALL)
+ $(CXX) $(CXXFLAGS) $< $(filter-out %.h $<,$^) -o $@ $(LDFLAGS) -Wno-cast-qual
+
llama-llava-cli: examples/llava/llava-cli.cpp \
examples/llava/llava.cpp \
examples/llava/llava.h \
```
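The patch mirrors the existing `llama-llava-cli` target, linking `qwen2vl-cli.cpp` against the shared `llava`/`clip` sources. A quick, hedged way to confirm the edit landed is to grep the `Makefile` for the new target. The snippet below uses a stand-in `Makefile` so it is self-contained; in the real tree, run the `grep` from the `llama.cpp` checkout root.

```shell
# Sketch: check that the Makefile now contains the new target
# (stand-in Makefile used here for a self-contained demo).
dir=$(mktemp -d)
printf 'llama-qwen2vl-cli: examples/llava/qwen2vl-cli.cpp\n' > "$dir/Makefile"
grep -c '^llama-qwen2vl-cli:' "$dir/Makefile"   # prints 1 when the target is present
```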
3. Build (the flags below configure a CUDA build; `61` is the compute capability of the target GPU, so adjust it for your hardware)
```shell
cmake . -DGGML_CUDA=ON -DCMAKE_CUDA_COMPILER=$(which nvcc) -DCMAKE_CUDA_ARCHITECTURES=61
make -j35
```
4. Run
```shell
./bin/llama-qwen2vl-cli \
  -m ./thomas-yanxin/Qwen2-VL-7B-GGUF/Qwen2-VL-7B-GGUF-Q4_K_M.gguf \
  --mmproj ./thomas-yanxin/Qwen2-VL-7B-GGUF/qwen2vl-vision.gguf \
  -p "Describe the image" \
  --image "./thomas-yanxin/Qwen2-VL-7B-GGUF/1.png"
```
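For repeated runs, a small wrapper that fails fast when an input file is missing can save time. The function below is a hypothetical helper, not part of the repo; the binary path and prompt match the command above, and the file arguments are whatever model, projector, and image you downloaded.

```shell
# describe_image: hypothetical wrapper around llama-qwen2vl-cli that
# checks the model, vision projector, and image files exist before
# invoking the binary.
describe_image() {
  model=$1; mmproj=$2; image=$3
  for f in "$model" "$mmproj" "$image"; do
    [ -f "$f" ] || { echo "missing file: $f" >&2; return 1; }
  done
  ./bin/llama-qwen2vl-cli -m "$model" --mmproj "$mmproj" \
    -p "Describe the image" --image "$image"
}

# Example usage (paths as in the command above):
# describe_image ./thomas-yanxin/Qwen2-VL-7B-GGUF/Qwen2-VL-7B-GGUF-Q4_K_M.gguf \
#   ./thomas-yanxin/Qwen2-VL-7B-GGUF/qwen2vl-vision.gguf \
#   ./thomas-yanxin/Qwen2-VL-7B-GGUF/1.png
```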