Gokuldaskumar committed 93a6bb1 (1 parent: 0ed9d69)

Upload README.md with huggingface_hub

Files changed (1): README.md (+42, -0)

README.md ADDED
---
language:
- en
license: other
tags:
- chat
- llama-cpp
- gguf-my-repo
license_name: tongyi-qianwen
license_link: https://huggingface.co/Qwen/Qwen1.5-32B-Chat/blob/main/LICENSE
pipeline_tag: text-generation
---

# Gokuldaskumar/Qwen1.5-32B-Chat-Q4_0-GGUF
This model was converted to GGUF format from [`Qwen/Qwen1.5-32B-Chat`](https://huggingface.co/Qwen/Qwen1.5-32B-Chat) using llama.cpp via ggml.ai's [GGUF-my-repo](https://huggingface.co/spaces/ggml-org/gguf-my-repo) space.
Refer to the [original model card](https://huggingface.co/Qwen/Qwen1.5-32B-Chat) for more details on the model.
## Use with llama.cpp

Install llama.cpp through brew.

```bash
brew install ggerganov/ggerganov/llama.cpp
```
Invoke the llama.cpp server or the CLI.

CLI:

```bash
llama-cli --hf-repo Gokuldaskumar/Qwen1.5-32B-Chat-Q4_0-GGUF --model qwen1.5-32b-chat.Q4_0.gguf -p "The meaning to life and the universe is"
```

Server:

```bash
llama-server --hf-repo Gokuldaskumar/Qwen1.5-32B-Chat-Q4_0-GGUF --model qwen1.5-32b-chat.Q4_0.gguf -c 2048
```
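
Once the server is running, you can send it a request over HTTP. A minimal sketch, assuming the default bind address of `127.0.0.1:8080` and the OpenAI-compatible `/v1/chat/completions` endpoint exposed by recent llama.cpp server builds:

```bash
# Ask the running llama-server for a chat completion
# (host/port are the llama.cpp defaults; adjust if you passed --host/--port)
curl http://127.0.0.1:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "user", "content": "Briefly introduce yourself."}
        ],
        "temperature": 0.7,
        "max_tokens": 128
      }'
```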

Note: You can also use this checkpoint directly through the [usage steps](https://github.com/ggerganov/llama.cpp?tab=readme-ov-file#usage) listed in the llama.cpp repo:

```bash
git clone https://github.com/ggerganov/llama.cpp && cd llama.cpp && make && ./main -m qwen1.5-32b-chat.Q4_0.gguf -n 128
```
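
Unlike the `--hf-repo` commands above, the build-from-source route expects the GGUF file to already be present locally. A minimal sketch of fetching it first, assuming the `huggingface_hub` CLI is installed:

```bash
# Download the quantized checkpoint into the current directory
# (requires the huggingface_hub CLI, e.g. `pip install huggingface_hub`)
huggingface-cli download Gokuldaskumar/Qwen1.5-32B-Chat-Q4_0-GGUF \
  qwen1.5-32b-chat.Q4_0.gguf --local-dir .
```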