sydneyfong committed
Commit 130c798
Parent(s): 48929bb
Update README.md
README.md CHANGED

@@ -24,8 +24,17 @@ Apparently the model supports function calling as well if you supply a more elab
 
 Due to resource limitations we only have a select handful of quantizations. Hopefully they are useful for your purposes.
 
+- MD5 (glm4-9b-chat-IQ3_S.gguf) = d6f4f51c5c4e7d3e8c1d93044fd92b9d
+- MD5 (glm4-9b-chat-Q4_K_M.gguf) = 9514ec1112b3e2a47cac52179d796c84
+- MD5 (glm4-9b-chat-Q4_K_S.gguf) = 38f48ddf4dc5f6845d070de5d1c3e4c6
+- MD5 (glm4-9b-chat-Q5_K_M.gguf) = 99717a90672ea7cf34f0ea23cff47c8a
+- MD5 (glm4-9b-chat-Q5_K_S.gguf) = b720b3cb4c5190bd36eac26f385e979b
+- MD5 (glm4-9b-chat-Q6_K.gguf) = b8a36cf46408ec558d471c38e55989c1
+- MD5 (glm4-9b-chat-Q8_0.gguf) = 1e2aea60e7c9453d560738f6bc06885e
+
 ## Legal / License
 
 *"Built with glm-4"*
 
 I just copied the LICENSE file from https://huggingface.co/THUDM/glm-4-9b-chat as required for redistribution.
+
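The added checksums use the BSD-style "MD5 (file) = hash" notation. As a minimal sketch of how a downloaded file could be checked against them, assuming Python 3, that the .gguf file sits in the working directory, and with only two of the hashes copied in as examples:

```python
# Minimal MD5 verification sketch (not part of the original README).
import hashlib
import sys

# Expected hashes copied from the README list above (two shown as examples).
EXPECTED = {
    "glm4-9b-chat-Q4_K_M.gguf": "9514ec1112b3e2a47cac52179d796c84",
    "glm4-9b-chat-Q8_0.gguf": "1e2aea60e7c9453d560738f6bc06885e",
}

def md5_of(path: str, chunk_size: int = 1 << 20) -> str:
    """Hash the file in 1 MiB chunks so multi-gigabyte GGUFs never sit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

if __name__ == "__main__":
    name = sys.argv[1]  # e.g. glm4-9b-chat-Q4_K_M.gguf
    actual = md5_of(name)
    status = "OK" if actual == EXPECTED.get(name) else "MISMATCH"
    print(f"{name}: {actual} ({status})")
```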