Upload folder using huggingface_hub
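Commits with this message are typically produced by the `upload_folder` helper in the `huggingface_hub` library. A minimal sketch of how such an upload could look (the repo id and local folder path below are placeholders, not values taken from this commit):

```python
# Hedged sketch: pushing a local folder of GGUF files to the Hub with huggingface_hub.
# repo_id and folder_path are placeholders; substitute your own repository and path.
from huggingface_hub import upload_folder

upload_folder(
    repo_id="your-username/your-gguf-repo",   # placeholder repo id
    folder_path="./gguf-out",                 # local folder holding the .gguf files
    repo_type="model",
    commit_message="Upload folder using huggingface_hub",
    allow_patterns=["*.gguf", "README.md", ".gitattributes"],
)
```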
.gitattributes
CHANGED
@@ -40,3 +40,8 @@ s3nh-Mistral-7B-Evol-Instruct-Chinese.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
 s3nh-Mistral-7B-Evol-Instruct-Chinese.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
 s3nh-Mistral-7B-Evol-Instruct-Chinese.Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
 s3nh-Mistral-7B-Evol-Instruct-Chinese.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+mistral-7b-evol-instruct-chinese.Q3_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+mistral-7b-evol-instruct-chinese.Q4_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+mistral-7b-evol-instruct-chinese.Q5_K_M.gguf filter=lfs diff=lfs merge=lfs -text
+mistral-7b-evol-instruct-chinese.Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
+mistral-7b-evol-instruct-chinese.Q8_0.gguf filter=lfs diff=lfs merge=lfs -text
README.md
CHANGED
@@ -31,17 +31,13 @@ The key difference between GGJT and GGUF is the use of a key-value structure for
 This allows for new metadata to be added without breaking compatibility with existing models, and to annotate the model with additional information that may be useful for
 inference or for identifying the model.

-### Perplexity params
-
-Model Measure Q2_K Q3_K_S Q3_K_M Q3_K_L Q4_0 Q4_1 Q4_K_S Q4_K_M Q5_0 Q5_1 Q5_K_S Q5_K_M Q6_K Q8_0 F16
-7B perplexity 6.7764 6.4571 6.1503 6.0869 6.1565 6.0912 6.0215 5.9601 5.9862 5.9481 5.9419 5.9208 5.9110 5.9070 5.9066
-13B perplexity 5.8545 5.6033 5.4498 5.4063 5.3860 5.3608 5.3404 5.3002 5.2856 5.2706 5.2785 5.2638 5.2568 5.2548 5.2543
-

 ### inference

+User: Tell me a story about what quantization is and what we need to build.
+
+Me: OK, you can see the video [https://youtu.be/q8GhYRlQ1dU](https://youtu.be/q8GhYRlQ1dU) I made yesterday; it may help you understand.

 # Original model card
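The updated `### inference` section above points to a video rather than code. For a runnable starting point, here is a hedged sketch using `llama-cpp-python` with one of the quantized files added in this commit; the plain-completion prompt below is an assumption, so check the original model card for the exact chat template:

```python
# Hedged sketch: local inference with llama-cpp-python on a file from this commit.
# Assumption: the model accepts a plain instruction prompt; the real template may differ.
from llama_cpp import Llama

llm = Llama(
    model_path="mistral-7b-evol-instruct-chinese.Q4_K_M.gguf",  # added in this commit
    n_ctx=2048,
)

prompt = "Tell me a story about what quantization is and what we need to build."
out = llm(prompt, max_tokens=256, stop=["</s>"])
print(out["choices"][0]["text"])
```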
mistral-7b-evol-instruct-chinese.Q3_K_S.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d1da6f1cde51c8f780ee3f5634bd94e399ff409b6dce3734e503f8a25ba9bbd
+size 3164567264
mistral-7b-evol-instruct-chinese.Q4_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7294f40d531cc7519265c44e986a20153c2b52a485c62d6a0a6add4bcb3c7565
+size 4368439008
mistral-7b-evol-instruct-chinese.Q5_K_M.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:12455a5219df99d8db5f6f09d89d707e4633e2bb8cbbe8723eef990d1ba009f8
+size 5131409120
mistral-7b-evol-instruct-chinese.Q6_K.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b1234788cd40528fe9dfe0cfd67a3b5f4562c0232335d9984e825efe26ec07dd
+size 5942064864
mistral-7b-evol-instruct-chinese.Q8_0.gguf
ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ca9e1adfe12da142f8f7efee8f6a0802486a5514039c8e53298bf987e945694
+size 7695857376
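The `ADDED` entries above are Git LFS pointer files (version, oid, size), not the GGUF weights themselves; the actual binaries are fetched through LFS. After downloading, a file can be sanity-checked against its pointer. The helper below is illustrative and not part of this repository; the oid and size are copied from the Q4_K_M pointer above:

```python
# Illustrative check (not part of this repo): verify a downloaded GGUF against the
# byte size and SHA-256 digest recorded in its Git LFS pointer file.
import hashlib
from pathlib import Path

def matches_lfs_pointer(path: str, expected_sha256: str, expected_size: int) -> bool:
    """Return True if the file's size and SHA-256 digest match the LFS pointer."""
    p = Path(path)
    if p.stat().st_size != expected_size:
        return False
    digest = hashlib.sha256()
    with p.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # read in 1 MiB chunks
            digest.update(chunk)
    return digest.hexdigest() == expected_sha256

# oid and size taken from the Q4_K_M pointer listed above.
print(matches_lfs_pointer(
    "mistral-7b-evol-instruct-chinese.Q4_K_M.gguf",
    "7294f40d531cc7519265c44e986a20153c2b52a485c62d6a0a6add4bcb3c7565",
    4368439008,
))
```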