Fixed a small typo.

#1
by qingy2019 - opened
Files changed (1)
  1. README.md +3 -3
README.md CHANGED
@@ -46,7 +46,7 @@ Update context length settings and tokenizer
  | [Qwen2.5-14B-Instruct-Q5_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q5_K_M.gguf) | Q5_K_M | 10.51GB | false | High quality, *recommended*. |
  | [Qwen2.5-14B-Instruct-Q5_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q5_K_S.gguf) | Q5_K_S | 10.27GB | false | High quality, *recommended*. |
  | [Qwen2.5-14B-Instruct-Q4_K_L.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q4_K_L.gguf) | Q4_K_L | 9.57GB | false | Uses Q8_0 for embed and output weights. Good quality, *recommended*. |
- | [Qwen2.5-14B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q4_K_M.gguf) | Q4_K_M | 8.99GB | false | Good quality, default size for must use cases, *recommended*. |
+ | [Qwen2.5-14B-Instruct-Q4_K_M.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q4_K_M.gguf) | Q4_K_M | 8.99GB | false | Good quality, default size for most use cases, *recommended*. |
  | [Qwen2.5-14B-Instruct-Q3_K_XL.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q3_K_XL.gguf) | Q3_K_XL | 8.61GB | false | Uses Q8_0 for embed and output weights. Lower quality but usable, good for low RAM availability. |
  | [Qwen2.5-14B-Instruct-Q4_K_S.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q4_K_S.gguf) | Q4_K_S | 8.57GB | false | Slightly lower quality with more space savings, *recommended*. |
  | [Qwen2.5-14B-Instruct-Q4_0.gguf](https://huggingface.co/bartowski/Qwen2.5-14B-Instruct-GGUF/blob/main/Qwen2.5-14B-Instruct-Q4_0.gguf) | Q4_0 | 8.54GB | false | Legacy format, generally not worth using over similarly sized formats |
@@ -127,8 +127,8 @@ The I-quants are *not* compatible with Vulcan, which is also AMD, so if you have
 
  ## Credits
 
- Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset
+ Thank you kalomaze and Dampf for assistance in creating the imatrix calibration dataset.
 
- Thank you ZeroWw for the inspiration to experiment with embed/output
+ Thank you ZeroWw for the inspiration to experiment with embed/output.
 
  Want to support my work? Visit my ko-fi page here: https://ko-fi.com/bartowski
 
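For anyone picking a file from the table above, here is a minimal sketch of downloading the recommended Q4_K_M quant with the `huggingface_hub` Python client. The repo ID and filename come from the table; the use of this client is an assumption for illustration and is not part of this PR's changes.

```python
# Minimal sketch: fetch the Q4_K_M quant named in the table above.
# Assumes `pip install huggingface_hub`; not part of this PR's changes.
from huggingface_hub import hf_hub_download

path = hf_hub_download(
    repo_id="bartowski/Qwen2.5-14B-Instruct-GGUF",
    filename="Qwen2.5-14B-Instruct-Q4_K_M.gguf",
)
print(path)  # local path to the downloaded GGUF file (~8.99GB)
```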