Kquant03 committed
Commit c067908 · 1 Parent(s): 792b7a9

Update README.md

Files changed (1): README.md +13 -0
README.md CHANGED
@@ -14,6 +14,19 @@ tags:

 I finally figured out how to quantize FrankenMoE properly, so prepare for a flood of GGUF models from me. This one is scripted to be into whatever you're planning to do to it.
 Special thanks to [Cultrix](https://huggingface.co/CultriX) for the [base model](https://huggingface.co/CultriX/MistralTrix-v1).
+
+ ## Provided files
+
+ | Name | Quant method | Bits | Size | Max RAM required | Use case |
+ | ---- | ---- | ---- | ---- | ---- | ----- |
+ | [Q2_K Tiny](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q2_k.gguf) | Q2_K | 2 | 10 GB | 12 GB | smallest, significant quality loss - not recommended for most purposes |
+ | [Q3_K_M](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q3_k_m.gguf) | Q3_K_M | 3 | 13.1 GB | 15.1 GB | very small, high quality loss |
+ | [Q4_0](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q4_0.gguf) | Q4_0 | 4 | 17 GB | 19 GB | legacy; small, very high quality loss - prefer using Q3_K_M |
+ | [Q4_K_M](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q4_k_m.gguf) | Q4_K_M | 4 | ~17 GB | ~19 GB | medium, balanced quality - recommended |
+ | [Q5_0](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q5_0.gguf) | Q5_0 | 5 | 20.7 GB | 22.7 GB | legacy; large, balanced quality |
+ | [Q5_K_M](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q5_k_m.gguf) | Q5_K_M | 5 | ~20.7 GB | ~22.7 GB | large, balanced quality - recommended |
+ | [Q6 XL](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q6_k.gguf) | Q6_K | 6 | 24.7 GB | 26.7 GB | very large, extremely low quality loss |
+ | [Q8 XXL](https://huggingface.co/Kquant03/MistralTrix-4x9B-ERP-GGUF/blob/main/ggml-model-q8_0.gguf) | Q8_0 | 8 | 32 GB | 34 GB | very large, extremely low quality loss - not recommended |
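
For reference, here is a minimal sketch of loading one of the files above with `huggingface_hub` and `llama-cpp-python`. Neither package is mentioned elsewhere on this card, so treat the package choice, context size, and prompt as assumptions rather than the author's recommended setup.

```python
# Minimal sketch (not from this card): download the Q4_K_M file listed above
# and run a short prompt on CPU.
#   pip install huggingface_hub llama-cpp-python
from huggingface_hub import hf_hub_download
from llama_cpp import Llama

# Fetch ggml-model-q4_k_m.gguf (~17 GB on disk, ~19 GB RAM per the table).
model_path = hf_hub_download(
    repo_id="Kquant03/MistralTrix-4x9B-ERP-GGUF",
    filename="ggml-model-q4_k_m.gguf",
)

# n_gpu_layers=0 keeps everything on the CPU; raise it to offload layers to a GPU.
llm = Llama(model_path=model_path, n_ctx=4096, n_gpu_layers=0)

out = llm("Q: What is a Mixture of Experts model?\nA:", max_tokens=128)
print(out["choices"][0]["text"])
```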
  # "[What is a Mixture of Experts (MoE)?](https://huggingface.co/blog/moe)"
  ### (from the MistralAI papers...click the quoted question above to navigate to it directly.)