yukiarimo committed on
Commit
c192e1f
1 Parent(s): 7477f33

Update README.md

Files changed (1): README.md (+4 -4)
README.md CHANGED
@@ -136,10 +136,10 @@ The Yuna AI model was trained on a massive dataset containing diverse topics. Th
  ## Provided files
  | Name | Quant method | Bits | Size | Max RAM required | Use case |
  | ---- | ---- | ---- | ---- | ---- | ----- |
- | [yuna-ai-v3-q3_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q3_k_m.gguf) | Q3_K_M | 3 | 3.30 GB | 5.80 GB | very small, high quality loss |
- | [yuna-ai-v3-q4_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q4_k_m.gguf) | Q4_K_M | 4 | 4.08 GB | 6.58 GB | medium, balanced quality - recommended |
- | [yuna-ai-v3-q5_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q5_k_m.gguf) | Q5_K_M | 5 | 4.78 GB | 7.28 GB | large, very low quality loss - recommended |
- | [yuna-ai-v3-q6_k.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q6_k.gguf) | Q6_K | 6 | 5.53 GB | 8.03 GB | very large, extremely low quality loss |
+ | [yuna-ai-v2-q3_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q3_k_m.gguf) | Q3_K_M | 3 | 3.30 GB | 5.80 GB | very small, high quality loss |
+ | [yuna-ai-v2-q4_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q4_k_m.gguf) | Q4_K_M | 4 | 4.08 GB | 6.58 GB | medium, balanced quality - recommended |
+ | [yuna-ai-v2-q5_k_m.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q5_k_m.gguf) | Q5_K_M | 5 | 4.78 GB | 7.28 GB | large, very low quality loss - recommended |
+ | [yuna-ai-v2-q6_k.gguf](https://huggingface.co/yukiarimo/yuna-ai-v2/blob/main/yuna-ai-v2-q6_k.gguf) | Q6_K | 6 | 5.53 GB | 8.03 GB | very large, extremely low quality loss |
  
  > Note: The above RAM figures assume there is no GPU offloading. If layers are offloaded to the GPU, RAM usage will be reduced, and VRAM will be used instead.