---
base_model:
- knifeayumu/Magnum-v4-Cydonia-v1.2-22B
language:
- en
license: other
license_name: mrl
license_link: https://mistral.ai/licenses/MRL-0.1.md
library_name: transformers
---
# Llamacpp Quantizations of knifeayumu/Magnum-v4-Cydonia-v1.2-22B

Using [llama.cpp](https://github.com/ggerganov/llama.cpp) release b3985 for quantization.

Original model: [knifeayumu/Magnum-v4-Cydonia-v1.2-22B](https://huggingface.co/knifeayumu/Magnum-v4-Cydonia-v1.2-22B)
## Quant Types

| Filename | Quant type | File Size |
| -------- | ---------- | --------- |
| Magnum-v4-Cydonia-v1.2-22B-F16.gguf | F16 | 44.5 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q8_0.gguf | Q8_0 | 23.6 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q6_K.gguf | Q6_K | 18.3 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q5_K_M.gguf | Q5_K_M | 15.7 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q5_K_S.gguf | Q5_K_S | 15.3 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q4_K_M.gguf | Q4_K_M | 13.3 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q4_K_S.gguf | Q4_K_S | 12.7 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q3_K_L.gguf | Q3_K_L | 11.7 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q3_K_M.gguf | Q3_K_M | 10.8 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q3_K_S.gguf | Q3_K_S | 9.64 GB |
| Magnum-v4-Cydonia-v1.2-22B-Q2_K.gguf | Q2_K | 8.27 GB |
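The file sizes above translate directly into approximate bits per weight. A back-of-envelope sketch (assumes the table's sizes are decimal GB, a ~22B parameter count, and ignores GGUF metadata overhead):

```python
PARAMS = 22e9  # approximate parameter count of the 22B model (assumption)

def bits_per_weight(file_size_gb: float) -> float:
    """Rough bits-per-weight estimate from a GGUF file size in GB."""
    return file_size_gb * 1e9 * 8 / PARAMS

# F16 at 44.5 GB works out to roughly 16 bpw, and Q4_K_M at 13.3 GB
# to roughly 4.8 bpw, consistent with those quant types' nominal widths.
print(round(bits_per_weight(44.5), 1), round(bits_per_weight(13.3), 1))
```

This is only a sanity check on the table, not an exact measure; K-quants mix block sizes, so real per-tensor widths vary.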
Magnum? More like Deagle (dies in cringe)
Cydonia-v1.2-Magnum-v4-22B, but inverted. Some prefer anthracite-org/magnum-v4-22b over TheDrummer/Cydonia-22B-v1.2, so this merge was born.
This is a merge of pre-trained language models created using mergekit.
## Merge Details

### Merge Method

This model was merged using the SLERP merge method.
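SLERP (spherical linear interpolation) blends two weight tensors along the arc between them rather than along a straight line, which better preserves the magnitude of the weights. A minimal NumPy sketch of the idea (illustrative only, not mergekit's actual implementation):

```python
import numpy as np

def slerp(t: float, v0: np.ndarray, v1: np.ndarray, eps: float = 1e-8) -> np.ndarray:
    """Spherically interpolate between two weight tensors at factor t in [0, 1]."""
    f0 = v0.ravel().astype(np.float64)
    f1 = v1.ravel().astype(np.float64)
    # Angle between the two flattened weight vectors
    cos_omega = np.dot(f0, f1) / (np.linalg.norm(f0) * np.linalg.norm(f1))
    omega = np.arccos(np.clip(cos_omega, -1.0, 1.0))
    if omega < eps:
        # Nearly parallel vectors: fall back to plain linear interpolation
        return (1.0 - t) * v0 + t * v1
    s0 = np.sin((1.0 - t) * omega) / np.sin(omega)
    s1 = np.sin(t * omega) / np.sin(omega)
    return (s0 * f0 + s1 * f1).reshape(v0.shape)
```

At t=0 this returns the first tensor and at t=1 the second; in between, the result stays on the arc between them instead of cutting through the interior as linear interpolation would.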
### Models Merged

The following models were included in the merge:

* anthracite-org/magnum-v4-22b
* TheDrummer/Cydonia-22B-v1.2
### Configuration

The following YAML configuration was used to produce this model:
```yaml
models:
  - model: anthracite-org/magnum-v4-22b
  - model: TheDrummer/Cydonia-22B-v1.2
merge_method: slerp
base_model: anthracite-org/magnum-v4-22b
parameters:
  t: [0.1, 0.3, 0.6, 0.3, 0.1]
dtype: bfloat16
```
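The five-element `t` list is a gradient over the layer stack: mergekit spreads it across the layers so that early and late layers stay close to the base model while middle layers blend more heavily. A sketch of how such a per-layer factor could be derived via piecewise-linear interpolation (the exact interpolation mergekit uses may differ):

```python
import numpy as np

def per_layer_t(schedule: list[float], num_layers: int) -> np.ndarray:
    """Spread a short t schedule across num_layers by linear interpolation."""
    anchors = np.linspace(0.0, 1.0, len(schedule))     # schedule positions
    positions = np.linspace(0.0, 1.0, num_layers)      # layer positions
    return np.interp(positions, anchors, schedule)

# With t = [0.1, 0.3, 0.6, 0.3, 0.1], the interpolation factor ramps up
# toward 0.6 at the middle layers and back down to 0.1 at both ends.
factors = per_layer_t([0.1, 0.3, 0.6, 0.3, 0.1], num_layers=9)
```

Which endpoint of the blend each `t` value pulls toward follows mergekit's slerp convention relative to `base_model`; the schedule's shape (conservative at the ends, aggressive in the middle) is the part this config controls.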