Multiparadigm_7B is a merge of the following models:

- MTSAIR/multi_verse_model
- ResplendentAI/Paradigm_7B
Thanks to mradermacher, static GGUF quants are available here.
The following YAML configuration was used to produce this merge:

```yaml
slices:
  - sources:
      - model: MTSAIR/multi_verse_model
        layer_range: [0, 32]
      - model: ResplendentAI/Paradigm_7B
        layer_range: [0, 32]
merge_method: slerp
base_model: MTSAIR/multi_verse_model
parameters:
  t:
    - filter: self_attn
      value: [0, 0.6, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.6, 0.7, 0.3, 0]
    - value: 0.6
dtype: bfloat16
```
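The `slerp` method blends each pair of weight tensors by spherical linear interpolation, with the `t` lists above supplying per-layer interpolation fractions for the self-attention and MLP blocks (t=0 keeps the base model, t=1 takes the other model). As a rough illustration only, not mergekit's actual implementation, a minimal NumPy sketch of slerp on two flattened tensors might look like:

```python
import numpy as np

def slerp(t, a, b, eps=1e-8):
    """Spherical linear interpolation between weight vectors a and b.

    t=0 returns a (the base model's tensor), t=1 returns b.
    Falls back to linear interpolation when the vectors are nearly parallel.
    """
    a_n = a / (np.linalg.norm(a) + eps)
    b_n = b / (np.linalg.norm(b) + eps)
    dot = np.clip(np.dot(a_n, b_n), -1.0, 1.0)
    theta = np.arccos(dot)  # angle between the two weight directions
    if theta < eps:
        # Nearly parallel vectors: slerp degenerates to lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * theta) * a + np.sin(t * theta) * b) / np.sin(theta)

# Interpolating halfway between two orthogonal unit vectors
a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
mid = slerp(0.5, a, b)  # stays on the unit sphere, unlike plain averaging
```

Unlike a plain weighted average, slerp follows the arc between the two points, which preserves the norm of the interpolated tensor when the endpoints share a norm.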
Detailed results can be found here.
| Metric | Value |
|---|---|
| Avg. | 76.08 |
| AI2 Reasoning Challenge (25-Shot) | 73.21 |
| HellaSwag (10-Shot) | 88.95 |
| MMLU (5-Shot) | 64.28 |
| TruthfulQA (0-shot) | 76.87 |
| Winogrande (5-shot) | 83.82 |
| GSM8k (5-shot) | 69.37 |