Konstanta Series RP Models (successful variants)
Successful variants of the experimental Konstanta merge series. They are pretty good!
This is a merge of pre-trained language models created using mergekit.
Alright, so, this model seems to be REALLY good. Konstanta-7B is pretty good too, but this one is still marginally better.
This model was merged using the DARE TIES merge method using Inv/Konstanta-7B as a base.
The following models were included in the merge:
- KatyTheCutie/LemonadeRP-4.5.3
- senseable/WestLake-7B-v2
The following YAML configuration was used to produce this model:
```yaml
merge_method: dare_ties
dtype: bfloat16
parameters:
  int8_mask: true
base_model: Inv/Konstanta-7B
models:
  - model: Inv/Konstanta-7B
  - model: KatyTheCutie/LemonadeRP-4.5.3
    parameters:
      density: 0.65
      weight: [0.65, 0.40, 0.35, 0.30, 0.35, 0.40, 0.25]
  - model: senseable/WestLake-7B-v2
    parameters:
      density: 0.85
      weight: [0.25, 0.40, 0.35, 0.30, 0.35, 0.40, 0.65]
```
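For intuition, DARE TIES combines DARE's drop-and-rescale of task vectors (`density` is roughly the fraction of each model's delta parameters kept, with survivors rescaled to preserve the expected value) with TIES-style sign-consensus merging of the surviving deltas. A minimal numpy sketch of that idea, not mergekit's actual implementation; note that the 7-element `weight` lists in the config are layer-wise gradients that mergekit interpolates across layers, while this sketch uses a single scalar weight per model for simplicity:

```python
import numpy as np

def dare_prune(delta, density, rng):
    """DARE: randomly keep a `density` fraction of delta parameters
    and rescale the survivors by 1/density to preserve expectation."""
    mask = rng.random(delta.shape) < density
    return np.where(mask, delta / density, 0.0)

def dare_ties_merge(base, tuned_models, densities, weights, seed=0):
    """Merge fine-tuned models into `base` via DARE-pruned task
    vectors combined with TIES-style sign consensus (a sketch)."""
    rng = np.random.default_rng(seed)
    # Weighted, pruned task vectors (deltas from the base model).
    deltas = [w * dare_prune(m - base, d, rng)
              for m, d, w in zip(tuned_models, densities, weights)]
    stacked = np.stack(deltas)
    # TIES sign election: the majority sign of the summed deltas wins.
    consensus = np.sign(stacked.sum(axis=0))
    # Keep only deltas that agree with the elected sign, then sum.
    agree = np.where(np.sign(stacked) == consensus, stacked, 0.0)
    return base + agree.sum(axis=0)

# Toy example with 4 "parameters" per model.
base = np.zeros(4)
m1 = np.array([1.0, -1.0, 0.5, 0.2])
m2 = np.array([0.8, 1.0, -0.5, 0.1])
merged = dare_ties_merge(base, [m1, m2],
                         densities=[0.65, 0.85], weights=[0.4, 0.6])
print(merged.shape)  # (4,)
```

With `density: 1.0` no deltas are dropped and the result reduces to a plain sign-consensus merge, which is why the higher density on WestLake-7B-v2 keeps more of that model's task vector intact.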