---
base_model:
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- Epiculous/Fett-uccine-7B
library_name: transformers
tags:
- mergekit
- merge
- alpaca
- mistral
license: other
---

Thanks to @Epiculous for the dope model, the help with LLM backends, and the support overall. I'd also like to thank @kalomaze for the dope sampler additions to SillyTavern (ST). @SanjiWatsuki, thank you very much for the help, and for the model!

ST users can find the TextGenPreset in the folder labeled as such.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/642265bc01c62c1e4102dc36/9obNSalcJqCilQwr_4ssM.jpeg)

Quants: Thank you @bartowski! https://huggingface.co/bartowski/Kunocchini-exl2 and @jeiku https://huggingface.co/jeiku/Konocchini-7B_GGUF

Thanks to @konz00 for the additional GGUF quants: https://huggingface.co/konz00/Kunocchini-7b-GGUF

The following models were included in the merge:
* [SanjiWatsuki/Kunoichi-DPO-v2-7B](https://huggingface.co/SanjiWatsuki/Kunoichi-DPO-v2-7B)
* [Epiculous/Fett-uccine-7B](https://huggingface.co/Epiculous/Fett-uccine-7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
slices:
  - sources:
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
      - model: Epiculous/Fett-uccine-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
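
### Usage

A minimal sketch of loading the merged model with transformers. The repository id below is a placeholder (an assumption, not confirmed by this card); substitute the actual repo id. `torch_dtype=torch.bfloat16` matches the `dtype` used in the merge config above, and the Alpaca-style prompt follows the card's `alpaca` tag.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repo id; replace with this model's actual Hugging Face repo.
model_id = "your-username/Kunocchini-7b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # matches the merge's bfloat16 dtype
    device_map="auto",
)

# Alpaca-style prompt, per the card's "alpaca" tag.
prompt = (
    "### Instruction:\n"
    "Write a short greeting.\n\n"
    "### Response:\n"
)
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
output = model.generate(**inputs, max_new_tokens=64, do_sample=True, temperature=0.7)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```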