---
base_model:
- cognitivecomputations/dolphin-2.7-mixtral-8x7b
- Sao10K/Sensualize-Mixtral-bf16
- jondurbin/bagel-dpo-8x7b-v0.2
- mistralai/Mixtral-8x7B-v0.1
- Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
- smelborp/MixtralOrochi8x7B
- mistralai/Mixtral-8x7B-v0.1
library_name: transformers
tags:
- mergekit
- merge
---
# maid-yuzu-v4

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

I merged several models I know because I had leftover merge credits. As expected, the results are not good; please do not use it.

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) as the base.

### Models Merged

The following models were included in the merge:
* [cognitivecomputations/dolphin-2.7-mixtral-8x7b](https://huggingface.co/cognitivecomputations/dolphin-2.7-mixtral-8x7b)
* [Sao10K/Sensualize-Mixtral-bf16](https://huggingface.co/Sao10K/Sensualize-Mixtral-bf16)
* [jondurbin/bagel-dpo-8x7b-v0.2](https://huggingface.co/jondurbin/bagel-dpo-8x7b-v0.2)
* [mistralai/Mixtral-8x7B-v0.1](https://huggingface.co/mistralai/Mixtral-8x7B-v0.1) + [Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora](https://huggingface.co/Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora)
* [smelborp/MixtralOrochi8x7B](https://huggingface.co/smelborp/MixtralOrochi8x7B)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
base_model:
  model:
    path: mistralai/Mixtral-8x7B-v0.1
dtype: bfloat16
merge_method: dare_ties
slices:
- sources:
  - layer_range: [0, 32]
    model:
      model:
        path: smelborp/MixtralOrochi8x7B
    parameters:
      density: 0.75
      weight: 0.7
  - layer_range: [0, 32]
    model:
      model:
        path: cognitivecomputations/dolphin-2.7-mixtral-8x7b
    parameters:
      density: 0.6
      weight: 0.1
  - layer_range: [0, 32]
    model:
      model:
        path: jondurbin/bagel-dpo-8x7b-v0.2
    parameters:
      density: 0.6
      weight: 0.1
  - layer_range: [0, 32]
    model:
      lora:
        path: Doctor-Shotgun/limarp-zloss-mixtral-8x7b-qlora
      model:
        path: mistralai/Mixtral-8x7B-v0.1
    parameters:
      density: 0.5
      weight: 0.25
  - layer_range: [0, 32]
    model:
      model:
        path: Sao10K/Sensualize-Mixtral-bf16
    parameters:
      density: 0.5
      weight: 0.2
  - layer_range: [0, 32]
    model:
      model:
        path: mistralai/Mixtral-8x7B-v0.1
```
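For reference, a merge like this can be re-run through mergekit's Python API rather than the CLI. Below is a minimal sketch, assuming a recent mergekit install (`pip install mergekit`) and the YAML above saved as `config.yaml`; the output path `./maid-yuzu-v4` is a placeholder, not part of the original card.

```python
# Minimal sketch: reproduce the merge with mergekit's Python API.
# Assumes mergekit is installed and the configuration above is in config.yaml.
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

OUTPUT_PATH = "./maid-yuzu-v4"  # hypothetical output directory

with open("config.yaml", "r", encoding="utf-8") as fp:
    config = MergeConfiguration.model_validate(yaml.safe_load(fp))

run_merge(
    config,
    OUTPUT_PATH,
    options=MergeOptions(
        cuda=True,            # set False to merge on CPU (slower)
        copy_tokenizer=True,  # copy the base model's tokenizer into the output
        lazy_unpickle=False,
        low_cpu_memory=False,
    ),
)
```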
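Once merged, the output directory loads like any other Mixtral checkpoint. A minimal sketch with `transformers`, assuming the merged weights live at the placeholder path from the snippet above (substitute the published repo id if loading from the Hub):

```python
# Minimal sketch: load and sample from the merged model with transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./maid-yuzu-v4"  # hypothetical local path or Hub repo id
tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.bfloat16,  # matches the merge dtype in the config above
    device_map="auto",
)

inputs = tokenizer("Hello,", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```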