---
base_model:
- MrRobotoAI/Test002a
- MrRobotoAI/Test001a
- MrRobotoAI/Hathor
- Aratako/Antler-7B-Novel-Writing
- OmnicromsBrain/NeuralStar_Fusion-7B
- FPHam/Writing_Partner_Mistral_7B
- OmnicromsBrain/ToppyCox-7B
- OmnicromsBrain/StoryFusion-7B
- Aratako/SniffyOtter-7B-Novel-Writing-NSFW
- FPHam/Autolycus-Mistral_7B
- FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B
- OmnicromsBrain/EverythingBagel-DPO-7B
- OmnicromsBrain/Eros_Scribe-7b
library_name: transformers
tags:
- mergekit
- merge
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

### Merge Method

This model was merged using the [DARE](https://arxiv.org/abs/2311.03099) [TIES](https://arxiv.org/abs/2306.01708) merge method, with [MrRobotoAI/Test001a](https://huggingface.co/MrRobotoAI/Test001a) as the base model.

### Models Merged

The following models were included in the merge:

* [MrRobotoAI/Test002a](https://huggingface.co/MrRobotoAI/Test002a)
* [MrRobotoAI/Hathor](https://huggingface.co/MrRobotoAI/Hathor)
* [Aratako/Antler-7B-Novel-Writing](https://huggingface.co/Aratako/Antler-7B-Novel-Writing)
* [OmnicromsBrain/NeuralStar_Fusion-7B](https://huggingface.co/OmnicromsBrain/NeuralStar_Fusion-7B)
* [FPHam/Writing_Partner_Mistral_7B](https://huggingface.co/FPHam/Writing_Partner_Mistral_7B)
* [OmnicromsBrain/ToppyCox-7B](https://huggingface.co/OmnicromsBrain/ToppyCox-7B)
* [OmnicromsBrain/StoryFusion-7B](https://huggingface.co/OmnicromsBrain/StoryFusion-7B)
* [Aratako/SniffyOtter-7B-Novel-Writing-NSFW](https://huggingface.co/Aratako/SniffyOtter-7B-Novel-Writing-NSFW)
* [FPHam/Autolycus-Mistral_7B](https://huggingface.co/FPHam/Autolycus-Mistral_7B)
* [FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B](https://huggingface.co/FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B)
* [OmnicromsBrain/EverythingBagel-DPO-7B](https://huggingface.co/OmnicromsBrain/EverythingBagel-DPO-7B)
* [OmnicromsBrain/Eros_Scribe-7b](https://huggingface.co/OmnicromsBrain/Eros_Scribe-7b)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
### This is the config.yml for ABC_Books/test003 ###
models:
  - model: OmnicromsBrain/Eros_Scribe-7b
    parameters:
      weight: 0.1111
      density: 0.9
  - model: OmnicromsBrain/EverythingBagel-DPO-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: OmnicromsBrain/NeuralStar_Fusion-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: OmnicromsBrain/StoryFusion-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: OmnicromsBrain/ToppyCox-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: Aratako/Antler-7B-Novel-Writing
    parameters:
      weight: 0.1111
      density: 0.9
  - model: Aratako/SniffyOtter-7B-Novel-Writing-NSFW
    parameters:
      weight: 0.1111
      density: 0.9
  - model: FPHam/Autolycus-Mistral_7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: FPHam/Writing_Partner_Mistral_7B
    parameters:
      weight: 0.0556
      density: 0.9
  ### Here we add a previously built model that understands writing styles,
  ### literature, poetry, psychology, and philosophy, but that still retains
  ### some bias and is censored. The two previous merges are also added to
  ### this final merge. ###
  - model: MrRobotoAI/Hathor
    parameters:
      weight: 0.1111
      density: 0.9
  ### Notice that the last merge constitutes only a small portion of the final model. ###
  - model: MrRobotoAI/Test002a
    parameters:
      weight: 0.0556
      density: 0.9
  - model: MrRobotoAI/Test001a
    parameters:
      weight: 0.1111
      density: 0.9
merge_method: dare_ties
base_model: MrRobotoAI/Test001a
parameters:
  normalize: true
  int8_mask: true
dtype: float16
```
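For intuition about the `weight` and `density` parameters: under `dare_ties`, each model contributes a task vector (its difference from the base model); DARE randomly drops a fraction `1 - density` of each task vector and rescales the survivors, TIES then elects a per-parameter sign and keeps only the contributions that agree with it, and the weighted result is added back onto the base. The sketch below is a deliberately simplified illustration of those steps on a single tensor, not mergekit's actual implementation (which works layer by layer and handles details such as `int8_mask`):

```python
import torch

def dare_ties_merge(base, deltas, weights, density=0.9):
    """Toy DARE-TIES merge of one weight tensor.

    base:    the base model's tensor
    deltas:  per-model task vectors (fine-tuned tensor minus base)
    weights: per-model mixing weights
    density: fraction of each task vector that DARE keeps
    """
    w = torch.tensor(weights) / sum(weights)  # normalize: true
    pruned = []
    for delta in deltas:
        # DARE: randomly drop (1 - density) of the entries, rescale the rest
        keep = (torch.rand_like(delta) < density).float()
        pruned.append(keep * delta / density)
    stacked = torch.stack([wi * d for wi, d in zip(w, pruned)])
    # TIES-style sign election: keep only contributions that agree with
    # the majority sign at each parameter position
    elected = torch.sign(stacked.sum(dim=0))
    agree = (torch.sign(stacked) == elected).float()
    return base + (stacked * agree).sum(dim=0)

# Toy demonstration on random tensors
torch.manual_seed(0)
base = torch.randn(4, 4)
deltas = [torch.randn(4, 4) * 0.01 for _ in range(3)]
merged = dare_ties_merge(base, deltas, weights=[0.1111, 0.0556, 0.1111])
print(merged)
```

Since the weights in the config already sum to roughly 1.0, `normalize: true` only nudges them, so a 0.1111 entry carries about twice the influence of a 0.0556 entry.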
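### Reproducing the Merge

To reproduce the model, save the configuration above as `config.yml` and run it through mergekit (the `mergekit-yaml` command-line tool does the same thing in one line). The sketch below uses mergekit's Python API as documented in its README; option names may vary between mergekit versions, and the output directory is a placeholder:

```python
import yaml
import torch

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Parse the YAML configuration shown above (saved locally as config.yml)
with open("config.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge, writing the result to ./merged
run_merge(
    merge_config,
    out_path="./merged",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # use a GPU if one is present
        copy_tokenizer=True,             # copy the base model's tokenizer
        lazy_unpickle=True,              # lower peak memory while loading
        low_cpu_memory=False,
    ),
)
```

Note that all thirteen listed checkpoints are downloaded during the merge, so expect substantial disk usage.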
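## Usage

The card declares `library_name: transformers`, so the merged model loads like any other Mistral-7B-class checkpoint. The repository id below is taken from the comment in the config (`ABC_Books/test003`); substitute the actual id if the model was published under a different name:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

repo_id = "ABC_Books/test003"  # assumed from the config comment; adjust if needed

tokenizer = AutoTokenizer.from_pretrained(repo_id)
model = AutoModelForCausalLM.from_pretrained(
    repo_id,
    torch_dtype=torch.float16,  # matches the merge's dtype: float16
    device_map="auto",
)

prompt = "Write the opening paragraph of a gothic short story."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=200, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```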