### This is the config.yml for ABC_Books/test003 ###
models:
  - model: OmnicromsBrain/Eros_Scribe-7b
    parameters:
      weight: 0.1111
      density: 0.9
  - model: OmnicromsBrain/EverythingBagel-DPO-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: OmnicromsBrain/NeuralStar_Fusion-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: OmnicromsBrain/StoryFusion-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: OmnicromsBrain/ToppyCox-7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: Aratako/Antler-7B-Novel-Writing
    parameters:
      weight: 0.1111
      density: 0.9
  - model: Aratako/SniffyOtter-7B-Novel-Writing-NSFW
    parameters:
      weight: 0.1111
      density: 0.9
  - model: FPHam/Autolycus-Mistral_7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: FPHam/Karen_TheEditor_V2_CREATIVE_Mistral_7B
    parameters:
      weight: 0.0556
      density: 0.9
  - model: FPHam/Writing_Partner_Mistral_7B
    parameters:
      weight: 0.0556
      density: 0.9
  ### Here they add in a previously developed model that understands writing styles, literature, poetry, psychology, and philosophy, but still retains some bias and is censored. The two previous merges are also added into this final merge. ###
  - model: MrRobotoAI/Hathor
    parameters:
      weight: 0.1111
      density: 0.9
  ### Notice that this last merge constitutes only a small portion of the final model ###
  - model: MrRobotoAI/Test002a
    parameters:
      weight: 0.0556
      density: 0.9
  - model: MrRobotoAI/Test001a
    parameters:
      weight: 0.1111
      density: 0.9
merge_method: dare_ties
base_model: MrRobotoAI/Test001a
parameters:
  normalize: true
  int8_mask: true
dtype: float16
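
The thirteen weights above sum to roughly 1.0003 (5 × 0.1111 + 8 × 0.0556), not exactly 1, which is why `normalize: true` matters: the merge weights are rescaled to sum to 1 before the task vectors are combined. Below is a minimal Python sketch of that rescaling, using the weights from this config; the logic is an illustration of the idea, not mergekit's actual implementation.

```python
# Sketch of what `normalize: true` implies for the merge weights.
# Weights are copied from the config above; rescaling is illustrative only.
weights = {
    "Eros_Scribe-7b": 0.1111,
    "EverythingBagel-DPO-7B": 0.0556,
    "NeuralStar_Fusion-7B": 0.0556,
    "StoryFusion-7B": 0.0556,
    "ToppyCox-7B": 0.0556,
    "Antler-7B-Novel-Writing": 0.1111,
    "SniffyOtter-7B-Novel-Writing-NSFW": 0.1111,
    "Autolycus-Mistral_7B": 0.0556,
    "Karen_TheEditor_V2_CREATIVE_Mistral_7B": 0.0556,
    "Writing_Partner_Mistral_7B": 0.0556,
    "Hathor": 0.1111,
    "Test002a": 0.0556,
    "Test001a": 0.1111,
}

total = sum(weights.values())  # ~1.0003, close to but not exactly 1
normalized = {name: w / total for name, w in weights.items()}

for name, w in sorted(normalized.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {w:.4f}")
```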
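A config like this is normally handed to mergekit's CLI (something along the lines of `mergekit-yaml config.yml ./test003`), after which the output directory loads like any Hugging Face checkpoint. The sketch below assumes the merged weights landed in a local `./test003` directory; that path is hypothetical, not something stated in the config.

```python
# Minimal sketch: load the merged model from a hypothetical mergekit
# output directory and generate a short sample.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_path = "./test003"  # hypothetical local output directory

tokenizer = AutoTokenizer.from_pretrained(model_path)
model = AutoModelForCausalLM.from_pretrained(
    model_path,
    torch_dtype=torch.float16,  # matches `dtype: float16` in the config
)

prompt = "Write the opening paragraph of a gothic short story."
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=200)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```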