---
base_model:
- ammarali32/multi_verse_model
- jeiku/Theory_of_Mind_Roleplay_Mistral
- ammarali32/multi_verse_model
- jeiku/Alpaca_NSFW_Shuffled_Mistral
- ammarali32/multi_verse_model
- jeiku/Theory_of_Mind_Mistral
- ammarali32/multi_verse_model
- jeiku/Gnosis_Reformatted_Mistral
- ammarali32/multi_verse_model
- ammarali32/multi_verse_model
- jeiku/Re-Host_Limarp_Mistral
- ammarali32/multi_verse_model
- jeiku/Luna_LoRA_Mistral
library_name: transformers
license: cc-by-nc-4.0
tags:
- mergekit
- merge
language:
- en
---

# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details

This merge is entirely experimental; I've only tested it a few times, but it seems to work. Thanks for all the LoRAs, jeiku. I keep getting driver crashes training my own :\

### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method, with [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) as the base.
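The core idea of task arithmetic can be illustrated with a toy sketch: each fine-tuned model contributes a "task vector" (its weights minus the base's), and the merge adds a weighted sum of those vectors back onto the base. This is only a conceptual illustration on plain Python floats, not mergekit's actual implementation — real merges operate tensor-by-tensor, and mergekit's exact normalization behavior may differ.

```python
# Toy sketch of task arithmetic: merged = base + sum(w_i * (tuned_i - base)),
# optionally dividing by the sum of weights when normalizing.
# Conceptual only; mergekit works on full weight tensors.

def task_arithmetic(base, tuned_models, weights, normalize=True):
    """Merge models by adding weighted task vectors onto the base."""
    merged = dict(base)
    total = sum(weights) if normalize else 1.0
    for tuned, w in zip(tuned_models, weights):
        for name, value in tuned.items():
            # task vector component for this parameter
            merged[name] += w * (value - base[name]) / total
    return merged

base = {"w": 1.0}
tuned_a = {"w": 1.4}   # task vector: +0.4
tuned_b = {"w": 0.8}   # task vector: -0.2
merged = task_arithmetic(base, [tuned_a, tuned_b], weights=[0.5, 0.5])
print(merged["w"])  # 1.0 + 0.5*0.4 - 0.5*0.2 = 1.1
```

Because the task vectors are differences against the same base, overlapping skills can reinforce or cancel depending on sign, which is why the per-model weights below matter.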
### Models Merged

The following models were included in the merge:

* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Theory_of_Mind_Roleplay_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Roleplay_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Alpaca_NSFW_Shuffled_Mistral](https://huggingface.co/jeiku/Alpaca_NSFW_Shuffled_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Gnosis_Reformatted_Mistral](https://huggingface.co/jeiku/Gnosis_Reformatted_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Re-Host_Limarp_Mistral](https://huggingface.co/jeiku/Re-Host_Limarp_Mistral)
* [ammarali32/multi_verse_model](https://huggingface.co/ammarali32/multi_verse_model) + [jeiku/Luna_LoRA_Mistral](https://huggingface.co/jeiku/Luna_LoRA_Mistral)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: task_arithmetic
base_model: ammarali32/multi_verse_model
parameters:
  normalize: true
models:
  - model: ammarali32/multi_verse_model+jeiku/Gnosis_Reformatted_Mistral
    parameters:
      weight: 0.7
  - model: ammarali32/multi_verse_model+jeiku/Theory_of_Mind_Roleplay_Mistral
    parameters:
      weight: 0.65
  - model: ammarali32/multi_verse_model+jeiku/Luna_LoRA_Mistral
    parameters:
      weight: 0.5
  - model: ammarali32/multi_verse_model+jeiku/Re-Host_Limarp_Mistral
    parameters:
      weight: 0.8
  - model: ammarali32/multi_verse_model+jeiku/Alpaca_NSFW_Shuffled_Mistral
    parameters:
      weight: 0.75
  - model: ammarali32/multi_verse_model+jeiku/Theory_of_Mind_Mistral
    parameters:
      weight: 0.7
dtype: float16
```
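If `normalize: true` divides each weight by the sum of all weights — my understanding of mergekit's normalization, so check the mergekit documentation to confirm — the effective contribution of each LoRA-augmented model can be computed directly from the config. The snippet below does that arithmetic for the weights above:

```python
# Effective per-model scaling, assuming normalize: true divides each
# weight by the sum of all weights (assumed behavior; verify against
# the mergekit docs for task_arithmetic).
weights = {
    "Gnosis_Reformatted_Mistral": 0.70,
    "Theory_of_Mind_Roleplay_Mistral": 0.65,
    "Luna_LoRA_Mistral": 0.50,
    "Re-Host_Limarp_Mistral": 0.80,
    "Alpaca_NSFW_Shuffled_Mistral": 0.75,
    "Theory_of_Mind_Mistral": 0.70,
}
total = sum(weights.values())  # 4.1
effective = {name: w / total for name, w in weights.items()}
for name, w in sorted(effective.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {w:.3f}")
```

Under that assumption, Re-Host_Limarp_Mistral ends up with the largest relative share and Luna_LoRA_Mistral the smallest, with the normalized contributions summing to 1.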