---
base_model:
- MrRobotoAI/Thor-v1.1-8b-1024k
- aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K
- MrRobotoAI/Thor-v1.1d-8b-1024k
- MrRobotoAI/Thor-v1.1e-8b-1024k
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the linear [DARE](https://arxiv.org/abs/2311.03099) merge method, with [aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K](https://huggingface.co/aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K) as the base.

### Models Merged

The following models were included in the merge:
* [MrRobotoAI/Thor-v1.1-8b-1024k](https://huggingface.co/MrRobotoAI/Thor-v1.1-8b-1024k)
* [MrRobotoAI/Thor-v1.1d-8b-1024k](https://huggingface.co/MrRobotoAI/Thor-v1.1d-8b-1024k)
* [MrRobotoAI/Thor-v1.1e-8b-1024k](https://huggingface.co/MrRobotoAI/Thor-v1.1e-8b-1024k)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
models:
  - model: MrRobotoAI/Thor-v1.1e-8b-1024k
    parameters:
      weight: 0.55
      density: 0.9
  - model: MrRobotoAI/Thor-v1.1-8b-1024k
    parameters:
      weight: 0.125
      density: 0.9
  - model: MrRobotoAI/Thor-v1.1d-8b-1024k
    parameters:
      weight: 0.2
      density: 0.9
  - model: aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K
    parameters:
      weight: 0.125
      density: 0.9
merge_method: dare_linear
base_model: aifeifei798/llama3-8B-DarkIdol-2.2-Uncensored-1048K
parameters:
  normalize: true
dtype: float16
```
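For intuition, here is a minimal NumPy sketch of what a `dare_linear` merge does per tensor: each model's delta from the base is randomly pruned down to the configured `density`, the surviving entries are rescaled by `1/density` to preserve expected magnitude, and the pruned deltas are combined with the (normalized, per `normalize: true`) `weight` values. This is an illustrative approximation under simplified assumptions (flat arrays, a hypothetical `dare_linear_merge` function), not mergekit's actual implementation:

```python
import numpy as np

def dare_linear_merge(base, finetuned, weights, density, seed=0):
    """Sketch of DARE-linear: drop-and-rescale each task vector,
    then add the weighted average of the results back to the base."""
    rng = np.random.default_rng(seed)
    weights = np.asarray(weights, dtype=np.float64)
    weights = weights / weights.sum()  # normalize: true in the config
    merged_delta = np.zeros_like(base)
    for w, model in zip(weights, finetuned):
        delta = model - base                      # task vector vs. base
        mask = rng.random(base.shape) < density   # keep ~`density` of entries
        merged_delta += w * (delta * mask) / density  # rescale survivors
    return base + merged_delta
```

The actual merge is produced by pointing mergekit at the YAML above (e.g. with the `mergekit-yaml` command); the sketch only shows why `density: 0.9` drops roughly 10% of each delta and why the four `weight` values act as a weighted average.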