---
base_model:
- migtissera/SynthIA-70B-v1.2b
- 152334H/miqu-1-70b-sf
- Xwin-LM/Xwin-LM-70B-V0.1
library_name: transformers
tags:
- mergekit
- merge
---
# merge

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.

### Models Merged

The following models were included in the merge:
* [migtissera/SynthIA-70B-v1.2b](https://huggingface.co/migtissera/SynthIA-70B-v1.2b)
* [152334H/miqu-1-70b-sf](https://huggingface.co/152334H/miqu-1-70b-sf)
* [Xwin-LM/Xwin-LM-70B-V0.1](https://huggingface.co/Xwin-LM/Xwin-LM-70B-V0.1)

### Configuration

The following YAML configuration was used to produce this model:

```yaml
merge_method: linear
parameters:
  weight: 1.0
slices:
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [0, 24]
  - model: Xwin-LM/Xwin-LM-70B-V0.1
    layer_range: [0, 24]
- sources:
  - model: migtissera/SynthIA-70B-v1.2b
    layer_range: [10, 34]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [25, 49]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [50, 74]
  - model: Xwin-LM/Xwin-LM-70B-V0.1
    layer_range: [25, 49]
  - model: migtissera/SynthIA-70B-v1.2b
    layer_range: [35, 59]
- sources:
  - model: 152334H/miqu-1-70b-sf
    layer_range: [79, 80]
  - model: Xwin-LM/Xwin-LM-70B-V0.1
    layer_range: [50, 51]
  - model: migtissera/SynthIA-70B-v1.2b
    layer_range: [60, 61]
dtype: float16
tokenizer_source: model:152334H/miqu-1-70b-sf
```
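Conceptually, a linear merge is a normalized weighted average of corresponding parameter tensors across the source models (the "model soups" idea from the paper linked above). The sketch below illustrates only that averaging step; `linear_merge` and its arguments are hypothetical helpers invented for this example, not mergekit's actual implementation, and it ignores the per-slice layer stacking that the configuration above performs.

```python
# Simplified sketch of a linear merge: a normalized weighted average of
# corresponding parameter tensors. Illustrative only -- not mergekit's code.
import torch


def linear_merge(state_dicts, weights):
    """Merge a list of state dicts by weighted average (weights are normalized)."""
    total = sum(weights)
    merged = {}
    for name, ref in state_dicts[0].items():
        # Accumulate in float32 for numerical stability, cast back at the end.
        acc = torch.zeros_like(ref, dtype=torch.float32)
        for w, sd in zip(weights, state_dicts):
            acc += (w / total) * sd[name].float()
        merged[name] = acc.to(ref.dtype)
    return merged
```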
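Assuming mergekit is installed (`pip install mergekit`), a configuration like the one above can be saved as `config.yml` and rendered into a checkpoint with mergekit's `mergekit-yaml` entry point, e.g. `mergekit-yaml config.yml ./merged`. The result then loads like any other transformers causal LM; the `./merged` path below is a placeholder for wherever the merged weights live.

```python
# Hypothetical usage sketch: load the merged checkpoint with transformers.
# "./merged" is a placeholder output directory, not a published repo id.
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("./merged")
model = AutoModelForCausalLM.from_pretrained(
    "./merged",
    torch_dtype="auto",   # keep the float16 weights as stored
    device_map="auto",    # requires accelerate; shards across available GPUs
)

prompt = "Briefly explain what a linear model merge is."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```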