---
base_model:
- jeiku/Theory_of_Mind_Mistral
- jeiku/Gnosis_Reformatted_Mistral
- Undi95/Mistral-7B-small_pippa_limaRP-v3-lora
- jeiku/Theory_of_Mind_Roleplay_Mistral
tags:
- mergekit
- merge

---

![image/png](https://cdn-uploads.huggingface.co/production/uploads/626dfb8786671a29c715f8a9/uAfKDeavEBisJYmDxjJsE.png)

technicolor is the result of the following SLERP merge; it was then merged with the LoRAs listed below to produce rainbow:

```yaml
slices:
  - sources:
      - model: paulml/OGNO-7B
        layer_range: [0, 32]
      - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
        layer_range: [0, 32]
merge_method: slerp
base_model: SanjiWatsuki/Kunoichi-DPO-v2-7B
parameters:
  t:
    - filter: self_attn
      value: [0, 0.5, 0.3, 0.7, 1]
    - filter: mlp
      value: [1, 0.5, 0.7, 0.3, 0]
    - value: 0.5
dtype: bfloat16
```
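
Here `t` is the interpolation factor: roughly, t = 0 keeps the base model's weights (SanjiWatsuki/Kunoichi-DPO-v2-7B) and t = 1 takes paulml/OGNO-7B's, with each value list forming a gradient across the 32 layers, applied separately to attention and MLP tensors (0.5 everywhere else). The config can be run with mergekit's `mergekit-yaml` command. As a minimal sketch of the underlying per-tensor operation (illustrative PyTorch, not mergekit's exact implementation):

```python
import torch

def slerp(t: float, v0: torch.Tensor, v1: torch.Tensor, eps: float = 1e-8) -> torch.Tensor:
    """Spherical linear interpolation between two weight tensors.

    t = 0 returns v0 and t = 1 returns v1; intermediate values follow
    the arc between the flattened tensors rather than a straight line.
    """
    a, b = v0.flatten().double(), v1.flatten().double()
    cos_omega = torch.dot(a, b) / (a.norm() * b.norm() + eps)
    omega = torch.arccos(cos_omega.clamp(-1.0, 1.0))
    if omega < eps:  # near-parallel tensors: fall back to linear interpolation
        return (1 - t) * v0 + t * v1
    s0 = torch.sin((1 - t) * omega) / torch.sin(omega)
    s1 = torch.sin(t * omega) / torch.sin(omega)
    return (s0 * a + s1 * b).reshape(v0.shape).to(v0.dtype)
```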

# rainbow

This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).

## Merge Details
### Merge Method

This model was merged using the [task arithmetic](https://arxiv.org/abs/2212.04089) merge method using technicolor as a base.
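
Task arithmetic treats each fine-tuned model as a "task vector" (its weight delta from the base) and adds a weighted sum of those deltas back onto the base. A minimal sketch over PyTorch state dicts, assuming `normalize: true` simply divides by the sum of the weights (an assumption for illustration, not mergekit's verbatim behavior):

```python
import torch

def task_arithmetic(base: dict, finetuned: list[dict], weights: list[float],
                    normalize: bool = True) -> dict:
    """Merge models by adding weighted task vectors (deltas from the base).

    Assumes all state dicts share the same keys and tensor shapes.
    """
    scale = sum(weights) if normalize else 1.0
    merged = {}
    for name, base_param in base.items():
        delta = sum(w * (m[name] - base_param) for m, w in zip(finetuned, weights))
        merged[name] = base_param + delta / scale
    return merged
```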

### Models Merged

The following models were included in the merge:
* technicolor + [jeiku/Theory_of_Mind_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Mistral)
* technicolor + [jeiku/Gnosis_Reformatted_Mistral](https://huggingface.co/jeiku/Gnosis_Reformatted_Mistral)
* technicolor + [Undi95/Mistral-7B-small_pippa_limaRP-v3-lora](https://huggingface.co/Undi95/Mistral-7B-small_pippa_limaRP-v3-lora)
* technicolor + [jeiku/Theory_of_Mind_Roleplay_Mistral](https://huggingface.co/jeiku/Theory_of_Mind_Roleplay_Mistral)

### Configuration

The following YAML configuration was used to produce this model. The `model+lora` syntax instructs mergekit to apply each LoRA to technicolor before the models are merged:

```yaml
merge_method: task_arithmetic
base_model: technicolor
parameters:
  normalize: true
models:
  - model: technicolor+jeiku/Theory_of_Mind_Roleplay_Mistral
    parameters:
      weight: 1
  - model: technicolor+jeiku/Theory_of_Mind_Mistral
    parameters:
      weight: 1
  - model: technicolor+jeiku/Gnosis_Reformatted_Mistral
    parameters:
      weight: 1
  - model: technicolor+Undi95/Mistral-7B-small_pippa_limaRP-v3-lora
    parameters:
      weight: 1
dtype: float16
```
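
## Usage

To try the merged model with 🤗 Transformers (the repo id below is a placeholder; substitute wherever rainbow is actually hosted):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "jeiku/rainbow" is a placeholder repo id, not a confirmed upload path.
model_id = "jeiku/rainbow"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "Describe a character who can read minds."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=128, do_sample=True, temperature=0.8)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```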