jeiku committed
Commit 913d320
1 parent: 9848a18

Upload 8 files
.gitattributes CHANGED
@@ -33,3 +33,10 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
  *.zip filter=lfs diff=lfs merge=lfs -text
  *.zst filter=lfs diff=lfs merge=lfs -text
  *tfevents* filter=lfs diff=lfs merge=lfs -text
+ Bologna-f16.gguf filter=lfs diff=lfs merge=lfs -text
+ Bologna-Q2_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Bologna-Q3_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Bologna-Q4_K_S.gguf filter=lfs diff=lfs merge=lfs -text
+ Bologna-Q4_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Bologna-Q5_K.gguf filter=lfs diff=lfs merge=lfs -text
+ Bologna-Q6_K.gguf filter=lfs diff=lfs merge=lfs -text
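The new `.gitattributes` rules route matching files through the git-LFS filter instead of storing them as regular blobs. A minimal sketch of how such glob patterns select files, using Python's `fnmatch` as a stand-in (an assumption for illustration only: real gitattributes matching has its own pattern rules beyond simple globs):

```python
from fnmatch import fnmatch

# A subset of the patterns from the .gitattributes diff above.
lfs_patterns = [
    "*.zip", "*.zst", "*tfevents*",
    "Bologna-f16.gguf", "Bologna-Q2_K.gguf", "Bologna-Q4_K.gguf",
]

def tracked_by_lfs(filename: str) -> bool:
    """Return True if any LFS pattern matches the file name."""
    return any(fnmatch(filename, pat) for pat in lfs_patterns)

print(tracked_by_lfs("Bologna-Q4_K.gguf"))  # True: exact-name pattern added above
print(tracked_by_lfs("README.md"))          # False: stored as a normal git blob
```

Note the GGUF entries are exact file names rather than a `*.gguf` wildcard, so only these specific quantizations are LFS-tracked by this commit.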
Bologna-Q2_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:5462dfc9f65872e31f8a77d34b6b2d61ec2d455ac06c7e75a7ebf018236c07c5
+ size 1083755840
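Each `ADDED` entry here is not the model weights themselves but a three-line git-LFS pointer file; the actual blob lives in LFS storage, addressed by its SHA-256. A small sketch parsing one such pointer into its fields (the pointer text is copied verbatim from the diff above):

```python
# Pointer text exactly as committed for Bologna-Q2_K.gguf.
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:5462dfc9f65872e31f8a77d34b6b2d61ec2d455ac06c7e75a7ebf018236c07c5
size 1083755840
"""

def parse_lfs_pointer(text: str) -> dict:
    """Split each 'key value' line of a git-LFS pointer into a dict."""
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

info = parse_lfs_pointer(pointer)
print(info["oid"])   # sha256:5462dfc9...
print(info["size"])  # 1083755840 bytes, i.e. roughly 1.01 GiB for the Q2_K file
```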
Bologna-Q3_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4a3e002c26991310f1841906e2f116a5d837e39836690244061c82939a4bf0c
+ size 1391419200
Bologna-Q4_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:55a29905cbb40bc2a3f270d4b192120b5764e276260fab0b0e9905aae0dc5d3f
+ size 1708595520
Bologna-Q4_K_S.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:10b3e871abdc5823338ad8b65e399068decfe062e823838135aa74a39ea875b8
+ size 1620695360
Bologna-Q5_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2ad9abed75bdf776ceb222fd5e2553ba6772ba5e951ebc8185f2dabc003f7af6
+ size 1993390400
Bologna-Q6_K.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:fd62f712c3f8491685d4b4d7631e6d7d9e350782ad2ef1b6d3d9ea83f9a1946a
+ size 2295984960
Bologna-f16.gguf ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:80dfdbe4e226097c3e3f2c5da8be246184a013327a51ac21abda2f3edafdac71
+ size 5593341696
README.md ADDED
@@ -0,0 +1,101 @@
+ ---
+ base_model:
+ - jeiku/Rosa_v1_3B
+ - jeiku/Toxic_DPO_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Futa_Erotica_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Humiliation_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/PIPPA_128_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Bluemoon_cleaned_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Theory_of_Mind_128_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Gnosis_256_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Alpaca_128_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Everything_v3_128_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/No_Robots_Alpaca_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/LimaRP_StableLM
+ - jeiku/Rosa_v1_3B
+ - jeiku/Theory_of_Mind_RP_128_StableLM
+ tags:
+ - mergekit
+ - merge
+
+ ---
+ # Bologna
+
+ This is a merge of pre-trained language models created using [mergekit](https://github.com/cg123/mergekit).
+
+ ## Merge Details
+ ### Merge Method
+
+ This model was merged using the [linear](https://arxiv.org/abs/2203.05482) merge method.
+
+ ### Models Merged
+
+ The following models were included in the merge:
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Toxic_DPO_StableLM](https://huggingface.co/jeiku/Toxic_DPO_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Futa_Erotica_StableLM](https://huggingface.co/jeiku/Futa_Erotica_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Humiliation_StableLM](https://huggingface.co/jeiku/Humiliation_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/PIPPA_128_StableLM](https://huggingface.co/jeiku/PIPPA_128_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Bluemoon_cleaned_StableLM](https://huggingface.co/jeiku/Bluemoon_cleaned_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Theory_of_Mind_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_128_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Gnosis_256_StableLM](https://huggingface.co/jeiku/Gnosis_256_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Alpaca_128_StableLM](https://huggingface.co/jeiku/Alpaca_128_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Everything_v3_128_StableLM](https://huggingface.co/jeiku/Everything_v3_128_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/No_Robots_Alpaca_StableLM](https://huggingface.co/jeiku/No_Robots_Alpaca_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/LimaRP_StableLM](https://huggingface.co/jeiku/LimaRP_StableLM)
+ * [jeiku/Rosa_v1_3B](https://huggingface.co/jeiku/Rosa_v1_3B) + [jeiku/Theory_of_Mind_RP_128_StableLM](https://huggingface.co/jeiku/Theory_of_Mind_RP_128_StableLM)
+
+ ### Configuration
+
+ The following YAML configuration was used to produce this model:
+
+ ```yaml
+ merge_method: linear
+ models:
+   - model: jeiku/Rosa_v1_3B+jeiku/No_Robots_Alpaca_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Toxic_DPO_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Alpaca_128_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Everything_v3_128_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Futa_Erotica_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Gnosis_256_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Humiliation_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_128_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/PIPPA_128_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/LimaRP_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Theory_of_Mind_RP_128_StableLM
+     parameters:
+       weight: 1
+   - model: jeiku/Rosa_v1_3B+jeiku/Bluemoon_cleaned_StableLM
+     parameters:
+       weight: 1
+ dtype: float16
+ ```
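Since every entry in the config carries `weight: 1`, a linear merge that normalizes its weights gives each of the twelve LoRA-merged variants an equal 1/12 share of the final parameters. A minimal sketch of that normalized linear averaging over toy parameter vectors (pure Python; the values are hypothetical, not actual model tensors):

```python
def linear_merge(tensors, weights):
    """Weighted average of same-shape parameter vectors,
    normalizing the weights so they sum to 1."""
    total = sum(weights)
    norm = [w / total for w in weights]
    return [sum(w * t[i] for w, t in zip(norm, tensors))
            for i in range(len(tensors[0]))]

# Four toy "models", equal weight 1 each -> plain element-wise mean.
models = [[0.0, 4.0], [2.0, 6.0], [4.0, 8.0], [6.0, 10.0]]
merged = linear_merge(models, [1, 1, 1, 1])
print(merged)  # [3.0, 7.0]
```

With equal weights this reduces to the element-wise mean, which is why adding another model to the config dilutes every existing model's contribution proportionally.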