Update README.md
README.md CHANGED
@@ -30,7 +30,23 @@ Negative prompt : (worst quality:1.6),(low quality:1.4),(normal quality:1.2),low
 <img src="https://files.catbox.moe/70gi4h.jpg" width="1700" height="">
 
 
-# <
+# <lora: 7th anime XL BASE :0.255:lbw=1:0,1,1,1,0.9,0.7,0.5,0.3,0.1,0,0,0,0.1,0.3,0.5,0.7,0.9,1,1,1>
 
 <img src="https://i.imgur.com/5K87RDg.jpeg" width="1700" height="">
 <img src="https://i.imgur.com/Qseddy9.jpeg" width="1700" height="">
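The `lbw=` list in the tag above assigns a per-block strength to the LoRA. As a sketch of how such a list is interpreted — assuming the common 20-slot SDXL layout (BASE, IN00–IN08, M00, OUT00–OUT08) used by LoRA Block Weight-style extensions; `SDXL_BLOCKS` and `parse_lbw` are illustrative names, not part of any tool's API:

```python
# Assumed SDXL block order for a 20-value LoRA Block Weight list.
SDXL_BLOCKS = (["BASE"]
               + [f"IN{i:02d}" for i in range(9)]
               + ["M00"]
               + [f"OUT{i:02d}" for i in range(9)])

def parse_lbw(lbw: str) -> dict:
    """Parse a comma-separated lbw string into {block_name: weight}."""
    weights = [float(w) for w in lbw.split(",")]
    if len(weights) != len(SDXL_BLOCKS):
        raise ValueError(f"expected {len(SDXL_BLOCKS)} weights, got {len(weights)}")
    return dict(zip(SDXL_BLOCKS, weights))

ratios = parse_lbw("0,1,1,1,0.9,0.7,0.5,0.3,0.1,0,0,0,0.1,0.3,0.5,0.7,0.9,1,1,1")
```

Read this way, the list applies the LoRA at full strength on the outermost UNet blocks and fades it to zero at BASE and the middle block.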
+
+# <Merge: The Recipe :0.7>
+
+1. Merge Animagine 3.0 and 3.1 using a base alpha of 0.49, merging layers from IN00 to OUT11 at 0.82.
+2. Train a model on sd_xl_base_1.0_0.9vae.safetensors with ~4.6 million images on A100x4 at a learning rate of 1e-5 for ~2 epochs; then, for compatibility with the Animagine models' CLIP modules, further train it on a dataset of 164 AI-generated images to refine CLIP and the UNet, using PRODIGY with initial D at 1e-6, a D coefficient of 0.9, and a batch size of 4 for 1500 steps.
+3. Merge the models from steps 1 and 2 twice, using two sets of per-block coefficients:
+・Set 1: 0.2, 0.6, 0.8, 0.9, 0.0, 0.8, 0.4, 1.0, 0.7, 0.9, 0.3, 0.1, 0.1, 0.5, 0.6, 0.0, 1.0, 0.6, 0.5, 0.5
+・Set 2: 0.9, 0.8, 0.6, 0.3, 0.9, 0.1, 0.4, 0.7, 0.4, 0.6, 0.2, 0.3, 0.0, 0.8, 0.3, 0.7, 0.7, 0.8, 0.2, 0.3
+4. Merge Set 1 and Set 2 using a base alpha of 0.79, merging layers from IN00 to OUT11 at 0.73, to create Set 3.
+5. Train a LoRA on Set 3 with a curated dataset of 12,018 AI-generated images, the Lion optimizer, a batch size of 4, gradient accumulation steps of 16, and a learning rate of 3e-5 for 4 epochs. This LoRA is then blended into Set 3 itself at a strength of 0.2, producing 7th anime B.
+6. Train another LoRA on 7th anime B with the same dataset as in step 2, but with the Lion optimizer, a batch size of 4, lr_scheduler_num_cycles at 5, and a learning rate of 1e-5 for 80 epochs. This LoRA is then blended into 7th anime B itself at a strength of 0.366, finally producing 7th anime A.
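Steps 1 and 4 describe weighted merges with one ratio for the base (text encoder and other non-UNet tensors) and another for the UNet layers IN00–OUT11. A minimal sketch of that interpolation, with tensors stood in by plain Python lists for brevity; `merge_checkpoints` is an illustrative name, and a real merge would operate on safetensors state dicts with torch:

```python
def merge_checkpoints(a, b, base_alpha=0.49, block_alpha=0.82):
    """Per-tensor linear merge: out = (1 - alpha) * A + alpha * B.
    UNet keys take block_alpha; all other keys take base_alpha.
    The key prefix below follows the standard SD checkpoint layout;
    tensors are plain lists here purely for illustration."""
    merged = {}
    for key, wa in a.items():
        wb = b.get(key)
        if wb is None:
            merged[key] = wa  # keep A's tensor when B has no counterpart
            continue
        alpha = block_alpha if key.startswith("model.diffusion_model.") else base_alpha
        merged[key] = [(1 - alpha) * x + alpha * y for x, y in zip(wa, wb)]
    return merged

m = merge_checkpoints(
    {"model.diffusion_model.input_blocks.0.0.weight": [0.0, 1.0],
     "conditioner.embedders.0.weight": [0.0]},
    {"model.diffusion_model.input_blocks.0.0.weight": [1.0, 1.0],
     "conditioner.embedders.0.weight": [1.0]},
)
```

Merge tools in the supermerger style apply the same interpolation but allow a distinct alpha per UNet block, which is presumably how the two 20-value coefficient sets in step 3 are applied.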