ppbrown committed
Commit a8604b2 (1 parent: baf8728)

Create METHODOLOGY-adamw.md

Files changed (1): METHODOLOGY-adamw.md (+29 -0)
This describes my first experiments with this dataset.
I will also be tweaking it with Adafactor.

I was initially playing around with LoRAs, to see which combination of images did best.
Previously, I spent a LOT of time experimenting with training and getting nowhere, so I decided to stick with
adaptive optimizers. Sadly, even a 4090 can't do a finetune of SDXL with an adaptive optimizer, which is why I was
training LoRAs.
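
As a rough back-of-the-envelope check on that VRAM claim (my assumptions, not figures from this repo: an SDXL UNet of roughly 2.6B parameters, full fp32 training, and AdamW keeping two fp32 state tensors per parameter):

```python
# Rough VRAM estimate for full fp32 AdamW finetuning of the SDXL UNet.
# Assumptions: ~2.6B parameters, fp32 weights/grads, and AdamW's two
# fp32 state tensors per parameter (exp_avg and exp_avg_sq).
GB = 1024**3
params = 2.6e9       # approximate SDXL UNet parameter count (assumption)
bytes_per = 4        # fp32

weights = params * bytes_per
grads = params * bytes_per
adamw_state = 2 * params * bytes_per

total_gb = (weights + grads + adamw_state) / GB
print(f"~{total_gb:.0f} GB for weights + grads + optimizer state")
# Well over a 4090's 24 GB before activations are even counted.
```

Adafactor sidesteps much of this by factoring its second-moment statistics, which is why the adaptive-on-a-4090 route pushes you toward LoRAs for full AdamW training.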

My results were very mixed. Eventually, I figured out that Stable Diffusion, while seemingly magic,
cannot TRULY figure out "good" anime style if you throw a whole bunch of MIXED styles at it. So I
decided to drastically change my strategy and throw out everything that was not strictly in one style.

Once I got down to <200 images, and had a reasonable LoRA, I decided to give a full finetune a try,
"the hard way" (i.e. no adaptive optimizer).
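
In PyTorch terms, "the hard way" boils down to plain AdamW with a manually chosen, fixed learning rate, rather than an auto-tuning optimizer. A minimal sketch (the toy model and hyperparameter values here are placeholders, not the settings from this run):

```python
import torch
import torch.nn as nn

# Stand-in for the SDXL UNet; the real thing has billions of parameters.
model = nn.Linear(16, 16)

# Plain AdamW: the learning rate is a knob YOU tune by hand, unlike
# adaptive optimizers (Prodigy, DAdaptation, ...) that pick it for you.
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-5,              # hand-tuned; placeholder value
    betas=(0.9, 0.999),
    weight_decay=1e-2,
)

x = torch.randn(4, 16)
loss = model(x).pow(2).mean()
loss.backward()
optimizer.step()
optimizer.zero_grad()
```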

So, I set EMA=CPU (because there wasn't enough VRAM to fit it on the GPU), played with the learning rate a little, and...
that was it? Nope!
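
The EMA=CPU setting amounts to keeping the exponential-moving-average shadow copy of the weights in system RAM instead of VRAM, updating it after each step. A rough illustration of the idea (my own sketch, not OneTrainer's actual implementation; the decay value is arbitrary):

```python
import torch
import torch.nn as nn

class CpuEMA:
    """Keep EMA shadow weights on the CPU so they cost no VRAM."""

    def __init__(self, model: nn.Module, decay: float = 0.999):
        self.decay = decay
        # Shadow copies live in system RAM, not on the GPU.
        self.shadow = {
            name: p.detach().to("cpu", copy=True)
            for name, p in model.named_parameters()
        }

    @torch.no_grad()
    def update(self, model: nn.Module):
        for name, p in model.named_parameters():
            cpu_p = p.detach().to("cpu")  # one transfer per step
            self.shadow[name].mul_(self.decay).add_(cpu_p, alpha=1 - self.decay)

model = nn.Linear(8, 8)
ema = CpuEMA(model, decay=0.9)
with torch.no_grad():
    model.weight.add_(1.0)  # pretend an optimizer step moved the weights
ema.update(model)           # shadow drifts 10% of the way toward them
```

The tradeoff is the per-step device transfer; the win is that the second full copy of the weights never touches VRAM.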

I was training for 100 epochs, and having OneTrainer do an image sample every few epochs.
But, due to disk space, I was only doing saves every 20 or so.

When I looked back at the image samples, I noticed that the one I liked best was actually around epoch 72. I
didn't know what epochs 71, 72, or 73 looked like... and I didn't even have a save for 72 either!

What I did have was a save at epoch 70. So I configured OneTrainer to do a new run, starting from my saved model at
epoch 70. This time, however, I disabled warmup, and also EMA. I then set OneTrainer to make a save and a preview every epoch.

The previews looked kind of like my original series! Thus encouraged, I decided to try out all 10 of the models,
just in case. It turns out that I liked epoch 78/100 the best... so here we are :)