ppbrown committed
Commit
3a63f63
1 Parent(s): a8604b2

Update README.md

Files changed (1):
  1. README.md +4 -28
README.md CHANGED
```diff
@@ -16,33 +16,9 @@ I'm including EVERYTHING I used to create my onegirl200 model:
 I've been playing around with the thousands of images I've filtered so far from danbooru, at
 https://huggingface.co/datasets/ppbrown/danbooru-cleaned
 So, the images here are a strict subset of those images.
-I also used the tagging ALMOST as-is: I only added one tag: "anime"
+I also used their tagging ALMOST as-is. I only added one tag: "anime"
 
-I was initially playing around with LoRAs, to see which combination of images did best.
-Previously, I spent a LOT of time experimenting with training and getting nowhere, so I decided to stick with adaptive optimizers.
-Sadly, even a 4090 can't do a finetune of SDXL with an adaptive optimizer, which is why I was training LoRAs.
-
-My results were very mixed. Eventually, I figured out that Stable Diffusion, while seemingly magic,
-cannot TRULY figure out "good" anime style if you throw a whole bunch of MIXED styles at it. So I decided to
-drastically change my strategy, and throw out everything that was not strictly in one style.
-
-Once I got down to <200 images, and had a reasonable LoRA, I decided to give a full finetune a try, "the hard way"
-(i.e. no adaptive optimizer).
-
-So, I set EMA=CPU (because there wasn't enough VRAM to fit it on the GPU), played with the learning rate a little, and... that was it?
-Nope!
-
-I was training for 100 epochs, and having OneTrainer do an image sample every few epochs. But, due to disk space,
-I was only doing saves every 20 or so.
-
-When I looked back at the image samples, I noticed that the one I liked best was actually around epoch 72.
-I didn't know what epochs 71, 72, or 73 looked like... and I didn't even have a save at 72 either!
-
-What I did have was a save at epoch 70. So I configured OneTrainer to do a new run, starting with my saved model at 70.
-This time, however, I disabled warmup, and also EMA.
-I then set OneTrainer to make a save every epoch, and a preview every epoch.
-
-The previews looked kind of like my original series!
-Thus encouraged, I decided to try out all 10 of the models, just in case.
-It turns out that I liked epoch 78/100 the best... so here we are :)
+See [METHODOLOGY-adamw.md] for a detailed description of what I personally did to coax a model out
+of this dataset.
 
+I also plan to try other training methods.
```
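The workflow described in the removed README text — the best preview landed near epoch 72, but checkpoints were only saved "every 20 or so" epochs, so training was resumed from the nearest earlier save (epoch 70) with per-epoch saves enabled — can be sketched as below. This is only an illustration of the selection logic; the helper name is hypothetical and not part of OneTrainer, which is configured through its own UI.

```python
def resume_checkpoint(saved_epochs, target_epoch):
    """Return the latest saved epoch at or before the target epoch,
    or None if no usable checkpoint exists."""
    candidates = [e for e in saved_epochs if e <= target_epoch]
    return max(candidates) if candidates else None

# The situation from the README: irregular saves, best sample near epoch 72.
saves = [20, 40, 50, 70]
print(resume_checkpoint(saves, 72))  # 70 -> resume here, then save every epoch
```

From the chosen checkpoint, the remaining epochs (here, 70 through 80) are re-run with a save and preview each epoch, so every candidate around the sweet spot can be compared.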