---
license: openrail
tags:
- sdxl
---

# Current sample model
https://civitai.com/models/508420

# Overview
This is my attempt at creating a truly open source SDXL model that people might be interested in using...
and perhaps copying the spirit and creating other open source models.
I'm including EVERYTHING I used to create my onegirl200 model:
I also used their tagging ALMOST as-is. I only added one tag: "anime"

See [METHODOLOGY-adamw.md] for a detailed description of what I personally did to coax a model out
of this dataset.
I also plan to try other training methods.

# Memory usage tips
I am using an RTX 4090 card, which has 24 GB of VRAM, so I optimize for the best quality, and then the fastest speed, that I can fit on my card.
Currently, that means bf16 SDXL or Cascade model finetunes, using "Default" attention, and no gradient saves.

You can save memory, at the sacrifice of speed, by enabling gradient saving. You can save more memory,
at the sacrifice of a little quality, by switching to Xformers attention.
Using those adjustments, you can run adafactor/adafactor finetunes on a 16GB card.