---
license: openrail
tags:
- sdxl
---

# Current sample model

https://civitai.com/models/508420

The above is SDXL, and not very good. A better one is under way.
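If you just want to try that sample checkpoint, here is a minimal sketch using the `diffusers` library. It assumes you have already downloaded the `.safetensors` file from the civitai page above; the filename used here is only a placeholder, not the actual download name.

```python
# Minimal sketch (not part of the training recipe): load a single-file SDXL
# checkpoint with diffusers and generate one image.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_single_file(
    "onegirl200-sample.safetensors",  # placeholder: whatever file you downloaded from civitai
    torch_dtype=torch.bfloat16,
)
pipe.to("cuda")

# Danbooru-style tag prompt, plus the extra "anime" tag mentioned later in this card.
image = pipe("anime, 1girl, solo, looking at viewer", num_inference_steps=30).images[0]
image.save("sample.png")
```
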
# Overview

This is my attempt at creating a truly open source SDXL model that people might be interested in using... and perhaps copying the spirit of, to create other open source models of their own.

I'm including EVERYTHING I used to create my onegirl200 model:

* The images
* The captions
* The OneTrainer json preset file
* And the specific method I used to get here.

I've been playing around with the thousands of images I've filtered so far from Danbooru, at
https://huggingface.co/datasets/ppbrown/danbooru-cleaned

So, the images here are a strict subset of those images.
I also used their tagging ALMOST as-is; I only added one tag: "anime".
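If you want to grab the full cleaned dataset that these images were taken from, a minimal sketch using `huggingface_hub` is shown below; the `local_dir` value is just an example destination.

```python
# Minimal sketch: download the full ppbrown/danbooru-cleaned dataset repo.
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="ppbrown/danbooru-cleaned",
    repo_type="dataset",
    local_dir="danbooru-cleaned",  # example destination directory
)
```
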
See [METHODOLOGY-adamw.md](METHODOLOGY-adamw.md) for a detailed description of what I personally did to coax a model out of this dataset.

I also plan to try other training methods.

# Memory usage tips

I am using an RTX 4090 card, which has 24 GB of VRAM, so I optimize for the best quality, and then the fastest speed, that I can fit on my card.
Currently, that means bf16 SDXL or Cascade model finetunes, using "Default" attention, and no gradient saving.

You can save memory, at the sacrifice of speed, by enabling gradient saving (gradient checkpointing). You can save more memory, at the sacrifice of a little quality, by switching to xformers attention.
Using those adjustments, you can run adafactor/adafactor finetunes on a 16 GB card.
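For reference, those two trade-offs (gradient checkpointing for speed vs. memory, xformers attention for a little quality vs. memory) look roughly like this when expressed through the `diffusers` API. This is only an illustration of the knobs being described, not the OneTrainer preset itself; in OneTrainer the equivalent options live in the preset/UI settings.

```python
# Illustration only: the same memory-saving options described above, expressed
# with diffusers/PyTorch calls rather than OneTrainer's own settings.
import torch
from diffusers import UNet2DConditionModel

# Example base model; a finetune would start from whatever checkpoint you are training.
unet = UNet2DConditionModel.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    subfolder="unet",
    torch_dtype=torch.bfloat16,  # bf16, as used for these finetunes
)

# Trade speed for memory: recompute activations during the backward pass
# instead of keeping them all in VRAM ("gradient saving" above).
unet.enable_gradient_checkpointing()

# Trade a little quality for further memory savings: memory-efficient attention.
# Requires the xformers package to be installed.
unet.enable_xformers_memory_efficient_attention()
```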