This folder contains models trained for the following 4 couplings:

1. Suletta and Miorine from Suisei no Majo
2. Maple and Sally from Bofuri
3. Nozomi and Mizore from Sound! Euphonium
4. Eila and Sanya from Strike Witches

You can prompt them with the above names, except that for Maple and Sally you should use `BoMaple` and `BoSally`. You naturally get the trained couplings, but it is also possible to pair characters from different series with either the natively trained model (above) or the LoRA (below):

![native-00008-4163909665](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/native-00008-4163909665.png)

![lora-00017-691849602](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/lora-00017-691849602.png)

### Dataset

Total size: 1394

Suremio: 285
- Suletta: 81
- Miorine: 85
- Suremio: 119

MapleSally: 235 (augmented with face cropping)
- Maple: 67
- Sally: 22
- MapleSally: 33

Nozomizo: 272
- Mizore: 81
- Nozomi: 81
- Nozomizo: 110

Eilanya: 326
- Eila: 67
- Sanya: 78
- Eilanya: 181

Regularization: 276

### Base model

[NMFSAN](https://huggingface.co/Crosstyan/BPModel/blob/main/NMFSAN/README.md), so you can generate in different styles.

### Native training

Trained with the [Kohya trainer](https://github.com/Linaqruf/kohya-trainer):

- training of the text encoder turned on
- learning rate 1e-6
- batch size 1
- clip skip 2
- number of training steps 64240

*Examples*

![native-00001-2487967310](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/native-00001-2487967310.png)

![native-00010-2248582025](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/native-00010-2248582025.png)

![native-00014-3296158149](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/native-00014-3296158149.png)
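As a rough sketch, the native fine-tuning settings listed above map onto an invocation of kohya sd-scripts' `fine_tune.py` along these lines. This is an assumed reconstruction, not the exact command used: the model, metadata, and dataset paths are placeholders, and the caption/latent preparation steps are omitted.

```shell
# Hypothetical fine_tune.py call mirroring the listed settings;
# all paths below are placeholders.
accelerate launch fine_tune.py \
  --pretrained_model_name_or_path="NMFSAN.safetensors" \
  --in_json="meta_lat.json" \
  --train_data_dir="train_images" \
  --output_dir="output" \
  --train_text_encoder \
  --learning_rate=1e-6 \
  --train_batch_size=1 \
  --clip_skip=2 \
  --max_train_steps=64240
```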
![native-00048-3129463315](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/native-00048-3129463315.png)

### LoRA

Please refer to the [LoRA Training Guide](https://rentry.org/lora_train):

- training of the text encoder turned on
- network dimension 128
- learning rate 1e-4
- batch size 6
- clip skip 2
- number of training steps 69700 (50 epochs)

*Examples*

![lora-00012-1413123683](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/lora-00012-1413123683.png)

![lora-00014-1636023638](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/lora-00014-1636023638.png)

![lora-00015-969084934](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/lora-00015-969084934.png)

![lora-00071-2365359196](https://huggingface.co/alea31415/YuriDiffusion/resolve/main/suremio-nozomizo-eilanya-maplesally/samples/lora-00071-2365359196.png)
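For reference, the LoRA settings listed above correspond roughly to a kohya sd-scripts `train_network.py` invocation like the following. Again, this is an assumed sketch rather than the exact command used: paths are placeholders, and the text encoder is trained simply by not passing `--network_train_unet_only`.

```shell
# Hypothetical train_network.py call mirroring the listed LoRA settings;
# all paths below are placeholders. Omitting --network_train_unet_only
# keeps text encoder training enabled.
accelerate launch train_network.py \
  --pretrained_model_name_or_path="NMFSAN.safetensors" \
  --train_data_dir="train_images" \
  --output_dir="output" \
  --network_module=networks.lora \
  --network_dim=128 \
  --learning_rate=1e-4 \
  --train_batch_size=6 \
  --clip_skip=2 \
  --max_train_epochs=50
```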