adds missing dataset links for Fubuki and Momoka
- fubuki/README.md +1 -0
- momoka/README.md +1 -0
fubuki/README.md CHANGED

@@ -49,6 +49,7 @@ For her smug expression: `smug, jitome, smile` + `closed mouth` / `open mouth`
 - Training resolution 832x832.
 - For some reason, 832 performed better than 768 on Fubuki's LoRA, unlike the Junko's. The difference was not substantial, however.
 - Trained without VAE.
+- [Training dataset available here.](https://mega.nz/folder/b25mDTIA#lGdhQu6tBUGXco6-aPEAOg)
 
 ## Revisions
 - v1b (2023-02-18)
momoka/README.md CHANGED

@@ -48,6 +48,7 @@ For her smug expression: `smug, open mouth, sharp teeth, :3, :d`
 - This one also came out better at 832 vs 768.
 - It's not clear to me why some LoRAs perform substantially better at 768 and others at 832.
 - Trained without VAE.
+- [Training dataset available here.](https://mega.nz/folder/fi5zxDpb#J6ABI5i8ZFnTONVYiRlKHg)
 
 ## Revisions
 - v1c (2023-02-19)