hollowstrawberry committed 39a5556 (parent: 68b0e88): Update README.md

README.md CHANGED
@@ -35,7 +35,7 @@ language:
* [Prompt Matrix](#matrix)
* [Ultimate Upscaler](#ultimate)
* [ControlNet](#controlnet)
-* [Lora Training](#train)
* [Creating a dataset](#dataset)
* [Training Parameters](#trainparams)
* [Testing your results](#traintest)

@@ -376,19 +376,19 @@ There are also alternative **diff** versions of each ControlNet model, which pro

-# Lora Training <a name="train"></a>[▲](#index)

To train a [Lora ▲](#lora) yourself is an achievement. It's certainly doable, but there are many variables involved, and a lot of work depending on your workflow. It's somewhere between an art and a science.

-You can do it on your own computer if you have at least 8 GB of VRAM. However, I will

-Here are some classic resources if you want to read about the topic in depth.
* [Lora Training on Rentry](https://rentry.org/lora_train)
* [Training Science on Rentry](https://rentry.org/lora-training-science)
-* [Original Kohya Trainer (Dreambooth method)](https://colab.research.google.com/github/Linaqruf/kohya-trainer/blob/main/kohya-LoRA-dreambooth.ipynb
* [List of trainer parameters](https://github.com/derrian-distro/LoRA_Easy_Training_Scripts#list-of-arguments)

-With those way smarter resources out of the way, I'll try to produce a short and simple guide for you to

1. We will be using [this Colab document](https://github.com/derrian-distro/LoRA_Easy_Training_Scripts#list-of-arguments). You can copy it into your own Google Drive if you want.

@@ -433,13 +433,25 @@ With those way smarter resources out of the way, I'll try to produce a short and

![Comparison of Lora training results](images/loratrain.png)

-Look at that, it gets more detailed over time! This was a successful character Lora, at least at first glance. You would need to test different seeds, prompts and scenarios to be sure.

-It is common that your Lora "fries" or distorts your images when used at high weights such as 1, specially if it's overcooked. A weight of 0.5 to 0.8 is acceptable here, you may need to tweak the learning rate and network dim for this. If you're reading this and know the magic sauce, let us know.

-
-

@@ -447,7 +459,7 @@ With those way smarter resources out of the way, I'll try to produce a short and

That's it, that's the end of this guide for now. Thank you for reading. If you want to correct me or contribute to the guide you can open an issue or pull request and I'll take a look soon.

-I have [a separate repo that aggregates vtuber Loras

Cheers.

* [Prompt Matrix](#matrix)
* [Ultimate Upscaler](#ultimate)
* [ControlNet](#controlnet)
+* [Lora Training for beginners](#train)
* [Creating a dataset](#dataset)
* [Training Parameters](#trainparams)
* [Testing your results](#traintest)

+# Lora Training for beginners <a name="train"></a>[▲](#index)

To train a [Lora ▲](#lora) yourself is an achievement. It's certainly doable, but there are many variables involved, and a lot of work depending on your workflow. It's somewhere between an art and a science.

+You can do it on your own computer if you have at least 8 GB of VRAM. However, I will be using a Google Colab document for learning purposes.

+Here are some classic resources if you want to read about the topic in depth. Rentry may be blocked by your internet provider, in which case you can use a VPN or try putting it through [Google Translate](https://translate.google.cl/?op=websites).
* [Lora Training on Rentry](https://rentry.org/lora_train)
* [Training Science on Rentry](https://rentry.org/lora-training-science)
+* [Original Kohya Trainer (Dreambooth method)](https://colab.research.google.com/github/Linaqruf/kohya-trainer/blob/main/kohya-LoRA-dreambooth.ipynb)
* [List of trainer parameters](https://github.com/derrian-distro/LoRA_Easy_Training_Scripts#list-of-arguments)

+With those way smarter resources out of the way, I'll try to produce a short and simple guide for you to make your own character, artstyle, or concept Lora.

1. We will be using [this Colab document](https://github.com/derrian-distro/LoRA_Easy_Training_Scripts#list-of-arguments). You can copy it into your own Google Drive if you want.

![Comparison of Lora training results](images/loratrain.png)

+Look at that, it gets more detailed over time! The last image is without any Lora, for comparison. This was a successful character Lora, at least at first glance. You would need to test different seeds, prompts and scenarios to be sure.

+It is common for your Lora to "fry" or distort your images when used at high weights such as 1, especially if it's overcooked. A weight of 0.5 to 0.8 is acceptable here; you may need to tweak the learning rate and network dim for this, or other variables not found in this Colab. If you're reading this and know the magic sauce, let us know.
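If you want to compare that weight range systematically, a small script can build one prompt per weight using the webui's `<lora:name:weight>` syntax, e.g. to paste into an X/Y/Z plot. This is a minimal sketch; the Lora name `mychara` and the base prompt are hypothetical placeholders for your own.

```python
# Minimal sketch: build one prompt per Lora weight for a side-by-side test.
# "mychara" is a hypothetical Lora filename and activation tag.

def lora_weight_sweep(lora_name, base_prompt, weights):
    """Return one prompt per weight using <lora:name:weight> syntax."""
    return [f"{base_prompt}, <lora:{lora_name}:{w}>" for w in weights]

for prompt in lora_weight_sweep("mychara", "mychara, 1girl, smile", [0.5, 0.6, 0.7, 0.8]):
    print(prompt)
```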

+After getting used to making Loras, and hopefully interacting with various resources and the community, you will be ready to try a different method, such as the [advanced all-in-one Colab by kohya](https://colab.research.google.com/github/Linaqruf/kohya-trainer/blob/main/kohya-LoRA-dreambooth.ipynb). Good luck.

+* **Additional Lora tips** <a name="trainchars"></a>[▲](#index)

+The most important thing for characters and concepts is the tags. You want a varied dataset of images in different poses and such, sure, but if they're tagged incorrectly it's not gonna work.

+When training a character or concept Lora you should set `keep_tokens` to 1, and ensure that the first tag in your text files is always your **activation tag**. An activation tag is how we'll invoke your Lora to work.
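As a sketch of what that looks like in practice, the snippet below rewrites kohya-style caption `.txt` files so the activation tag always comes first. The folder path and the `mychara` tag are hypothetical examples, and you should back up your dataset before running anything like this.

```python
# Minimal sketch: force the activation tag to be the first tag in every
# caption .txt file, so keep_tokens=1 protects it from tag shuffling.
# "dataset" and "mychara" are hypothetical examples.
from pathlib import Path

def put_activation_tag_first(folder: str, activation_tag: str) -> None:
    for caption in Path(folder).glob("*.txt"):
        tags = [t.strip() for t in caption.read_text().split(",") if t.strip()]
        reordered = [activation_tag] + [t for t in tags if t != activation_tag]
        caption.write_text(", ".join(reordered))

put_activation_tag_first("dataset", "mychara")
```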

+Having done that, you want to remove or "prune" tags that are intrinsic to your character or concept. For example, if a character always has cat ears, you want to remove tags such as `animal ears, animal ear fluff, cat ears`, etc. This way they become "absorbed" by your activation tag.
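That pruning step can be sketched the same way, again over kohya-style caption files; the tag set below is just the cat-ears example from above, not a definitive list.

```python
# Minimal sketch: strip tags intrinsic to the character so the activation
# tag absorbs them. The tag set is only the example from the text.
from pathlib import Path

INTRINSIC_TAGS = {"animal ears", "animal ear fluff", "cat ears"}

def prune_intrinsic(folder: str, intrinsic=INTRINSIC_TAGS) -> None:
    for caption in Path(folder).glob("*.txt"):
        tags = [t.strip() for t in caption.read_text().split(",") if t.strip()]
        caption.write_text(", ".join(t for t in tags if t not in intrinsic))
```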

+You may also prune clothing tags by only listing the most relevant clothes and removing anything redundant, such as keeping "tie" but removing "red tie". This will make those clothes absorb the relevant details as well. You can even define an additional activation tag for each set of important clothes, e.g. character-default, character-bikini, etc. But there's more than one way to do it. In any case, with the correct usage of tags, your character should easily be able to change clothes.
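One way to spot redundant tags like that is a naive suffix check: a specific tag such as "red tie" is dropped when the more general "tie" is also present. This is only a heuristic I'm assuming here, not something the trainer does for you, so review the results by hand.

```python
# Naive heuristic sketch: drop a specific tag ("red tie") when a more
# general tag ("tie") is also present, so the general tag absorbs it.

def prune_redundant(tags):
    return [
        t for t in tags
        if not any(other != t and t.endswith(" " + other) for other in tags)
    ]

print(prune_redundant(["mychara", "tie", "red tie", "shirt", "collared shirt"]))
# -> ['mychara', 'tie', 'shirt']
```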

+Style Loras, meanwhile, don't really need an activation tag, as they should always be active. They will absorb the artstyle naturally, and will work at varying weights.

+This "absorption" of details not provided by tags is also how Loras work at all, by representing things normally imperceptible or hard to describe, like faces, accessories, brushstrokes, etc.

That's it, that's the end of this guide for now. Thank you for reading. If you want to correct me or contribute to the guide you can open an issue or pull request and I'll take a look soon.

+I have [a separate repo that aggregates vtuber Loras, especially Hololive](https://huggingface.co/hollowstrawberry/holotard), if you're interested in that.

Cheers.