hollowstrawberry committed · Commit 6e8be22 · 1 parent: 4aa33dc
Update README.md

README.md CHANGED
@@ -43,7 +43,7 @@ The images you create may be used for any purpose, depending on the used model's

 The easiest way to use Stable Diffusion is through Google Collab. It borrows Google's computers to use AI, with variable time limitations, usually a few hours every day. You will need a Google account (or several, wink wink) and we will be using Google Drive for ease of access.

-If you instead want to run it on your own computer, [scroll down](#install).
+If you instead want to run it on your own computer, [scroll down ▼](#install).

 1. Enter [this page](https://colab.research.google.com/drive/1wEa-tS10h4LlDykd87TF5zzpXIIQoCmq).

@@ -88,7 +88,7 @@ To run Stable Diffusion on your own computer you'll need at least 16 GB of RAM a

 Before or after generating your first few images, you will want to take a look at the information below to improve your experience and results.
 The top of your page should look something like this:
-<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/top.png"/>
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/top.png"/>
 Here you can select your model and VAE. We will go over what these are and how you can get more of them.

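
If you prefer scripting over clicking, the model and VAE selection described in this hunk can usually also be done through the WebUI's optional local API. The sketch below is only an illustration under assumptions: it assumes the WebUI was launched with the `--api` flag on the default local port, the option names match your build, and the filenames are hypothetical placeholders.

```python
import requests

WEBUI = "http://127.0.0.1:7860"  # assumes a default local launch with --api

# Option names are an assumption based on common WebUI builds; check
# GET /sdapi/v1/options on your own instance to confirm them.
requests.post(f"{WEBUI}/sdapi/v1/options", json={
    "sd_model_checkpoint": "myModel.safetensors",  # hypothetical filename, as shown in the model dropdown
    "sd_vae": "myVAE.vae.pt",                      # hypothetical filename, as shown in the VAE dropdown
}).raise_for_status()
```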
@@ -137,7 +137,7 @@ Here you can select your model and VAE. We will go over what these are and how y
 * **EasyNegative:** The negative prompts above use EasyNegative, which is a *textual inversion embedding* or "magic word" that codifies many bad things to make your images better. Typically one would write a very long, very specific, very redundant, and sometimes silly negative prompt. EasyNegative is as of March 2023 the best choice if you want to avoid that.
 * [Get EasyNegative here](https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.pt). For collab, paste the link into the `custom_urls` text box. For Windows, put it in your `stable-diffusion-webui/embeddings` folder. Then, go to the bottom of your WebUI page and click *Reload UI*. It will now work when you type the word.

-<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/prompt.png"/>
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/prompt.png"/>

 After a "base prompt" like the above, you may then start typing what you want. For example `young woman in a bikini in the beach, full body shot`. Feel free to add other terms you don't like to your negatives such as `old, ugly, futanari, furry`, etc.
 You can also save your prompts to reuse later with the buttons below Generate. Click the small 💾 *Save style* button and give it a name. Later, you can open your *Styles* dropdown to choose, then click 📋 *Apply selected styles to the current prompt*.
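
For a local install, the EasyNegative download in the hunk above can also be scripted rather than saved by hand. This is just a convenience sketch using the URL from the guide; the destination path assumes a default `stable-diffusion-webui` folder, so adjust it to your own install.

```python
from pathlib import Path
import urllib.request

# URL taken from the guide; destination assumes a default local install.
url = "https://huggingface.co/datasets/gsdf/EasyNegative/resolve/main/EasyNegative.pt"
dest = Path("stable-diffusion-webui/embeddings/EasyNegative.pt")
dest.parent.mkdir(parents=True, exist_ok=True)
urllib.request.urlretrieve(url, str(dest))
print(f"Saved {dest} - click 'Reload UI', then type EasyNegative in your negative prompt.")
```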
@@ -147,7 +147,7 @@ Here you can select your model and VAE. We will go over what these are and how y
 1. **Generation parameters** <a name="gen"></a>[▲](#index)

 The rest of the parameters in the starting page will look something like this:
-<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/parameters.png"/>
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/parameters.png"/>

 * **Sampling method:** These dictate how your image is formulated, and each produce different results. The default of `Euler a` is almost always the best. There are also very good results for `DPM++ 2M Karras` and `DPM++ SDE Karras`.
 * **Sampling steps:** These are "calculated" beforehand, and so more steps doesn't always mean more detail. I always go with 30, you may go from 20-50 and find good results.
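
The same generation parameters (sampling method, steps, plus a prompt from the earlier hunk) can be expressed as a txt2img API request if you want to generate outside the browser. This is a hedged sketch only: it assumes the WebUI was started with `--api`, and field names can differ between versions, so compare against the API docs on your own instance.

```python
import requests

payload = {
    "prompt": "young woman in a bikini in the beach, full body shot",
    "negative_prompt": "EasyNegative",
    "sampler_name": "DPM++ 2M Karras",  # "Sampling method" in the UI
    "steps": 30,                        # "Sampling steps" in the UI
}
r = requests.post("http://127.0.0.1:7860/sdapi/v1/txt2img", json=payload)
r.raise_for_status()
images = r.json()["images"]  # list of base64-encoded PNGs
```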
@@ -171,7 +171,7 @@ Here you can select your model and VAE. We will go over what these are and how y
 # Extensions <a name="extensions"></a>[▲](#index)

 *Stable Diffusion WebUI* supports extensions to add additional functionality and quality of life. These can be added by going into the **Extensions** tab, then **Install from URL**, and pasting the links found here or elsewhere. Then, click *Install* and wait for it to finish. Then, go to **Installed** and click *Apply and restart UI*.
-<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/extensions.png"/>
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/extensions.png"/>

 Here are some useful extensions. Most of these come installed in the collab, and I hugely recommend you manually add the first 2 if you're running locally:
 * [Image Browser (fixed fork)](https://github.com/aka7774/sd_images_browser) - This will let you browse your past generated images very efficiently, as well as directly sending their prompts and parameters back to txt2img, img2img, etc.
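
On a local install, extensions can generally also be dropped in from the command line instead of the **Install from URL** box. The sketch below is only an illustration of that idea, cloning the Image Browser fork linked in the hunk above into the standard `extensions` folder; it assumes `git` is available and your install sits at `stable-diffusion-webui/`.

```python
import subprocess

# URL comes from the extension list above; destination assumes a default local install.
repo = "https://github.com/aka7774/sd_images_browser"
subprocess.run(
    ["git", "clone", repo, "stable-diffusion-webui/extensions/sd_images_browser"],
    check=True,
)
# Restart the WebUI afterwards so the new extension is picked up.
```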
@@ -192,7 +192,7 @@ Loras can represent a character, an artstyle, poses, clothes, or even a human fa

 Place your lora files in the `stable-diffusion-webui/models/Lora` folder, or paste the direct download link into the `custom_urls` text box in collab. Then, look for the 🎴 *Show extra networks* button below the big orange Generate button. It will open a new section. Click on the Lora tab and press the **Refresh** button, and your loras should appear. When you click a Lora in that menu it will get added to your prompt, looking like this: `<lora:filename:1>`. The start is always the same. The filename will be the exact filename in your system without the `.safetensors` extension. Finally, the number is the weight, like we saw in [Prompts ▲](#prompt). Most Loras work between 0.5 and 1 weight, and too high values might "fry" your image, specially if using multiple Loras at the same time.

-<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/extranetworks.png"/>
+<img src="https://huggingface.co/hollowstrawberry/stable-diffusion-guide/resolve/main/images/extranetworks.png"/>

 An example of a Lora is [Thicker Lines Anime Style](https://civitai.com/models/13910/thicker-lines-anime-style-lora-mix), which is perfect if you want your images to look more like traditional anime.

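
To make the `<lora:filename:1>` syntax from the hunk above concrete, here is a tiny sketch (local install and default folder assumed) that prints a ready-to-paste tag for each Lora file you have placed in `models/Lora`.

```python
from pathlib import Path

# Folder path assumes a default local install.
lora_dir = Path("stable-diffusion-webui/models/Lora")
for f in sorted(lora_dir.glob("*.safetensors")):
    # The tag uses the filename without its extension; weights between 0.5 and 1 usually work well.
    print(f"<lora:{f.stem}:0.8>")
```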