Commit 4793ae4
Parent(s): a39e446

[Diffusers docs] Correct some diffusers code snippet (#16)

- Correct some diffusers docs (c468d0c3e50feb99266c88ff216754031bb80e4c)

Co-authored-by: Patrick von Platen <patrickvonplaten@users.noreply.huggingface.co>
README.md CHANGED
````diff
@@ -48,7 +48,7 @@ The SDXL base model performs significantly better than the previous variants, an
 
 ### 🧨 Diffusers
 
-Make sure to upgrade diffusers to >= 0.
+Make sure to upgrade diffusers to >= 0.19.0:
 ```
 pip install diffusers --upgrade
 ```
````
````diff
@@ -58,7 +58,8 @@ In addition make sure to install `transformers`, `safetensors`, `accelerate` as
 pip install invisible_watermark transformers accelerate safetensors
 ```
 
-
+To just use the base model, you can run:
+
 ```py
 from diffusers import DiffusionPipeline
 import torch
````
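The diff skips over the body of the base-only snippet (the pipeline construction between the `import torch` and `images = pipe(prompt=prompt).images[0]` context lines). A minimal sketch of that elided middle, inferred from the surrounding context lines and the ensemble example in the next hunk rather than taken from the commit itself:

```py
# Sketch of the elided base-only snippet; inferred, not part of this diff.
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.to("cuda")

prompt = "An astronaut riding a green horse"

# The trailing context line of the next hunk: generate and take the first image.
images = pipe(prompt=prompt).images[0]
```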
````diff
@@ -74,6 +75,48 @@ prompt = "An astronaut riding a green horse"
 images = pipe(prompt=prompt).images[0]
 ```
 
+To use the whole base + refiner pipeline as an ensemble of experts you can run:
+
+```py
+from diffusers import DiffusionPipeline
+import torch
+
+# load both base & refiner
+base = DiffusionPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16, variant="fp16", use_safetensors=True
+)
+base.to("cuda")
+refiner = DiffusionPipeline.from_pretrained(
+    "stabilityai/stable-diffusion-xl-refiner-1.0",
+    text_encoder_2=base.text_encoder_2,
+    vae=base.vae,
+    torch_dtype=torch.float16,
+    use_safetensors=True,
+    variant="fp16",
+)
+refiner.to("cuda")
+
+# Define how many steps and what % of steps to be run on each expert (80/20) here
+n_steps = 40
+high_noise_frac = 0.8
+
+prompt = "A majestic lion jumping from a big stone at night"
+
+# run both experts
+image = base(
+    prompt=prompt,
+    num_inference_steps=n_steps,
+    denoising_end=high_noise_frac,
+    output_type="latent",
+).images
+image = refiner(
+    prompt=prompt,
+    num_inference_steps=n_steps,
+    denoising_start=high_noise_frac,
+    image=image,
+).images[0]
+```
+
 When using `torch >= 2.0`, you can improve the inference speed by 20-30% with torch.compile. Simply wrap the UNet with torch.compile before running the pipeline:
 ```py
 pipe.unet = torch.compile(pipe.unet, mode="reduce-overhead", fullgraph=True)
````
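Note that the `pipe.unet = torch.compile(...)` context line at the end of this hunk assumes the single-pipeline setup, where the loaded pipeline is named `pipe`. With the ensemble example added above there are two UNets; a plausible way to apply the same tip to both, continuing from that snippet (a sketch, not part of the commit):

```py
# Continues the ensemble snippet above; assumes torch >= 2.0.
# Compile each expert's UNet for a ~20-30% inference speedup.
base.unet = torch.compile(base.unet, mode="reduce-overhead", fullgraph=True)
refiner.unet = torch.compile(refiner.unet, mode="reduce-overhead", fullgraph=True)
```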
````diff
@@ -87,6 +130,7 @@ instead of `.to("cuda")`:
 + pipe.enable_model_cpu_offload()
 ```
 
+For more information on how to use Stable Diffusion XL, please have a look at [the Stable Diffusion XL Docs](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl).
 
 ### Optimum
 [Optimum](https://github.com/huggingface/optimum) provides a Stable Diffusion pipeline compatible with both [OpenVINO](https://docs.openvino.ai/latest/index.html) and [ONNX Runtime](https://onnxruntime.ai/).
````
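The `+ pipe.enable_model_cpu_offload()` context line above is the tail of the README's own diff-style snippet for limited-VRAM setups. Spelled out as a self-contained sketch (model CPU offload needs `accelerate` installed):

```py
# Sketch of the README's low-VRAM variant: instead of moving the whole
# pipeline to the GPU with pipe.to("cuda"), let accelerate shuttle
# submodules to the GPU only while they are needed.
from diffusers import DiffusionPipeline
import torch

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    torch_dtype=torch.float16,
    variant="fp16",
    use_safetensors=True,
)
pipe.enable_model_cpu_offload()
```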