Update README.md
README.md
CHANGED
@@ -1,3 +1,53 @@

## Introduction

The Stable Diffusion XL model is finetuned on contemporary Chinese ink paintings.

## Usage

Our inference is accelerated with [**LCM-LoRA**](https://huggingface.co/latent-consistency/lcm-lora-sdxl); please make sure all the necessary libraries are up to date.

```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```
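
If you want to confirm the upgrade took effect, you can print the installed versions; this is just an optional sanity check, not part of the original instructions.

```Python
# optional sanity check: print the versions of the upgraded libraries
import diffusers, transformers, accelerate, peft

print("diffusers:", diffusers.__version__)
print("transformers:", transformers.__version__)
print("accelerate:", accelerate.__version__)
print("peft:", peft.__version__)
```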

### Text to Image

Here, we load two adapters on top of the base model stabilityai/stable-diffusion-xl-base-1.0: **LCM-LoRA** for sampling acceleration and **Chinese_Ink_LORA** for styled rendering.
Next, the scheduler is switched to LCMScheduler, which lets us reduce the number of inference steps to just 2 to 8 (8 were used in our experiments).

```Python
import torch
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline, LCMScheduler

# load the SDXL base model in fp16
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load LoRAs
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm")
pipe.load_lora_weights("ming-yang/sdxl_chinese_ink_lora", adapter_name="Chinese Ink")

# combine LoRAs
pipe.set_adapters(["lcm", "Chinese Ink"], adapter_weights=[1.0, 0.8])

# generate a styled and an unstyled image for comparison
prompts = ["Chinese Ink, mona lisa picture, 8k", "mona lisa, 8k"]
generator = torch.manual_seed(1)
images = [pipe(prompt, num_inference_steps=8, guidance_scale=1, generator=generator).images[0] for prompt in prompts]

# display the two results side by side
fig, axs = plt.subplots(1, 2, figsize=(40, 20))

axs[0].imshow(images[0])
axs[0].axis('off')  # hide the axes

axs[1].imshow(images[1])
axs[1].axis('off')
plt.show()
```

![Comparison of the two generated images](images/comparison.png)
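
If matplotlib is not available, the outputs can also be written straight to disk; `pipe(...).images` returns `PIL.Image` objects, so `save()` works directly (the filenames below are just placeholders).

```Python
# minimal alternative to the matplotlib preview: save each PIL image to disk
for i, image in enumerate(images):
    image.save(f"chinese_ink_comparison_{i}.png")  # hypothetical output filenames
```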

---
tags:
- text-to-image