## Introduction

The Stable Diffusion XL model is fine-tuned on contemporary Chinese ink paintings.

## Usage
Our inference process is sped up with [**LCM-LoRA**](https://huggingface.co/latent-consistency/lcm-lora-sdxl), so please make sure all the necessary libraries are up to date:
```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```
### Text-to-Image

Here we load two adapters on top of the base model `stabilityai/stable-diffusion-xl-base-1.0`: **LCM-LoRA** for sampling acceleration and **Chinese_Ink_LoRA** for styled rendering.
Next, the scheduler is switched to `LCMScheduler`, which lets us reduce the number of inference steps to just 2-8 (8 are used in this example).

```Python
import torch
import matplotlib.pyplot as plt
from diffusers import DiffusionPipeline, LCMScheduler

pipe = DiffusionPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0",
                                         variant="fp16",
                                         torch_dtype=torch.float16
                                         ).to("cuda")
# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load LoRAs
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm")
pipe.load_lora_weights("ming-yang/sdxl_chinese_ink_lora", adapter_name="Chinese Ink")

# Combine LoRAs
pipe.set_adapters(["lcm", "Chinese Ink"], adapter_weights=[1.0, 0.8])

prompts = ["Chinese Ink, mona lisa picture, 8k", "mona lisa, 8k"]
generator = torch.manual_seed(1)
images = [pipe(prompt, num_inference_steps=8, guidance_scale=1, generator=generator).images[0] for prompt in prompts]

# Plot the styled and unstyled results side by side
fig, axs = plt.subplots(1, 2, figsize=(40, 20))

axs[0].imshow(images[0])
axs[0].axis('off')  # hide the axes

axs[1].imshow(images[1])
axs[1].axis('off')
plt.show()
```
![comparison](images/comparison.png)

---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
widget:
- text: Chinese Ink, The girl with a pearl earring, 8k
  output:
    url: images/Chinese Ink, The girl with a pearl earring, 8k.png
- text: Chinese Ink,a cute fox
  output:
    url: images/Chinese Ink,a cute fox.png
- text: Chinese Ink, Mona Lisa, 8k
  output:
    url: images/Chinese Ink, Mona Lisa, 8k.png
- text: Chinese Ink,lotus pond in summer rain
  output:
    url: images/Chinese Ink,lotus pond in summer rain.png
- text: Chinese Ink, Wild Geese Descending on a Sandbank, 8k
  output:
    url: images/Chinese Ink, Wild Geese Descending on a Sandbank, 8k.png
- text: Chinese Ink, the Paris skyline and the Eiffel Tower
  output:
    url: images/Chinese Ink, the Paris skyline and the Eiffel Tower.png
- text: Chinese Ink, a lovely rabbit
  parameters:
    negative_prompt: blurry, extra limb, bad anatomy
  output:
    url: images/Chinese Ink, a lovely rabbit.png
- text: Chinese Ink, a tree with colorful leaves in autumn, 8k
  parameters:
    negative_prompt: blurry, extra limb, bad anatomy
  output:
    url: images/a tree with colorful leaves in autumn.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Chinese Ink
license: creativeml-openrail-m
pipeline_tag: text-to-image
---
# Chinese_Ink_Painting

<Gallery />


## Trigger words

You should use `Chinese Ink` to trigger the image generation.
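For reference, here is a minimal sketch of plain SDXL inference with this LoRA and the trigger word, without the LCM-LoRA speedup shown above (the prompt, step count, and output filename are only illustrative):

```Python
import torch
from diffusers import DiffusionPipeline

# Load the SDXL base model and attach only this LoRA (no LCM acceleration).
pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")
pipe.load_lora_weights("ming-yang/sdxl_chinese_ink_lora")

# Prepend the trigger word `Chinese Ink` to the prompt.
image = pipe("Chinese Ink, lotus pond in summer rain, 8k",
             num_inference_steps=30).images[0]
image.save("chinese_ink_lotus.png")
```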


## Download model

Weights for this model are available in Safetensors format.

[Download](/ming-yang/sdxl_chinese_ink_lora/tree/main) them in the Files & versions tab.
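If you'd rather fetch the weights programmatically, the sketch below uses `huggingface_hub`; the safetensors filename is an assumption (the diffusers default name), so check the Files & versions tab for the actual one:

```Python
from huggingface_hub import hf_hub_download

# Download the LoRA weights from the Hub.
# NOTE: the filename below is assumed (diffusers' default LoRA name); verify it in the repo.
lora_path = hf_hub_download(
    repo_id="ming-yang/sdxl_chinese_ink_lora",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)  # local cache path to the downloaded .safetensors file
```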