---
tags:
- text-to-image
- stable-diffusion
- lora
- diffusers
widget:
- text: Chinese Ink, The girl with a pearl earring, 8k
output:
url: images/Chinese Ink, The girl with a pearl earring, 8k.png
- text: Chinese Ink,a cute fox
output:
url: images/Chinese Ink,a cute fox.png
- text: Chinese Ink, Mona Lisa, 8k
output:
url: images/Chinese Ink, Mona Lisa, 8k.png
- text: Chinese Ink,lotus pond in summer rain
output:
url: images/Chinese Ink,lotus pond in summer rain.png
- text: Chinese Ink, Wild Geese Descending on a Sandbank, 8k
output:
url: images/Chinese Ink, Wild Geese Descending on a Sandbank, 8k.png
- text: Chinese Ink, the Paris skyline and the Eiffel Tower
output:
url: images/Chinese Ink, the Paris skyline and the Eiffel Tower.png
- text: Chinese Ink, a lovely rabbit
parameters:
negative_prompt: blurry, extra limb, bad anatomy
output:
url: images/Chinese Ink, a lovely rabbit.png
- text: Chinese Ink, a tree with colorful leaves in autumn, 8k
parameters:
negative_prompt: blurry, extra limb, bad anatomy
output:
url: images/a tree with colorful leaves in autumn.png
base_model: stabilityai/stable-diffusion-xl-base-1.0
instance_prompt: Chinese Ink
license: creativeml-openrail-m
pipeline_tag: text-to-image
library_name: diffusers
---
# Chinese Ink Painting
## Examples
<Gallery />
## Introduction
The [**Stable Diffusion XL**](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0) model is fine-tuned on contemporary Chinese ink paintings.
## Usage
Our inference process is sped up with [**LCM-LoRA**](https://huggingface.co/latent-consistency/lcm-lora-sdxl), so please make sure all the necessary libraries are up to date.
```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
pip install matplotlib
```
## Text to Image
Here, we load two adapters on top of the base model stabilityai/stable-diffusion-xl-base-1.0: **LCM-LoRA** for sampling acceleration and **Chinese_Ink_LoRA** for styled rendering.
Next, the scheduler needs to be changed to LCMScheduler, which lets us reduce the number of inference steps to just 2 to 8 (8 is used in my experiments).
```python
import torch
from diffusers import DiffusionPipeline, LCMScheduler
import matplotlib.pyplot as plt

pipe = DiffusionPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0",
    variant="fp16",
    torch_dtype=torch.float16,
).to("cuda")

# set scheduler
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)

# load LoRAs
pipe.load_lora_weights("latent-consistency/lcm-lora-sdxl", adapter_name="lcm")
pipe.load_lora_weights("ming-yang/sdxl_chinese_ink_lora", adapter_name="Chinese Ink")

# combine LoRAs
pipe.set_adapters(["lcm", "Chinese Ink"], adapter_weights=[1.0, 0.8])

# compare the same subject with and without the trigger word
prompts = ["Chinese Ink, mona lisa picture, 8k", "mona lisa, 8k"]
generator = torch.manual_seed(1)
images = [pipe(prompt, num_inference_steps=8, guidance_scale=1, generator=generator).images[0] for prompt in prompts]

fig, axs = plt.subplots(1, 2, figsize=(40, 20))
axs[0].imshow(images[0])
axs[0].axis('off')  # hide the axes
axs[1].imshow(images[1])
axs[1].axis('off')
plt.show()
```
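If the ink style comes out too strong or too weak, the style adapter weight can be re-tuned after loading. A minimal sketch, continuing from the pipeline above; the lower weight here is only illustrative, not a tuned value from this card:
```python
# Re-weight the adapters: 0.8 is the value used above; a lower value
# (e.g. 0.5, illustrative only) gives a subtler ink effect.
pipe.set_adapters(["lcm", "Chinese Ink"], adapter_weights=[1.0, 0.5])
image = pipe(
    "Chinese Ink, lotus pond in summer rain",
    num_inference_steps=8,
    guidance_scale=1,
).images[0]
image.save("lotus_pond.png")
```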
## Trigger words
You should use **`Chinese Ink`** in your prompt to trigger image generation in this style.
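As a minimal sketch (reusing `pipe` from the snippet above), the trigger word goes at the start of the prompt, and a negative prompt like the ones in the widget examples can be passed as well. Note that the negative prompt only takes effect when `guidance_scale` is above 1, which is typically kept in the 1–2 range with LCM-LoRA:
```python
# Trigger word "Chinese Ink" at the start of the prompt; the negative prompt
# text mirrors the widget examples in this card. guidance_scale > 1 is needed
# for the negative prompt to have any effect.
image = pipe(
    "Chinese Ink, a lovely rabbit",
    negative_prompt="blurry, extra limb, bad anatomy",
    num_inference_steps=8,
    guidance_scale=1.5,
).images[0]
image.save("rabbit_ink.png")
```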
## Download model
Weights for this model are available in Safetensors format.
[Download](/ming-yang/sdxl_chinese_ink_lora/tree/main) them in the Files & versions tab.
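To fetch the LoRA weights programmatically instead, `huggingface_hub` can be used. A minimal sketch; the filename below is an assumption, so check the Files & versions tab for the exact name:
```python
from huggingface_hub import hf_hub_download

# Filename is an assumption; verify it against the Files & versions tab.
lora_path = hf_hub_download(
    repo_id="ming-yang/sdxl_chinese_ink_lora",
    filename="pytorch_lora_weights.safetensors",
)
print(lora_path)  # local path to the downloaded .safetensors file
```
The downloaded file can then be passed to `pipe.load_lora_weights(lora_path)` in place of the repo id used in the example above.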