
AnimagineXL-v3-openvino

This is an unofficial OpenVINO variant of cagliostrolab/animagine-xl-3.0.

This repo is provided for convenience when running the Animagine XL v3 model on Intel CPU/GPU, since loading and converting an SDXL model to OpenVINO can be quite slow (dozens of minutes).

Table of contents:

  • Usage
  • How the conversion was done
  • Appendix

Usage

Take CPU as an example:

from optimum.intel.openvino import OVStableDiffusionXLPipeline
from diffusers import (
    EulerAncestralDiscreteScheduler,
    DPMSolverMultistepScheduler
)

model_id = "CodeChris/AnimagineXL-v3-openvino"
pipe = OVStableDiffusionXLPipeline.from_pretrained(model_id)
# Fix output image size & batch_size for faster speed
img_w, img_h = 832, 1216  # Example
pipe.reshape(width=img_w, height=img_h,
             batch_size=1, num_images_per_prompt=1)

## Change the scheduler
# Animagine XL recommends Euler A:
# pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.scheduler = DPMSolverMultistepScheduler.from_config(
    pipe.scheduler.config,
    use_karras_sigmas=True,
    algorithm_type="dpmsolver++"
)  # I prefer DPM++ 2M Karras
# Disable the safety checker (SDXL pipelines ship without one by default)
pipe.safety_checker = None

# To run on an Intel GPU with OpenVINO, select the GPU device:
# pipe.to('gpu')
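
If you load and reshape in one go, you can also avoid compiling the model twice (once with dynamic shapes, once again after reshape). A minimal sketch, assuming your optimum-intel version supports the compile=False keyword:

from optimum.intel.openvino import OVStableDiffusionXLPipeline

# Load without compiling, fix the shapes, then compile exactly once
pipe = OVStableDiffusionXLPipeline.from_pretrained(model_id, compile=False)
pipe.reshape(width=832, height=1216,
             batch_size=1, num_images_per_prompt=1)
pipe.compile()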

After the pipeline is prepared, a txt2img task can be executed as follows:

prompt = "1girl, dress, day, masterpiece, best quality"
negative_prompt = "lowres, bad anatomy, bad hands, text, error, missing fingers, extra digit, fewer digits, cropped, worst quality, low quality, normal quality, jpeg artifacts, signature, watermark, username, blurry, artist name"

result = pipe(
    prompt,
    negative_prompt=negative_prompt,
    # If reshaped, the image size must equal the reshaped size
    width=img_w, height=img_h,
    guidance_scale=7,
    num_inference_steps=20
)
img = result.images[0]
img.save('sample.png')
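
Since the whole point of this variant is speed, it is worth timing a run. A small sketch using plain Python timing (sample_timed.png is just an illustrative filename):

import time

t0 = time.perf_counter()
result = pipe(
    prompt,
    negative_prompt=negative_prompt,
    width=img_w, height=img_h,
    guidance_scale=7,
    num_inference_steps=20
)
result.images[0].save('sample_timed.png')
print(f"Generation took {time.perf_counter() - t0:.1f} s")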

For convenience, here are the recommended image sizes from the official Animagine XL v3 docs:

896 x 1152
832 x 1216
768 x 1344
640 x 1536
1024 x 1024

(Each size may also be used transposed, e.g. 1152 x 896.)
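
If you want these sizes at hand in code, they can be kept as (width, height) tuples; the names SIZES and LANDSCAPE below are just illustrative:

# Recommended (width, height) pairs from the Animagine XL v3 docs
SIZES = [(896, 1152), (832, 1216), (768, 1344), (640, 1536), (1024, 1024)]
# Their transposed (landscape) variants
LANDSCAPE = [(h, w) for (w, h) in SIZES]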

How the conversion was done

First, install optimum with OpenVINO support:

pip install --upgrade-strategy eager optimum[openvino,nncf]

Then the model was converted with the following command:

optimum-cli export openvino --model 'cagliostrolab/animagine-xl-3.0' 'models/openvino/AnimagineXL-v3' --task 'stable-diffusion-xl'
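
The same conversion can also be done from Python via optimum-intel's export support; a sketch (the output path is arbitrary):

from optimum.intel.openvino import OVStableDiffusionXLPipeline

# export=True converts the original PyTorch weights to OpenVINO IR on load
pipe = OVStableDiffusionXLPipeline.from_pretrained(
    "cagliostrolab/animagine-xl-3.0", export=True
)
pipe.save_pretrained("models/openvino/AnimagineXL-v3")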

Appendix

To push large files to the Hub without committing the latest changes via git:

git lfs install
huggingface-cli lfs-enable-largefiles .
huggingface-cli upload --commit-message 'Upload model files' 'CodeChris/AnimagineXL-v3-openvino' .
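
The upload step can also be scripted with the huggingface_hub Python API; a sketch, assuming you are already logged in:

from huggingface_hub import HfApi

api = HfApi()
api.upload_folder(
    folder_path=".",
    repo_id="CodeChris/AnimagineXL-v3-openvino",
    commit_message="Upload model files",
)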

Other notes:

  • The conversion was done using optimum==1.16.1 and openvino==2023.2.0.
  • You can run optimum-cli export openvino --help for more usage details.
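
To reproduce that exact environment, pinning those versions should work:

pip install optimum[openvino,nncf]==1.16.1 openvino==2023.2.0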