
# Segmind-VegaRT - Latent Consistency Model (LCM) LoRA of Segmind-Vega

Try real-time inference in the VegaRT demo ⚡

API for Segmind-VegaRT

Segmind-VegaRT is a distilled consistency adapter for Segmind-Vega that reduces the number of inference steps to just 2-8.

Latent Consistency Model (LCM) LoRA was proposed in LCM-LoRA: A Universal Stable-Diffusion Acceleration Module by Simian Luo, Yiqin Tan, Suraj Patil, Daniel Gu et al.

## Image comparison (Segmind-VegaRT vs SDXL-Turbo)

## Speed comparison (Segmind-VegaRT vs SDXL-Turbo) on A100 80GB

| Model | Params (M) |
| --- | --- |
| lcm-lora-sdv1-5 | 67.5 |
| Segmind-VegaRT | 119 |
| lcm-lora-sdxl | 197 |
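
The measurement setup behind the speed chart isn't spelled out in this card. As a rough way to reproduce a latency number on your own GPU, the sketch below times a few-step generation with the fused VegaRT pipeline; the prompt, warm-up count, and 4-step setting are illustrative assumptions, not the configuration used for the published A100 figures.

```python
import time

import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

# Build the fused VegaRT pipeline (same setup as in the Usage section below)
pipe = AutoPipelineForText2Image.from_pretrained(
    "segmind/Segmind-Vega", torch_dtype=torch.float16, variant="fp16"
)
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")
pipe.load_lora_weights("segmind/Segmind-VegaRT")
pipe.fuse_lora()

prompt = "a photo of an astronaut riding a horse"  # illustrative prompt

# Warm up so one-time CUDA allocations and kernel selection don't skew the timing
for _ in range(3):
    pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0)

torch.cuda.synchronize()
start = time.perf_counter()
pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0)
torch.cuda.synchronize()
print(f"4-step generation: {time.perf_counter() - start:.3f} s")
```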

## Usage

LCM-LoRA is supported in the 🤗 Hugging Face Diffusers library from version v0.23.0 onwards. To run the model, first install the latest version of Diffusers as well as peft, accelerate, and transformers:

```bash
pip install --upgrade pip
pip install --upgrade diffusers transformers accelerate peft
```

## Text-to-Image

Let's load the base model segmind/Segmind-Vega first. Next, the scheduler needs to be changed to LCMScheduler, and the number of inference steps can be reduced to just 2-8. Make sure to either disable guidance_scale or use values between 1.0 and 2.0.

```python
import torch
from diffusers import LCMScheduler, AutoPipelineForText2Image

model_id = "segmind/Segmind-Vega"
adapter_id = "segmind/Segmind-VegaRT"

# Load the base model in half precision
pipe = AutoPipelineForText2Image.from_pretrained(model_id, torch_dtype=torch.float16, variant="fp16")
# LCMScheduler is required for few-step inference with the LCM LoRA
pipe.scheduler = LCMScheduler.from_config(pipe.scheduler.config)
pipe.to("cuda")

# Load and fuse the LCM LoRA
pipe.load_lora_weights(adapter_id)
pipe.fuse_lora()

prompt = "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"

# Disable guidance_scale by passing 0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=0).images[0]
```
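
If you would rather keep a small amount of guidance than disable it entirely, the 1.0-2.0 range mentioned above works too. The snippet below is a minimal variant of the call above; the output filename is an illustrative choice.

```python
# Alternative: keep light guidance (1.0-2.0) instead of passing 0
image = pipe(prompt=prompt, num_inference_steps=4, guidance_scale=1.5).images[0]
image.save("vegart_sample.png")  # illustrative filename

# The adapter can be detached again if you want the plain base model back
pipe.unfuse_lora()
```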