
Overview📃✏️

This is a Diffusers-compatible version of Yiffymix v51 by chilon249. See the original page for more information.

Keep in mind that this is an SDXL-Lightning checkpoint, so using fewer steps (around 12 to 25) and a low guidance scale (around 4 to 6) is recommended for the best results. Using a clip skip of 2 is also recommended.

This repository uses DPM++ 2M Karras as its default sampling method (Diffusers only).
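
If you want to verify this after loading the pipeline (see the installation section below), you can inspect the live scheduler object. A minimal sketch:

# Assumes 'pipeline' has already been created as shown
# in the Model Installation section below.
print(pipeline.scheduler)
# For DPM++ 2M Karras, this should print a DPMSolverMultistepScheduler
# whose config has 'use_karras_sigmas' enabled.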

Check out the v52 here.

Diffusers Installation🧨

Dependencies Installation📁

First, you'll need to install a few dependencies. This is a one-time operation; you only need to run the code once.

!pip install -q diffusers transformers accelerate
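
This assumes PyTorch is already available, which is the case on Google Colab; if it isn't in your environment, install it as well:

!pip install -q torch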

Model Installation💿

After the installation, you can run SDXL with this repository using the code below:

from diffusers import StableDiffusionXLPipeline
import torch

model = "IDK-ab0ut/Yiffymix_v51-XL"
# Load the SDXL pipeline in half precision and move it to the GPU.
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, torch_dtype=torch.float16).to("cuda")

prompt = "a cat, detailed background, dynamic lighting"
negative_prompt = "low resolution, bad quality, deformed"
steps = 25
guidance_scale = 4
image = pipeline(prompt=prompt, negative_prompt=negative_prompt,
        num_inference_steps=steps, guidance_scale=guidance_scale,
        clip_skip=2).images[0]
image
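
In a notebook, the final image line displays the result inline. To save the result to disk instead, use the save() method of the returned PIL image:

image.save("cat.png")  # writes the generated image to a PNG file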

Feel free to adjust the generation settings to your liking.

Scheduler's Customization⚙️

🧨For Diffusers🧨

You can see all available schedulers here.

To use a scheduler other than DPM++ 2M Karras with this repository, make sure to import the corresponding scheduler class from Diffusers. For example, suppose we want to use Euler. First, import EulerDiscreteScheduler by adding this line of code.

from diffusers import StableDiffusionXLPipeline, EulerDiscreteScheduler

The next step is to load the scheduler.

model = "IDK-ab0ut/Yiffymix_v51"
euler = EulerDiscreteScheduler.from_pretrained(
        model, subfolder="scheduler")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=euler, torch_dtype=torch.float16
           ).to("cuda")

Now you can generate any images using the scheduler you want.
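
Alternatively, if the pipeline is already loaded, you can swap schedulers in place with from_config(), which reuses the current scheduler's configuration instead of downloading it again. A minimal sketch:

from diffusers import EulerDiscreteScheduler

# Build the new scheduler from the existing scheduler's config.
pipeline.scheduler = EulerDiscreteScheduler.from_config(
    pipeline.scheduler.config)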

Another example uses DPM++ 2M SDE Karras. First, we want to import DPMSolverMultistepScheduler from Diffusers.

from diffusers import StableDiffusionXLPipeline, DPMSolverMultistepScheduler

Next, load the scheduler and pass it to the pipeline.

model = "IDK-ab0ut/Yiffymix_v51"
dpmsolver = DPMSolverMultistepScheduler.from_pretrained(
            model, subfolder="scheduler", use_karras_sigmas=True,
            algorithm_type="sde-dpmsolver++").to("cuda")
# 'use_karras_sigmas' is called to make the scheduler
# use Karras sigmas during sampling.
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, scheduler=dpmsolver, torch.dtype=torch.float16,
           ).to("cuda")
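
The same in-place swap works here as well, since from_config() also accepts keyword overrides for individual settings. A minimal sketch:

from diffusers import DPMSolverMultistepScheduler

# Override individual settings on top of the existing config.
pipeline.scheduler = DPMSolverMultistepScheduler.from_config(
    pipeline.scheduler.config,
    use_karras_sigmas=True,
    algorithm_type="sde-dpmsolver++")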

Variational Autoencoder (VAE) Installation🖼

There are two ways to get a Variational Autoencoder (VAE) file into the model: downloading the file manually, or fetching it remotely with code. This section covers the code-based method, as it's the more efficient one. VAE files are usually in .safetensors format, and the two main places to download them are HuggingFace and CivitAI.

From HuggingFace😊

This method is pretty straightforward. Pick any VAE repository you like, then navigate to the "Files" tab and locate the VAE file. Make sure to click the file to open it.

Click the "Copy Download Link" for the file, you'll need this.

The next step is to import the AutoencoderKL class into the code.

from diffusers import StableDiffusionXLPipeline, AutoencoderKL

Finally, load the VAE file into AutoencoderKL.

link = "your vae's link"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(link).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")

If you're using FP16 for the model, it's essential to also use FP16 for the VAE.

link = "your vae's link"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(
      link, torch_dtype=torch.float16).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, torch_dtype=torch.float16,
           vae=vae).to("cuda")
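
As a concrete example (assuming a general-purpose SDXL VAE suits your needs), the widely used madebyollin/sdxl-vae-fp16-fix repository can be loaded directly with from_pretrained():

import torch
from diffusers import AutoencoderKL

# Loads an fp16-safe SDXL VAE from its HuggingFace repository.
vae = AutoencoderKL.from_pretrained(
      "madebyollin/sdxl-vae-fp16-fix",
      torch_dtype=torch.float16).to("cuda")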

For manual download, simply set the link variable (or whichever string variable holds the link) to the local path of the downloaded .safetensors file.

Troubleshooting🔧

If you're experiencing an HTTP 404 error because the program can't resolve your link, here's a simple fix.

First, install (or upgrade) huggingface_hub using pip.

!pip install --upgrade huggingface_hub

Import hf_hub_download() from huggingface_hub.

from huggingface_hub import hf_hub_download

Next, instead of a direct link to the file, use the repository ID and the filename.

repo = "username/model"
file = "the vae's file.safetensors"
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(
      hf_hub_download(repo_id=repo,
      filename=file)).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")
# use 'torch_dtype=torch.float16' for FP16.
# add 'subfolder="folder_name"' argument if the VAE is in specific folder.

You can also use hf_hub_download() from huggingface_hub from the start, without waiting to see whether the direct-link method returns an HTTP 404 error.

From CivitAI🇨

It's trickier if the VAE is on CivitAI, because the from_single_file() method doesn't work there; it only supports files hosted on HuggingFace and local files. You could upload the VAE to HuggingFace yourself, but you must comply with the model's license before doing so. To work around this, use the wget or curl command to fetch the file from outside HuggingFace.

Before downloading, change to the directory where you want to save the file with cd, to keep your VAE files organized. With wget, use the -O option followed by the output filename and then the link; with curl, the equivalent option is lowercase -o (adding -L to follow redirects).

# For 'wget'
!cd <path>; wget -O [filename.safetensors] <link>

# For 'curl'
!cd <path>; curl -L -o [filename.safetensors] <link>

# Use only one of them. Replace "filename" with any
# name you want. If you run the code in Command Prompt or
# Windows Shell, you don't need the exclamation mark (!).

Since the file is now in your local directory, you can finally use the from_single_file() method normally. Make sure to pass the correct path to your VAE file when loading it into AutoencoderKL.

path = "path to VAE" # Ends with .safetensors file format.
model = "IDK-ab0ut/Yiffymix_v51"
vae = AutoencoderKL.from_single_file(path).to("cuda")
pipeline = StableDiffusionXLPipeline.from_pretrained(
           model, vae=vae).to("cuda")

# Use 'torch_dtype=torch.float16' for both
# AutoencoderKL and SDXL pipeline for FP16.

Note: you can also use the wget or curl method to download files from HuggingFace.
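
HuggingFace serves raw files through its /resolve/ endpoint, so the same commands work there too. The repository and filename below are placeholders; substitute your own:

!wget -O vae.safetensors https://huggingface.co/username/model/resolve/main/vae.safetensors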

And that's it: a VAE loaded from CivitAI.

Usage Restrictions📝

By using this repository, you agree not to use the model:

1. In any way that violates any applicable national, federal, state, local or international law or regulation.
2. For the purpose of exploiting, harming or attempting to exploit or harm minors in any way.
3. To generate or disseminate verifiably false information and/or content with the purpose of harming others.
4. To generate or disseminate personal identifiable information that can be used to harm an individual.
5. To defame, disparage or otherwise harass others.
6. For fully automated decision making that adversely impacts an individual’s legal rights or otherwise creates or modifies a binding, enforceable obligation.
7. For any use intended to or which has the effect of discriminating against or harming individuals or groups based on online or offline social behavior or known or predicted personal or personality characteristics.
8. To exploit any of the vulnerabilities of a specific group of persons based on their age, social, physical or mental characteristics, in order to materially distort the behavior of a person pertaining to that group in a manner that causes or is likely to cause that person or another person physical or psychological harm.
9. For any use intended to or which has the effect of discriminating against individuals or groups based on legally protected characteristics or categories.
10. To provide medical advice and medical results interpretation.
11. To generate or disseminate information for the purpose to be used for administration of justice, law enforcement, immigration or asylum processes, such as predicting that an individual will commit fraud or crime (e.g. by text profiling, drawing causal relationships between assertions made in documents, indiscriminate and arbitrarily-targeted use).

You shall use this model only for creative and artistic purposes, without any intention to cause harm to others.

That's all for this repository. Thank you for reading my silly note. Have a nice day!

Any help or suggestions would be appreciated. Thank you!
