On the Diffusers code example:
AutoPipelineForText2Image.from_pretrained("dataautogpt3/FluxteusV1", torch_dtype=torch.bfloat16).to('cuda')
This will not work, since the repository doesn't have the Diffusers file structure.
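For reference, a repo that from_pretrained can load is a folder of component subdirectories plus a model_index.json; the layout of black-forest-labs/FLUX.1-dev is roughly:

model_index.json
scheduler/
text_encoder/
text_encoder_2/
tokenizer/
tokenizer_2/
transformer/
vae/

A repo that only ships a single .safetensors file has none of that, so from_pretrained has nothing to read.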
AutoPipelineForText2Image.from_single_file(""https://huggingface.co/dataautogpt3/FluxteusV1/blob/main/Fluxteus.safetensors"", torch_dtype=torch.bfloat16).to('cuda')
from_single_file is unsupported for AutoPipelineForText2Image, and FluxPipeline.from_single_file is unsupported as well.
transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/dataautogpt3/FluxteusV1/blob/main/Fluxteus.safetensors", torch_dtype=dtype)
That one works.
So trying a Diffusers example using that:
import torch
from diffusers import FluxTransformer2DModel, FluxPipeline
from transformers import T5EncoderModel

bfl_repo = "black-forest-labs/FLUX.1-dev"
dtype = torch.bfloat16

# Load the merged checkpoint as the transformer only
transformer = FluxTransformer2DModel.from_single_file("https://huggingface.co/dataautogpt3/FluxteusV1/blob/main/Fluxteus.safetensors", torch_dtype=dtype)
text_encoder_2 = T5EncoderModel.from_pretrained(bfl_repo, subfolder="text_encoder_2", torch_dtype=dtype)

# Build the pipeline without those two components, then plug them in
pipe = FluxPipeline.from_pretrained(bfl_repo, transformer=None, text_encoder_2=None, torch_dtype=dtype)
pipe.transformer = transformer
pipe.text_encoder_2 = text_encoder_2
pipe.enable_model_cpu_offload()

prompt = "A cat holding a sign that says hello world"
image = pipe(
    prompt,
    guidance_scale=3.5,
    output_type="pil",
    num_inference_steps=20,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("flux.png")
That doesn't work either. And judging by the size of your weights, I'd guess everything (transformer, VAE, text encoders) is bundled into the single file.
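For what it's worth, you can confirm what a single-file checkpoint bundles by grouping its key names by prefix; a minimal sketch with safetensors (the local filename is just a placeholder):

from safetensors import safe_open

# List the top-level key prefixes to see which modules
# (transformer, VAE, text encoders) the file contains.
with safe_open("Fluxteus.safetensors", framework="pt", device="cpu") as f:
    print(sorted({k.split(".")[0] for k in f.keys()}))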
So the question still stands: how do we use your weights with Diffusers?
cc @sayakpaul @OzzyGT, somehow these LoRA-merged weights don't work with Diffusers or the converter.
Excuse me for the sidebar.
The reason your safetensors cannot be converted by Diffusers is probably that you merged them with ComfyUI.
The keys have been renamed.
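For example (illustrative key names, assuming a ComfyUI-style merge; the exact names may vary):

# ComfyUI-merged key                                        -> what from_single_file expects
# model.diffusion_model.double_blocks.0.img_attn.qkv.weight -> double_blocks.0.img_attn.qkv.weight
# vae.decoder.conv_in.weight                                -> decoder.conv_in.weight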
I think a simple script like the following will fix it, though it would be better, going forward, to have this handled inside Diffusers' from_single_file.
from safetensors.torch import load_file, save_file

state_dict = load_file("your.safetensors", device="cpu")
# Build a new dict: adding keys to state_dict while iterating over it
# would raise a RuntimeError, and the old keys should be dropped anyway.
fixed_dict = {}
for k, v in state_dict.items():
    fixed_dict[k.replace("vae.", "").replace("model.diffusion_model.", "")
                .replace("text_encoders.clip_l.transformer.text_model.", "")
                .replace("text_encoders.t5xxl.transformer.", "")] = v
save_file(fixed_dict, "your_fixed.safetensors")
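Once fixed, the file should (in principle) load like any other single-file FLUX transformer checkpoint, e.g. in the snippet above:

transformer = FluxTransformer2DModel.from_single_file("your_fixed.safetensors", torch_dtype=dtype)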
See also:
My converter
https://huggingface.co/spaces/John6666/flux-to-diffusers-test
A converted example
https://huggingface.co/John6666/blue-pencil-flux1-v001-fp8-flux
I would be honored, but I'm not a programmer; I'm an amateur who has been away from this for a long time, so I'm not suited to fiddling with critical code.
I don't even have a GitHub account.
I originally noticed this when I saw the leftover "vae.~" keys in your (sayakpaul) quantization-related post, and I'll be happy as long as Diffusers can handle it.🤗
So I'll leave the implementation to you.
There were some keys with alarming names like "shared", but as far as FLUX.1 is concerned there seems to be no duplication of key names between modules, so pure replacement is fine.
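If anyone wants to verify that for another checkpoint, here is a minimal sanity check along the same lines as the script above (the filename is a placeholder):

from safetensors.torch import load_file

prefixes = ("vae.", "model.diffusion_model.",
            "text_encoders.clip_l.transformer.text_model.",
            "text_encoders.t5xxl.transformer.")
new_keys = list(load_file("your.safetensors", device="cpu"))
for p in prefixes:
    new_keys = [k.replace(p, "") for k in new_keys]
# If stripping prefixes made two keys identical, pure replacement would clobber a tensor.
assert len(set(new_keys)) == len(new_keys), "key collision after renaming"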
Alright. We will make sure to credit you then :)
Thanks. Good luck.😀
I'll post the key analysis results (just the printed key names) for your reference.
https://huggingface.co/spaces/John6666/flux-to-diffusers-test/blob/main/fluxunchainedArtfulNSFW_fuT516xfp8E4m3fnV11_fixed.safetensors.old.txt.txt
https://huggingface.co/spaces/John6666/flux-to-diffusers-test/blob/main/fluxunchainedArtfulNSFW_fuT516xfp8E4m3fnV11_fixed.safetensors.new.txt.txt