Maybe I'm gonna look like an under-educated peon by pointing this out, but...

#259
by Xephier102 - opened

Why is this thing so Fkin big..?
Not angry, just added the F for emphasis.

Seriously though. I don't even think it's done downloading, and the folder is already at 88 gigs. I just wanted to use a 6 gig flux model I got from Civitai without having it throw errors about weights. My card is only 16 gigs. Is it even possible to run this?

NGL, I'll prob figure it out (if I can use it) before anyone gets back to this, but at least then it'll be here for the next person wondering. The file size just seems extreme for using a single model. SDXL only requires like 5 gigs, and that's including the GUI program to run it.

Also, the complete lack of transparency in the console as to wtf it's installing and where is more than a bit unnerving given its size.. I wanted to generate a few images, not have my whole PC hijacked by BFL.. Maybe a bit paranoid, but paranoia is logical when transparency is absent.

@Xephier102 Well, it's big because if it were small, it would perform much worse. This model is basically one of the best, if not the best, general model. You don't need to download everything, btw; the folders are for diffusers, and the ae.sft is for their own repository.

If you are using diffusers, the diffusers code can automatically download the files you require.
Also, it won't fit in 16 GB without quantization.
Quantization basically reduces the precision of the model weights, resulting in much lower VRAM usage at the cost of some accuracy.
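
To illustrate the diffusers route, here's a minimal sketch (assuming a recent diffusers release; the prompt and output filename are just placeholders). from_pretrained only downloads the diffusers-format files, not the standalone checkpoint:

```python
import torch
from diffusers import FluxPipeline

# The first call downloads (and caches) only the files the pipeline needs.
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    torch_dtype=torch.bfloat16,
)
# Streams weights to the GPU piece by piece: slow, but it lets the
# unquantized model run at all on a small card.
pipe.enable_sequential_cpu_offload()

image = pipe(
    "a cat holding a sign that says hello world",
    num_inference_steps=28,
    guidance_scale=3.5,
).images[0]
image.save("flux-dev.png")
```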

You can use torchao int8, which is almost lossless but fits in 16 GB of VRAM. There's also 4-bit, which will fit in 8 GB of VRAM but lose a bit of detail. This is the int8 one.
https://huggingface.co/sayakpaul/flux.1-dev-int8-aot-

Use only the inference .py in the gist.
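
If that link stays dead, a rough equivalent using diffusers' built-in TorchAO integration would look something like this (a sketch, assuming diffusers >= 0.32 with torchao installed; this is not the AOT-compiled checkpoint from the gist):

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel, TorchAoConfig

# Quantize the big transformer to int8 weight-only as it loads;
# TorchAoConfig("int4wo") would be the ~8 GB option mentioned above.
transformer = FluxTransformer2DModel.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    subfolder="transformer",
    quantization_config=TorchAoConfig("int8wo"),
    torch_dtype=torch.bfloat16,
)
pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
# Keep only the active component on the GPU so the whole run
# stays under 16 GB of VRAM.
pipe.enable_model_cpu_offload()

image = pipe("a forest at dawn", num_inference_steps=28, guidance_scale=3.5).images[0]
image.save("flux-int8.png")
```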

If you are using ComfyUI, you can use GGUF (via the ComfyUI-GGUF custom node); same as above, 8-bit is almost lossless and 4-bit will lose a bit of detail.
https://huggingface.co/city96/FLUX.1-dev-gguf

Well, I did manage the 17 gig schnell model thanks to the half-VRAM feature, but there is a lot more to all this than the model: the text encoders, the tokenizers, the transformer, etc. The end result came to 107 gigs downloaded.
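
Side note for whoever hits this next: it looks like you can skip the duplicated root files with huggingface_hub, something like this (untested sketch; the patterns assume the current layout of the FLUX.1-dev repo):

```python
from huggingface_hub import snapshot_download

# Grab only the diffusers-format folders and skip the standalone
# flux1-dev.safetensors / ae.safetensors copies at the repo root.
snapshot_download(
    "black-forest-labs/FLUX.1-dev",
    allow_patterns=[
        "model_index.json",
        "scheduler/*",
        "transformer/*",
        "text_encoder/*",
        "text_encoder_2/*",
        "tokenizer/*",
        "tokenizer_2/*",
        "vae/*",
    ],
)
```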

Can you please explain exactly what I need in order to use a 6 gig flux model that I downloaded from Civitai? I'm also a bit confused about the model use: is it supposed to be a UNet model or a checkpoint? I've noticed the UNet loader node in Comfy only has a MODEL output, so what do I use for CLIP? I mean, it's gotta be a checkpoint, otherwise what's the point of the text encoders?

I'd usually smash ChatGPT with all these questions, but this thing is so new that even ChatGPT seems a bit clueless.

One more, though: where is this stuff supposed to be installed? There doesn't really seem to be any instruction on the model card; for all I know, I'd install it in the root folder and leave it there. I mean, I'm not that stupid. I've made it as far as installing it into the ComfyUI folder (it's all in the Flux dev folder in there), and I installed it using my venv (as I typically do for any webui-related stuff). Since the model isn't showing up under the UNet or checkpoint loader, I'm guessing I'd need to move the .sft to one of those folders. Though it prob wouldn't work if I did that without putting the other stuff in the right folders too (I'd usually test that before making this statement, but I've gotta run out of the house for a bit).

I could download the GGUF you linked and attempt to use that, but I'm guessing I would still get the weight errors, like when I try to run the one from Civitai. I'm sure this is simpler than I'm making it out to be; I just need someone to point me along the right path. Also, the int8 one you linked: the URL is giving a 404. Dead link, perhaps.

PS: thank you so much for any help you can offer.

Edit: Had a bit of time while waiting on a friend to message me. Yeah, I tried the Flux dev model I had from Civitai in the checkpoint loader and got a long string of errors like:

size mismatch for img_in.weight: copying a param with shape torch.Size([98304, 1]) from checkpoint, the shape in current model is torch.Size([3072, 64]).
size mismatch for time_in.in_layer.weight: copying a param with shape torch.Size([393216, 1]) from checkpoint, the shape in current model is torch.Size([3072, 256]).
size mismatch for time_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).
size mismatch for vector_in.in_layer.weight: copying a param with shape torch.Size([1179648, 1]) from checkpoint, the shape in current model is torch.Size([3072, 768]).
size mismatch for vector_in.out_layer.weight: copying a param with shape torch.Size([4718592, 1]) from checkpoint, the shape in current model is torch.Size([3072, 3072]).

Had a little more time, and copied over the 23 gig flux model to checkpoints; upon trying to load it, I got a different error:

ERROR: Could not detect model type of: /home/xephier/Desktop/ComfyUI/models/checkpoints/flux1-dev.safetensors

I'm not sure why the finetuned (I'm guessing) models from Civitai throw a different error than the base model, but it suggests that a different fix may be required for each.

Edit: I managed to get the base model working. It turned out I just had to load the workflow from an image (here: https://comfyanonymous.github.io/ComfyUI_examples/flux/), and the rest fell into place after that. I'll update this if I continue to have issues with the downloaded models from Civitai, but for now, this works. The workflow is quite complex compared to the typical 5-node workflow of SDXL. I'm not sure how so many people use this model with so little actual help in the readme file.
