
Which model is better for low vram/ram? (#32, opened about 2 months ago by younyokel)

fp8 inference (1 reply; #26, opened 3 months ago by Melody32768)

wrong model (#25, opened 3 months ago by sunhaha123)

Update README.md (#24, opened 3 months ago by WBD8)

Unet? (#22, opened 4 months ago by aiRabbit0)

quite slow to load the fp8 model (11 replies; #21, opened 4 months ago by gpt3eth)

How to load into VRAM? (2 replies; #19, opened 4 months ago by MicahV)

'float8_e4m3fn' attribute error (6 replies; #17, opened 4 months ago by Magenta6)

Loading flux-fp8 with diffusers (1 reply; #16, opened 4 months ago by 8au)

Quantization Method? (9 replies; #7, opened 4 months ago by vyralsurfer)

ComfyUi Workflow (1 reply; #6, opened 4 months ago by Jebari)

Diffusers? (19 replies; #4, opened 4 months ago by tintwotin)

FP16 (1 reply; #2, opened 4 months ago by bsbsbsbs112321)

Metadata lost from model (4 replies; #1, opened 4 months ago by mcmonkey)