- #26: fp8 inference (opened 20 days ago by Melody32768)
- #25: wrong model (opened 23 days ago by sunhaha123)
- #24: Update README.md (opened 25 days ago by WBD8)
- #22: Unet? (opened about 1 month ago by aiRabbit0)
- #21: quite slow to load the fp8 model (11 replies, opened about 1 month ago by gpt3eth)
- #19: How to load into VRAM? (2 replies, opened about 1 month ago by MicahV)
- #17: 'float8_e4m3fn' attribute error (3 replies, opened about 2 months ago by Magenta6)
- #16: Loading flux-fp8 with diffusers (1 reply, opened about 2 months ago by 8au)
- #15: FP8 Checkpoint version size mismatch? (2 replies, opened about 2 months ago by Thireus)
- #14: Can this model be used on Apple Silicon? (21 replies, opened about 2 months ago by jsmidt)
- #13: How to use fp8 models + original flux repo? (opened about 2 months ago by rolux)
- #7: Quantization Method? (7 replies, opened about 2 months ago by vyralsurfer)
- #6: ComfyUI Workflow (1 reply, opened about 2 months ago by Jebari)
- #4: Diffusers? (19 replies, opened about 2 months ago by tintwotin)
- #3: Minimum VRAM requirements? (3 replies, opened about 2 months ago by joachimsallstrom)
- #2: FP16 (1 reply, opened about 2 months ago by bsbsbsbs112321)
- #1: Metadata lost from model (4 replies, opened about 2 months ago by mcmonkey)