ljp committed a25439d (1 parent: 98a22c5)

Update README.md

Files changed (1): README.md (+68, -1)

README.md CHANGED

@@ -11,4 +11,71 @@ tags:
  - ComfyUI
  - Inpainting
library_name: diffusers
---

<div style="display: flex; justify-content: center; align-items: center;">
<img src="images/alibaba.png" alt="alibaba" style="width: 20%; height: auto; margin-right: 5%;">
<img src="images/alimama.png" alt="alimama" style="width: 20%; height: auto;">
</div>

# FLUX.1-dev ControlNet Inpainting - Beta

This repository hosts an improved Inpainting ControlNet checkpoint for the [FLUX.1-dev](https://huggingface.co/black-forest-labs/FLUX.1-dev) model, developed by the AlimamaCreative Team.

## Key Enhancements

Our latest inpainting model brings significant improvements over the previous version:

1. **1024 Resolution Support**: Directly processes and generates 1024x1024 images without additional upscaling steps, yielding higher-quality, more detailed results.
2. **Enhanced Detail Generation**: Fine-tuned to capture and reproduce finer details in inpainted areas.
3. **Improved Prompt Control**: Offers more precise control over generated content through enhanced prompt interpretation.

## Showcase

The following images were generated using a ComfyUI workflow with these settings (the workflow file is linked in the usage guidelines below):
`control-strength` = 1.0, `control-end-percent` = 1.0, `true_cfg` = 1.0

| Image & Prompt Input | Alpha Version | Beta Version |
|:---:|:---:|:---:|
| ![Input Image](path/to/original_image.jpg) A > B | ![Alpha](path/to/old_model_result.jpg) | ![Beta](path/to/new_model_result.jpg) |

### ComfyUI Usage Guidelines

Download the example ComfyUI workflow [here](https://huggingface.co/alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha/resolve/main/images/alimama-flux-controlnet-inpaint.json).

- Using `t5xxl-FP16` and `flux1-dev-fp8` models for 28-step inference:
  - GPU memory usage: 27GB
  - Inference time: 27 seconds (cfg=3.5), 15 seconds (cfg=1)
- For optimal results, experiment with lower values for `control-strength`, `control-end-percent`, and `cfg`, as outlined in the table below.

| Parameter | Recommended Range | Effect |
|-----------|------------------|--------|
| `control-strength` | 0.0 - 1.0 | Controls how much influence the ControlNet has on the generation. Higher values result in stronger adherence to the control image. |
| `control-end-percent` | 0.0 - 1.0 | Determines at which point in the denoising process the ControlNet influence ends. Lower values allow for more creative freedom in later steps. |
| `cfg` (classifier-free guidance scale) | 1.0 - 30.0 | Influences how closely the generation follows the prompt. Higher values increase prompt adherence but may reduce image quality. |
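
When driving the model from a script rather than ComfyUI, these controls correspond roughly to diffusers-style pipeline arguments. The mapping below is a sketch: the keyword names follow common diffusers ControlNet conventions and are assumptions to verify against the pipeline you actually run.

```python
# Rough, assumed mapping from the ComfyUI controls above to
# diffusers-style pipeline keyword arguments. Verify the names
# against the pipeline you actually run.
pipeline_kwargs = {
    "controlnet_conditioning_scale": 0.9,  # ComfyUI: control-strength
    "control_guidance_end": 1.0,           # ComfyUI: control-end-percent
    "guidance_scale": 3.5,                 # ComfyUI: cfg
}
```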

## Model Specifications

- Training dataset: 15M images from LAION2B and proprietary sources
- Optimal inference resolution: 1024x1024

## Diffusers Integration

1. Install the required diffusers version:
```shell
pip install diffusers==0.30.2
```

2. Clone this repository:
```shell
git clone https://github.com/alimama-creative/FLUX-Controlnet-Inpainting.git
```

3. Configure `image_path`, `mask_path`, and `prompt` in `main.py`, then execute (a sketch of the call the script wraps follows below):
```shell
python main.py
```
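
For orientation, here is a minimal sketch of the kind of inpainting call `main.py` wraps. The repo-local module and class names (`controlnet_flux`, `pipeline_flux_controlnet_inpaint`, `FluxControlNetInpaintingPipeline`) and the file paths are assumptions, not a verbatim copy of the script; consult the cloned repository for the actual entry points.

```python
# Minimal sketch of the inpainting call main.py is described as wrapping.
# Module/class names and file paths are assumptions -- check the cloned
# repository for the actual entry points.
import torch
from diffusers.utils import load_image

# Assumed repo-local modules shipped alongside main.py:
from controlnet_flux import FluxControlNetModel
from pipeline_flux_controlnet_inpaint import FluxControlNetInpaintingPipeline

controlnet = FluxControlNetModel.from_pretrained(
    "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Beta",
    torch_dtype=torch.bfloat16,
)
pipe = FluxControlNetInpaintingPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    controlnet=controlnet,
    torch_dtype=torch.bfloat16,
).to("cuda")

# The three values the README says to configure (placeholders):
image = load_image("image.jpg").resize((1024, 1024))   # image_path
mask = load_image("mask.jpg").resize((1024, 1024))     # mask_path
prompt = "a glass vase of flowers on a wooden table"   # prompt

result = pipe(
    prompt=prompt,
    image=image,
    mask_image=mask,
    height=1024,
    width=1024,
    num_inference_steps=28,             # matches the 28-step setting above
    guidance_scale=3.5,                 # cfg
    controlnet_conditioning_scale=0.9,  # control-strength
).images[0]
result.save("result.png")
```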

## License

Our model weights are released under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License.