---
license: other
license_name: flux-1-dev-non-commercial-license
license_link: https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md
language:
- en
pipeline_tag: text-to-image
tags:
- Stable Diffusion
- image-generation
- Flux
- diffusers
---
![Controlnet collections for Flux](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/flux-controlnet-collections.png?raw=true)
[<img src="https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/light/join-our-discord-rev1.png?raw=true">](https://discord.gg/FHY2guThfy)
This repository provides a collection of ControlNet checkpoints for the
[FLUX.1-dev model](https://huggingface.co/black-forest-labs/FLUX.1-dev) by Black Forest Labs.
![Example Picture 1](./assets/depth_v2_res1.png?raw=true)
[See our GitHub](https://github.com/XLabs-AI/x-flux-comfyui) for ComfyUI workflows.
![Example Picture 1](https://github.com/XLabs-AI/x-flux-comfyui/blob/main/assets/image1.png?raw=true)
[See our GitHub](https://github.com/XLabs-AI/x-flux) for the training script, training configs, and a demo inference script.
# Models
Our collection includes three models:
- Canny
- HED
- Depth (Midas)
Each ControlNet is trained at 1024x1024 resolution and is intended to be used at 1024x1024 resolution.
We also release **v2 versions**, which produce better, more realistic results and can be used directly in ComfyUI!
Please see our [ComfyUI custom nodes installation guide](https://github.com/XLabs-AI/x-flux-comfyui).
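For reference, a minimal installation sketch is shown below. It assumes a standard ComfyUI checkout; the `setup.py` step is taken from the linked guide, so defer to that guide if anything differs.
```bash
# Minimal sketch: add the XLabs custom nodes to an existing ComfyUI install.
# Paths assume the standard ComfyUI directory layout.
cd ComfyUI/custom_nodes
git clone https://github.com/XLabs-AI/x-flux-comfyui.git
cd x-flux-comfyui
python setup.py   # setup step described in the x-flux-comfyui guide
```
After installation, restart ComfyUI so the new nodes are loaded.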
# Examples
See example results from our models below.
Some generation results, together with their input images, are also provided under "Files and versions".
# Inference
To try our models, you have two options:
1. Use main.py from our [official repo](https://github.com/XLabs-AI/x-flux) (see the setup sketch after this list)
2. Use our custom nodes for ComfyUI and test them with the provided workflows (check out the /workflows folder)
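For option 1, a minimal setup sketch is shown below. The virtual environment is optional, and the `requirements.txt` file name is assumed from the x-flux repository.
```bash
# Minimal sketch: clone the x-flux repo and install its Python dependencies
# so that main.py can be launched as in the examples below.
git clone https://github.com/XLabs-AI/x-flux.git
cd x-flux
python3 -m venv venv && source venv/bin/activate   # optional: isolate dependencies
pip install -r requirements.txt                     # dependency file assumed from the repo
```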
See the examples below for how to launch our models:
## Canny ControlNet (version 2)
1. Clone our [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui) custom nodes
2. Launch ComfyUI
3. Try our canny_workflow.json (see the checkpoint download sketch below)
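The workflow expects the v2 checkpoint to be available locally. A minimal download sketch follows; the exact file name (`flux-canny-controlnet-v2.safetensors`) and target folder (`ComfyUI/models/xlabs/controlnets`) are assumptions here, so check this repository's file list and the custom nodes guide for the authoritative names.
```bash
# Minimal sketch: fetch the v2 Canny checkpoint from this repository with huggingface-cli
# and place it where the XLabs ComfyUI nodes look for ControlNet weights.
# File name and target folder are assumptions; verify them against the repo.
huggingface-cli download XLabs-AI/flux-controlnet-collections \
  flux-canny-controlnet-v2.safetensors \
  --local-dir ComfyUI/models/xlabs/controlnets
```
The Depth and HED v2 checkpoints can be fetched the same way by swapping the file name.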
![Example Picture 1](./assets/canny_v2_res1.png?raw=true)
![Example Picture 1](./assets/canny_v2_res2.png?raw=true)
![Example Picture 1](./assets/canny_v2_res3.png?raw=true)
## Canny ControlNet (version 1)
1. Clone [our repo](https://github.com/XLabs-AI/x-flux) and install the requirements
2. Launch main.py from the command line with parameters, for example:
```bash
python3 main.py \
--prompt "a viking man with white hair looking, cinematic, MM full HD" \
--image input_image_canny.jpg \
--control_type canny \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-canny-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 1](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/canny_example_1.png?raw=true)
## Depth ControlNet (version 2)
1. Clone our [x-flux-comfyui](https://github.com/XLabs-AI/x-flux-comfyui) custom nodes
2. Launch ComfyUI
3. Try our depth_workflow.json
![Example Picture 1](./assets/depth_v2_res1.png?raw=true)
![Example Picture 1](./assets/depth_v2_res2.png?raw=true)
## Depth ControlNet (version 1)
1. Clone [our repo](https://github.com/XLabs-AI/x-flux) and install the requirements
2. Launch main.py from the command line with parameters, for example:
```bash
python3 main.py \
--prompt "Photo of the bold man with beard and laptop, full hd, cinematic photo" \
--image input_image_depth1.jpg \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_1.png?raw=true)
```bash
python3 main.py \
--prompt "photo of handsome fluffy black dog standing on a forest path, full hd, cinematic photo" \
--image input_image_depth2.jpg \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_2.png?raw=true)
```bash
python3 main.py \
--prompt "Photo of japanese village with houses and sakura, full hd, cinematic photo" \
--image input_image_depth3.webp \
--control_type depth \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-depth-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 1024 --height 1024 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/depth_example_3.png?raw=true)
## HED ControlNet (version 1)
```bash
python3 main.py \
--prompt "2d art of a sitting african rich woman, full hd, cinematic photo" \
--image input_image_hed1.jpg \
--control_type hed \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-hed-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/hed_example_1.png?raw=true)
```bash
python3 main.py \
--prompt "anime ghibli style art of a running happy white dog, full hd" \
--image input_image_hed2.jpg \
--control_type hed \
--repo_id XLabs-AI/flux-controlnet-collections --name flux-hed-controlnet.safetensors --device cuda --use_controlnet \
--model_type flux-dev --width 768 --height 768 \
--timestep_to_start_cfg 1 --num_steps 25 --true_gs 3.5 --guidance 4
```
![Example Picture 2](https://github.com/XLabs-AI/x-flux/blob/main/assets/readme/examples/hed_example_2.png?raw=true)
## License
Our weights fall under the [FLUX.1 [dev]](https://huggingface.co/black-forest-labs/FLUX.1-dev/blob/main/LICENSE.md) Non-Commercial License.