BRIA-2.3-ControlNet-Background-Generation Model Card
BRIA 2.3 ControlNet-Background Generation, trained on the foundation of BRIA 2.3 Text-to-Image, enables the generation of high-quality images guided by a textual prompt and a background mask extracted from an input image. This makes it possible to create different background variations of an image that all share the same foreground.
BRIA 2.3 was trained from scratch exclusively on licensed data from our esteemed data partners. It is therefore safe for commercial use and comes with full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
Join our Discord community for more information, tutorials, tools, and to connect with other users!
Model Description
Developed by: BRIA AI
Model type: ControlNet for latent diffusion
License: bria-2.3
Model Description: ControlNet Background-Generation for BRIA 2.3 Text-to-Image model. The model generates images guided by text and the background mask.
Resources for more information: BRIA AI
Get Access
BRIA 2.3 ControlNet-Background Generation requires access to BRIA 2.3 Text-to-Image. For more information, click here.
Usage
Installation
Install huggingface_hub and log in if needed:
https://huggingface.co/docs/huggingface_hub/en/guides/cli#getting-started
https://huggingface.co/docs/huggingface_hub/en/quick-start#authentication
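For example, a minimal sequence (assuming the CLI extra of huggingface_hub is not already installed) is:

pip install -U "huggingface_hub[cli]"
huggingface-cli login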
Download and install BRIA-2.3-ControlNet-BG-Gen
pip install -qr https://huggingface.co/briaai/BRIA-2.3-ControlNet-BG-Gen/resolve/main/requirements.txt

The requirements file pins the following dependencies:

torch
torchvision
pillow
numpy
scikit-image
diffusers==0.26.2
transformers>=4.39.1

Then download the replace_bg package (the pipeline, ControlNet, and mask utilities imported by the script below) from the model repository:

huggingface-cli download briaai/BRIA-2.3-ControlNet-BG-Gen --include replace_bg/* --local-dir . --quiet
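If you prefer to stay in Python, an equivalent programmatic sketch uses huggingface_hub's snapshot_download; the repo id and file pattern mirror the CLI call above:

from huggingface_hub import snapshot_download

# Fetch only the replace_bg/ package into the current directory,
# mirroring the huggingface-cli call above.
snapshot_download(
    repo_id="briaai/BRIA-2.3-ControlNet-BG-Gen",
    allow_patterns="replace_bg/*",
    local_dir=".",
)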
Run the inference script
import torch
from diffusers import (
    AutoencoderKL,
    EulerAncestralDiscreteScheduler,
)
from diffusers.utils import load_image
from replace_bg.model.pipeline_controlnet_sd_xl import StableDiffusionXLControlNetPipeline
from replace_bg.model.controlnet import ControlNetModel
from replace_bg.utilities import resize_image, remove_bg_from_image, paste_fg_over_image, get_control_image_tensor

# Load the ControlNet, the fp16-safe SDXL VAE, and the BRIA 2.3 base pipeline.
controlnet = ControlNetModel.from_pretrained("briaai/BRIA-2.3-ControlNet-BG-Gen", torch_dtype=torch.float16)
vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
pipe = StableDiffusionXLControlNetPipeline.from_pretrained("briaai/BRIA-2.3", controlnet=controlnet, torch_dtype=torch.float16, vae=vae).to('cuda:0')

# Configure the noise scheduler.
pipe.scheduler = EulerAncestralDiscreteScheduler(
    beta_start=0.00085,
    beta_end=0.012,
    beta_schedule="scaled_linear",
    num_train_timesteps=1000,
    steps_offset=1
)

# Load the input image, resize it to a supported resolution, and extract
# the foreground mask; the background is what gets regenerated.
image_path = "https://farm5.staticflickr.com/4007/4322154488_997e69e4cf_z.jpg"
image = load_image(image_path)
image = resize_image(image)
mask = remove_bg_from_image(image_path)

# Build the ControlNet conditioning tensor from the VAE, image, and mask.
control_tensor = get_control_image_tensor(pipe.vae, image, mask)

prompt = "in a zoo"
negative_prompt = "Logo,Watermark,Text,Ugly,Bad proportions,Bad quality,Out of frame,Mutation"

# Fix the seed for reproducible results.
generator = torch.Generator(device="cuda:0").manual_seed(0)
gen_img = pipe(
    negative_prompt=negative_prompt,
    prompt=prompt,
    controlnet_conditioning_scale=1.0,
    num_inference_steps=50,
    image=control_tensor,
    generator=generator,
).images[0]

# Paste the original foreground back over the generated background.
result_image = paste_fg_over_image(gen_img, image, mask)
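The utilities in replace_bg work with PIL images, so assuming paste_fg_over_image likewise returns one (the filename below is illustrative), the composite can be saved directly:

result_image.save("result.png")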