---
license: other
license_name: bria-2.3
license_link: https://bria.ai/bria-huggingface-model-license-agreement/
inference: false
tags:
- text-to-image
- legal liability
- commercial use
- ip-adapter
extra_gated_description: >-
  BRIA 2.3 IP-Adapter requires access to BRIA 2.3 Text-to-Image model
extra_gated_heading: Fill in this form to get access
extra_gated_fields:
  Name:
    type: text
  Company/Org name:
    type: text
  Org Type (Early/Growth Startup, Enterprise, Academy):
    type: text
  Role:
    type: text
  Country:
    type: text
  Email:
    type: text
  By submitting this form, I agree to BRIA’s Privacy policy and Terms & conditions, see links below:
    type: checkbox
---

# BRIA 2.3 Image-Prompt

BRIA 2.3 Image-Prompt enables the generation of high-quality images guided by an input image, alongside (or instead of) a textual prompt. This makes it possible to create images inspired by the content or style of an existing image, which is useful for generating image variations or for transferring the style or content of an image. This module uses the architecture of [IP-Adapter-Plus](https://huggingface.co/papers/2308.06721) and is trained on the foundation of [BRIA 2.3 Text-to-Image](https://huggingface.co/briaai/BRIA-2.3).

This adapter can be used in combination with other adapters trained over our foundation model, such as [ControlNet-Depth](https://huggingface.co/briaai/BRIA-2.3-ControlNet-Depth) or [ControlNet-Canny](https://huggingface.co/briaai/BRIA-2.3-ControlNet-Canny).
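
The examples below don't show this combination, so here is a minimal sketch. It assumes BRIA 2.3 follows the standard SDXL ControlNet loading pattern in diffusers, and that a depth map has already been computed (`examples/depth_map.png` is a hypothetical path):

```py
from diffusers import AutoPipelineForText2Image, ControlNetModel
from diffusers.utils import load_image
import torch

# Passing `controlnet` makes AutoPipelineForText2Image resolve to the
# ControlNet variant of the pipeline.
controlnet = ControlNetModel.from_pretrained(
    "briaai/BRIA-2.3-ControlNet-Depth", torch_dtype=torch.float16
)
pipeline = AutoPipelineForText2Image.from_pretrained(
    "briaai/BRIA-2.3",
    controlnet=controlnet,
    torch_dtype=torch.float16,
    force_zeros_for_empty_prompt=False,
).to("cuda")
pipeline.load_ip_adapter("briaai/Image-Prompt", subfolder="models", weight_name="ip_adapter_bria.bin")
pipeline.set_ip_adapter_scale(0.7)

# The IP-Adapter image steers style/content while the depth map constrains layout.
depth_map = load_image("examples/depth_map.png")  # hypothetical pre-computed depth image
ip_image = load_image("examples/example1.jpg").resize((224, 224))
images = pipeline(
    prompt="high quality",
    image=depth_map,
    ip_adapter_image=ip_image,
    num_inference_steps=25,
    height=1024, width=1024,
    guidance_scale=7,
).images
```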

Similar to [BRIA 2.3](https://huggingface.co/briaai/BRIA-2.3), this adapter was trained from scratch exclusively on licensed data from our data partners. Therefore, it is safe for commercial use and provides full legal liability coverage for copyright and privacy infringement, as well as harmful content mitigation. That is, our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.

#### Image Variations (textual prompt: "high quality"):
![Image Variations](https://huggingface.co/briaai/DEV-Image-Prompt/resolve/main/examples/image_variations.png)

#### Style Transfer (textual prompt: "capybara"):
![Style Transfer](https://huggingface.co/briaai/DEV-Image-Prompt/resolve/main/examples/style_transfer.png)

### Model Description

- **Developed by:** BRIA AI
- **Model type:** [IP-Adapter](https://huggingface.co/docs/diffusers/using-diffusers/ip_adapter) for latent diffusion
- **License:** [Commercial licensing terms & conditions.](https://bria.ai/customer-general-terms-and-conditions)
- **Model Description:** IP-Adapter for the BRIA 2.3 Text-to-Image model. The model generates images guided by an image prompt.
- **Resources for more information:** [BRIA AI](https://bria.ai/)

BRIA AI licenses the foundation model on which this model was trained, with full legal liability coverage. Our dataset does not contain copyrighted materials, such as fictional characters, logos, trademarks, public figures, harmful content, or privacy-infringing content.
For more information, please visit our [website](https://bria.ai/).

### Get Access
Interested in BRIA 2.3? Purchase is required to license and access BRIA 2.3, ensuring royalty management with our data partners and full liability coverage for commercial use.

Are you a startup or a student? We encourage you to apply for our [Startup Program](https://pages.bria.ai/the-visual-generative-ai-platform-for-builders-startups-plan) to request access. This program is designed to support emerging businesses and academic pursuits with our cutting-edge technology.

Contact us today to unlock the potential of BRIA 2.3! By submitting the form above, you agree to BRIA’s [Privacy policy](https://bria.ai/privacy-policy/) and [Terms & conditions](https://bria.ai/terms-and-conditions/).

### Code example using Diffusers

```bash
pip install diffusers
```

```py
from diffusers import AutoPipelineForText2Image
from diffusers.utils import load_image
import torch

pipeline = AutoPipelineForText2Image.from_pretrained(
    "briaai/BRIA-2.3",
    torch_dtype=torch.float16,
    force_zeros_for_empty_prompt=False,
).to("cuda")
pipeline.load_ip_adapter("briaai/Image-Prompt", subfolder="models", weight_name="ip_adapter_bria.bin")
```

## Create variations of the input image

```py
pipeline.set_ip_adapter_scale(1.0)
image = load_image("examples/example1.jpg")
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt="high quality",
    ip_adapter_image=image.resize((224, 224)),
    num_inference_steps=25,
    generator=generator,
    height=1024, width=1024,
    guidance_scale=7,
).images
images[0]
```

## Use both image and textual prompt as inputs

```py
textual_prompt = "Paris, high quality"
pipeline.set_ip_adapter_scale(0.7)
image = load_image("examples/example2.jpg")
generator = torch.Generator(device="cpu").manual_seed(0)
images = pipeline(
    prompt=textual_prompt,
    ip_adapter_image=image.resize((224, 224)),
    num_inference_steps=25,
    generator=generator,
    height=1024, width=1024,
    guidance_scale=7,
).images
images[0]
```

### Some tips for using our text-to-image model at inference:

1. You must set `force_zeros_for_empty_prompt=False`, as in the loading example above.
2. For image variations, you can try using an empty prompt. You can also add a negative prompt (see the sketch after this list).
3. We support multiple aspect ratios, but the total resolution should be approximately `1024*1024=1M` pixels, for example:
`(1024,1024), (1280, 768), (1344, 768), (832, 1216), (1152, 832), (1216, 832), (960,1088)`
4. Change the scale of the IP-Adapter with the `set_ip_adapter_scale()` method (range 0-1). The higher the scale, the closer the output is to the input image.
5. Resize the input image to a square; otherwise, the CLIP image embedder will perform a center crop.
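
Putting tips 2-5 together, here is a minimal sketch of a pure image variation with a negative prompt and a non-square aspect ratio; the negative prompt text and the input path are illustrative only:

```py
pipeline.set_ip_adapter_scale(1.0)  # tip 4: higher scale keeps the output closer to the input image
image = load_image("examples/example1.jpg").resize((224, 224))  # tip 5: square input avoids CLIP center-crop
images = pipeline(
    prompt="",                              # tip 2: empty prompt for pure variations
    negative_prompt="blurry, low quality",  # tip 2: optional negative prompt (illustrative)
    ip_adapter_image=image,
    num_inference_steps=25,
    height=1280, width=768,                 # tip 3: non-square aspect ratio, still ~1M pixels
    guidance_scale=7,
).images
images[0]
```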