
## Usage

```python
import base64

import requests

HF_TOKEN = 'hf_xxxxxxxxxxxxx'
API_ENDPOINT = 'https://xxxxxxxxxxx.us-east-1.aws.endpoints.huggingface.cloud'


def load_image(path):
    """Read an image file as raw bytes; returns None if the file is missing."""
    try:
        with open(path, 'rb') as file:
            return file.read()
    except FileNotFoundError as error:
        print('Error reading image:', error)


def get_b64_image(path):
    """Base64-encode an image so it can travel inside a JSON payload."""
    image_buffer = load_image(path)
    if image_buffer:
        return base64.b64encode(image_buffer).decode('utf-8')


def process_images(original_image_path, mask_image_path, result_path, prompt, width, height):
    original_b64 = get_b64_image(original_image_path)
    mask_b64 = get_b64_image(mask_image_path)

    if not original_b64 or not mask_b64:
        return

    body = {
        'inputs': prompt,
        'image': original_b64,
        'mask_image': mask_b64,
        'width': width,
        'height': height
    }

    headers = {
        'Authorization': f'Bearer {HF_TOKEN}',
        'Content-Type': 'application/json',
        'Accept': 'image/png'
    }

    response = requests.post(
        API_ENDPOINT,
        json=body,
        headers=headers
    )
    # Fail loudly on HTTP errors instead of silently saving an error body as a PNG.
    response.raise_for_status()

    save_image(response.content, result_path)


def save_image(blob, file_path):
    with open(file_path, 'wb') as file:
        file.write(blob)
    print('File saved successfully!')


if __name__ == '__main__':
    original_image_path = 'images/original.png'
    mask_image_path = 'images/mask.png'
    result_path = 'images/result.png'
    process_images(original_image_path, mask_image_path, result_path, 'cyberpunk mona lisa', 512, 768)
```
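Note that Stable Diffusion v1.5 expects `width` and `height` to be multiples of 8, so passing arbitrary sizes can trigger server-side shape errors. A minimal guard, sketched below (the helper name is ours, not part of the endpoint's API):

```python
def snap_to_multiple_of_8(value):
    # Round down to the nearest multiple of 8, with a floor of 8.
    return max(8, (value // 8) * 8)

# Example: snap_to_multiple_of_8(515) -> 512
```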

# ControlNet - v1.1 - Inpaint Version

ControlNet v1.1 was released in lllyasviel/ControlNet-v1-1 by Lvmin Zhang.

This checkpoint is a conversion of the original checkpoint into diffusers format. It can be used in combination with Stable Diffusion, such as runwayml/stable-diffusion-v1-5.

For more details, please also have a look at the 🧨 Diffusers docs.

ControlNet is a neural network structure to control diffusion models by adding extra conditions.
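Concretely, the paper keeps the pretrained diffusion model frozen, attaches a trainable copy of its encoder blocks, and injects that copy's output back through zero-initialized convolutions, so the extra branch contributes nothing at the start of training. A much-simplified sketch of that residual wiring, with a single convolution standing in for a full encoder block (names are illustrative, not the paper's code):

```python
import torch
import torch.nn as nn

class ControlledBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        # Trainable copy of a frozen encoder block (a single conv stands in here).
        self.trainable_copy = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        # Zero-initialized 1x1 "zero convolution": at step 0 the branch adds
        # nothing, so training starts exactly from the frozen model's behavior.
        self.zero_conv = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.zero_conv.weight)
        nn.init.zeros_(self.zero_conv.bias)

    def forward(self, frozen_activation, condition_features):
        return frozen_activation + self.zero_conv(self.trainable_copy(condition_features))
```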


This checkpoint corresponds to the ControlNet conditioned on inpaint images.

## Model Details

### Introduction

ControlNet was proposed in *Adding Conditional Control to Text-to-Image Diffusion Models* by Lvmin Zhang and Maneesh Agrawala.

The abstract reads as follows:

We present a neural network structure, ControlNet, to control pretrained large diffusion models to support additional input conditions. The ControlNet learns task-specific conditions in an end-to-end way, and the learning is robust even when the training dataset is small (< 50k). Moreover, training a ControlNet is as fast as fine-tuning a diffusion model, and the model can be trained on a personal device. Alternatively, if powerful computation clusters are available, the model can scale to large amounts (millions to billions) of data. We report that large diffusion models like Stable Diffusion can be augmented with ControlNets to enable conditional inputs like edge maps, segmentation maps, keypoints, etc. This may enrich the methods to control large diffusion models and further facilitate related applications.

## Example

It is recommended to use the checkpoint with Stable Diffusion v1-5, as the checkpoint has been trained on it. Experimentally, the checkpoint can also be used with other diffusion models, such as Dreambooth-tuned Stable Diffusion.
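Swapping in another base model is just a matter of changing the repository id passed to the pipeline. A minimal sketch, where `"some-user/my-dreambooth-sd15"` is a hypothetical placeholder for any SD v1-5-derived checkpoint in diffusers format:

```python
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_inpaint", torch_dtype=torch.float16
)
# Hypothetical placeholder repo id; substitute any SD v1-5 derivative.
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "some-user/my-dreambooth-sd15", controlnet=controlnet, torch_dtype=torch.float16
)
```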

1. Let's install `diffusers` and related packages:

```sh
$ pip install diffusers transformers accelerate
```

2. Run the code:
```python
import torch
import numpy as np
from diffusers.utils import load_image
from diffusers import (
    ControlNetModel,
    StableDiffusionControlNetPipeline,
    UniPCMultistepScheduler,
)
checkpoint = "lllyasviel/control_v11p_sd15_inpaint"
original_image = load_image(
    "https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/original.png"
)
mask_image = load_image(
    "https://huggingface.co/lllyasviel/control_v11p_sd15_inpaint/resolve/main/images/mask.png"
)

def make_inpaint_condition(image, image_mask):
    image = np.array(image.convert("RGB")).astype(np.float32) / 255.0
    image_mask = np.array(image_mask.convert("L")).astype(np.float32) / 255.0
    # Both inputs must share the same height and width.
    assert image.shape[0:2] == image_mask.shape[0:2], "image and image_mask must have the same image size"
    image[image_mask > 0.5] = -1.0  # white mask pixels mark the region to inpaint
    image = np.expand_dims(image, 0).transpose(0, 3, 1, 2)  # HWC -> NCHW
    image = torch.from_numpy(image)
    return image

control_image = make_inpaint_condition(original_image, mask_image)
prompt = "best quality"
negative_prompt="lowres, bad anatomy, bad hands, cropped, worst quality"
controlnet = ControlNetModel.from_pretrained(checkpoint, torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.scheduler = UniPCMultistepScheduler.from_config(pipe.scheduler.config)
pipe.enable_model_cpu_offload()
generator = torch.manual_seed(2)
image = pipe(prompt, negative_prompt=negative_prompt, num_inference_steps=30, 
             generator=generator, image=control_image).images[0]
image.save('images/output.png')
```

*(Example images: original, mask, inpaint output.)*
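As an aside, recent `diffusers` releases also provide a dedicated `StableDiffusionControlNetInpaintPipeline` that takes the original image and mask directly. A sketch of the equivalent call, assuming the variables (`controlnet`, `prompt`, `negative_prompt`, `original_image`, `mask_image`, `control_image`) from the example above:

```python
import torch
from diffusers import StableDiffusionControlNetInpaintPipeline

pipe = StableDiffusionControlNetInpaintPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnet, torch_dtype=torch.float16
)
pipe.enable_model_cpu_offload()

image = pipe(
    prompt,
    negative_prompt=negative_prompt,
    image=original_image,          # PIL image, as loaded above
    mask_image=mask_image,         # white = region to repaint
    control_image=control_image,   # from make_inpaint_condition(...)
    num_inference_steps=30,
    generator=torch.manual_seed(2),
).images[0]
```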

## Other released checkpoints v1-1

The authors released 14 different checkpoints, each trained with Stable Diffusion v1-5 on a different type of conditioning:

| Model Name | Control Image Overview | Condition Image |
|---|---|---|
| lllyasviel/control_v11p_sd15_canny | Trained with canny edge detection | A monochrome image with white edges on a black background. |
| lllyasviel/control_v11e_sd15_ip2p | Trained with pixel to pixel instruction | No condition. |
| lllyasviel/control_v11p_sd15_inpaint | Trained with image inpainting | No condition. |
| lllyasviel/control_v11p_sd15_mlsd | Trained with multi-level line segment detection | An image with annotated line segments. |
| lllyasviel/control_v11f1p_sd15_depth | Trained with depth estimation | An image with depth information, usually represented as a grayscale image. |
| lllyasviel/control_v11p_sd15_normalbae | Trained with surface normal estimation | An image with surface normal information, usually represented as a color-coded image. |
| lllyasviel/control_v11p_sd15_seg | Trained with image segmentation | An image with segmented regions, usually represented as a color-coded image. |
| lllyasviel/control_v11p_sd15_lineart | Trained with line art generation | An image with line art, usually black lines on a white background. |
| lllyasviel/control_v11p_sd15s2_lineart_anime | Trained with anime line art generation | An image with anime-style line art. |
| lllyasviel/control_v11p_sd15_openpose | Trained with human pose estimation | An image with human poses, usually represented as a set of keypoints or skeletons. |
| lllyasviel/control_v11p_sd15_scribble | Trained with scribble-based image generation | An image with scribbles, usually random or user-drawn strokes. |
| lllyasviel/control_v11p_sd15_softedge | Trained with soft edge image generation | An image with soft edges, usually to create a more painterly or artistic effect. |
| lllyasviel/control_v11e_sd15_shuffle | Trained with image shuffling | An image with shuffled patches or regions. |
| lllyasviel/control_v11f1e_sd15_tile | Trained with image tiling | A blurry image or part of an image. |
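Every checkpoint in the table loads the same way; only the repository id and the conditioning image you prepare differ. A minimal sketch, using the canny checkpoint purely as an example:

```python
import torch
from diffusers import ControlNetModel

# Substitute any repo id from the table above.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/control_v11p_sd15_canny", torch_dtype=torch.float16
)
```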

## More information

For more information, please have a look at the Diffusers ControlNet Blog Post and the official docs.
