---
license: other
tags:
  - pytorch
  - diffusers
  - face image enhancement
---

# DifFace: Blind Face Restoration with Diffused Error Contraction

**Paper:** DifFace: Blind Face Restoration with Diffused Error Contraction

**Authors:** Zongsheng Yue, Chen Change Loy

## Abstract

While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which require laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key of our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transmit from this intermediate state to the HQ target by recursively applying a pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with L2 loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations.
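The sampling idea in the abstract can be sketched in toy NumPy code: a restoration backbone produces a rough high-quality estimate, that estimate is diffused to an intermediate timestep `N` of a pre-trained diffusion model, and the reverse diffusion chain is then run from `N` back to 0. This is a minimal conceptual sketch, not the official implementation; both networks below are stand-ins, and the `started_step` name simply mirrors the `started_steps` parameter of the inference snippet.

```python
import numpy as np

# Toy DDPM-style schedule (illustrative values, not DifFace's actual schedule).
T = 1000
betas = np.linspace(1e-4, 0.02, T)
alphas_bar = np.cumprod(1.0 - betas)

def restorer(x_lq):
    # Stand-in for the L2-trained restoration backbone.
    return x_lq

def diffusion_eps(x_t, t):
    # Stand-in for the pre-trained noise-prediction network.
    return np.zeros_like(x_t)

def difface_sample(x_lq, started_step=100, rng=None):
    rng = np.random.default_rng(rng)
    # 1) Rough HQ estimate from the restoration backbone.
    x0_hat = restorer(x_lq)
    # 2) "Transition distribution": diffuse the estimate to timestep N.
    a_N = alphas_bar[started_step]
    x_t = np.sqrt(a_N) * x0_hat + np.sqrt(1.0 - a_N) * rng.standard_normal(x0_hat.shape)
    # 3) Reverse diffusion from N down to 0 (DDPM mean step; no noise at the last step).
    for t in range(started_step, 0, -1):
        beta_t = betas[t]
        alpha_t = 1.0 - beta_t
        eps = diffusion_eps(x_t, t)
        mean = (x_t - beta_t / np.sqrt(1.0 - alphas_bar[t]) * eps) / np.sqrt(alpha_t)
        noise = rng.standard_normal(x_t.shape) if t > 1 else 0.0
        x_t = mean + np.sqrt(beta_t) * noise
    return x_t

out = difface_sample(np.zeros((8, 8, 3)), started_step=100, rng=0)
print(out.shape)  # the output keeps the input's shape
```

Because only part of the chain (`started_step` of `T` steps) is traversed, sampling is cheaper than full diffusion, and the forward-diffusion step contracts the backbone's restoration error before denoising begins.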

## Inference

```python
# !pip install diffusers
import cv2
from diffusers import DifFacePipeline

model_id = "OAOA/DifFace"

# load the model and scheduler
pipe = DifFacePipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

im_lr = cv2.imread(im_path)  # read the low-quality face image

# restore the face; `aligned=True` expects a cropped and aligned face image
im_sr = pipe(im_lr, num_inference_steps=250, started_steps=100, aligned=True)['images'][0]

im_sr.save("restored_difface.png")  # save the result
```

## Training

If you want to train your own model, please refer to the official training example.

## Samples