---
license: other
tags:
- pytorch
- diffusers
- face image enhancement
---

# DifFace: Blind Face Restoration with Diffused Error Contraction

**Paper**: [DifFace: Blind Face Restoration with Diffused Error Contraction](https://arxiv.org/abs/2212.06512)

**Authors**: Zongsheng Yue, Chen Change Loy

**Abstract**:

*While deep learning-based methods for blind face restoration have achieved unprecedented success, they still suffer from two major limitations. First, most of them deteriorate when facing complex degradations out of their training data. Second, these methods require multiple constraints, e.g., fidelity, perceptual, and adversarial losses, which require laborious hyper-parameter tuning to stabilize and balance their influences. In this work, we propose a novel method named DifFace that is capable of coping with unseen and complex degradations more gracefully without complicated loss designs. The key of our method is to establish a posterior distribution from the observed low-quality (LQ) image to its high-quality (HQ) counterpart. In particular, we design a transition distribution from the LQ image to the intermediate state of a pre-trained diffusion model and then gradually transmit from this intermediate state to the HQ target by recursively applying a pre-trained diffusion model. The transition distribution only relies on a restoration backbone that is trained with L2 loss on some synthetic data, which favorably avoids the cumbersome training process in existing methods. Moreover, the transition distribution can contract the error of the restoration backbone and thus makes our method more robust to unknown degradations. Comprehensive experiments show that DifFace is superior to current state-of-the-art methods, especially in cases with severe degradations.*

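To unpack the abstract a little: the transition distribution applies the standard diffusion forward marginal to the restorer's output at an intermediate timestep N. A sketch of this in common DDPM notation (the symbols below are illustrative and may differ from the paper's exact formulation):

```latex
% f(y_0): output of the L2-trained restoration backbone on the LQ image y_0
% \bar{\alpha}_N: cumulative product of the noise schedule at intermediate step N < T
p\bigl(x_N \mid y_0\bigr) = \mathcal{N}\!\Bigl(x_N;\; \sqrt{\bar{\alpha}_N}\, f(y_0),\; (1 - \bar{\alpha}_N)\,\mathbf{I}\Bigr)
```

Sampling then runs the pre-trained reverse diffusion from x_N down to x_0; because N < T, noise is injected on top of f(y_0), which attenuates (contracts) the restorer's error rather than propagating it directly.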
## Inference

```python
# !pip install diffusers
import cv2
from diffusers import DifFacePipeline

model_id = "OAOA/DifFace"

# load the pre-trained model and scheduler
pipe = DifFacePipeline.from_pretrained(model_id)
pipe = pipe.to("cuda")

im_lr = cv2.imread(im_path)  # read the low-quality face image (set im_path to your input file)
im_sr = pipe(im_lr, num_inference_steps=250, started_steps=100, aligned=True)['images'][0]
im_sr.save("restored_difface.png")  # save the restored result
```

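For intuition about what `started_steps` controls: DifFace first produces a coarse estimate with the restoration backbone, diffuses that estimate forward to an intermediate timestep, and then denoises it with the pre-trained diffusion model. A minimal, self-contained sketch of this sampling loop (the `restorer` and `eps_model` callables, the linear schedule, and the function name are illustrative assumptions, not the official implementation):

```python
import torch

# standard DDPM noise schedule (values illustrative, not the paper's exact schedule)
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bar = torch.cumprod(alphas, dim=0)

def difface_sample(im_lq, restorer, eps_model, started_step=100):
    """Sketch of DifFace sampling: restore, diffuse to step N, then denoise."""
    # 1) coarse HQ estimate from the L2-trained restoration backbone
    y0 = restorer(im_lq)
    # 2) diffuse the estimate forward to the intermediate state x_N
    n = started_step - 1  # 0-indexed timestep
    x = alpha_bar[n].sqrt() * y0 + (1.0 - alpha_bar[n]).sqrt() * torch.randn_like(y0)
    # 3) run the pre-trained reverse diffusion from x_N back to x_0
    for t in range(n, -1, -1):
        eps = eps_model(x, t)  # predicted noise at step t
        mean = (x - betas[t] / (1.0 - alpha_bar[t]).sqrt() * eps) / alphas[t].sqrt()
        x = (mean + betas[t].sqrt() * torch.randn_like(x)) if t > 0 else mean
    return x
```

Because sampling starts at `started_step` rather than at `T`, only part of the reverse chain is run, which also makes inference cheaper than full diffusion sampling from pure noise.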

## Training

If you want to train your own model, please have a look at the [official training example](https://github.com/zsyOAOA/DifFace).

## Samples

[<img src="assets/Solvay_conference.png" width="805px"/>](https://imgsli.com/MTM5NTgw)

[<img src="assets/Hepburn.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTc5) [<img src="assets/oldimg_05.png" height="555px" width="400px"/>](https://imgsli.com/MTM5NTgy)

<img src="cropped_faces/0368.png" height="200px" width="200px"/><img src="assets/0368.png" height="200px" width="200px"/> <img src="cropped_faces/0885.png" height="200px" width="200px"/><img src="assets/0885.png" height="200px" width="200px"/>

<img src="cropped_faces/0729.png" height="200px" width="200px"/><img src="assets/0729.png" height="200px" width="200px"/> <img src="cropped_faces/0934.png" height="200px" width="200px"/><img src="assets/0934.png" height="200px" width="200px"/>
|
|