franciszzj committed
Commit f971f0c (1 parent: 68f6086)

update readme

Files changed (1): README.md (+68 −2)
---
title: Leffa
emoji: 👗🤗🧜
colorFrom: indigo
colorTo: blue
sdk: gradio
pinned: false
license: mit
---

# *Leffa*: Learning Flow Fields in Attention for Controllable Person Image Generation

[📚 Paper](https://arxiv.org/abs/2412.08486) - [🤖 Code](https://github.com/franciszzj/Leffa) - [🔥 Demo](https://huggingface.co/spaces/franciszzj/Leffa) - [🤗 Model](https://huggingface.co/franciszzj/Leffa)

## News
- 16/Dec/2024: the virtual try-on [model](https://huggingface.co/franciszzj/Leffa/blob/main/virtual_tryon_dc.pth) trained on DressCode is released.
- 12/Dec/2024: the HuggingFace [demo](https://huggingface.co/spaces/franciszzj/Leffa) and [models](https://huggingface.co/franciszzj/Leffa) (virtual try-on model trained on VITON-HD and pose transfer model trained on DeepFashion) are released.
- 11/Dec/2024: the [arXiv](https://arxiv.org/abs/2412.08486) version of the paper is released.

*[Leffa](https://en.wiktionary.org/wiki/leffa)* is a unified framework for controllable person image generation that enables precise manipulation of both appearance (i.e., virtual try-on) and pose (i.e., pose transfer).

<div align="center">
  <img src="https://huggingface.co/franciszzj/Leffa/resolve/main/assets/teaser.png" width="100%" height="100%"/>
</div>

## Abstract
Controllable person image generation aims to generate a person image conditioned on reference images, allowing precise control over the person’s appearance or pose. However, prior methods often distort fine-grained textural details from the reference image, despite achieving high overall image quality. We attribute these distortions to inadequate attention to corresponding regions in the reference image. To address this, we propose **le**arning **f**low **f**ields in **a**ttention (***Leffa***), which explicitly guides the target query to attend to the correct reference key in the attention layer during training. Specifically, it is realized via a regularization loss on top of the attention map within a diffusion-based baseline. Our extensive experiments show that *Leffa* achieves state-of-the-art performance in controlling appearance (virtual try-on) and pose (pose transfer), significantly reducing fine-grained detail distortion while maintaining high image quality. Additionally, we show that our loss is model-agnostic and can be used to improve the performance of other diffusion models.

## Method
An overview of our *Leffa* training pipeline for controllable person image generation. The left is our diffusion-based baseline; the right is our *Leffa* loss. Note that I<sub>src</sub> and I<sub>tgt</sub> are the same image during training.

<div align="center">
  <img src="https://huggingface.co/franciszzj/Leffa/resolve/main/assets/leffa.png" width="100%" height="100%"/>
</div>
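
The core idea can be pictured as reading the attention map as a flow field. As a rough, hypothetical sketch (not the authors' implementation; the function and variable names here are our own), each query's attention row induces an expected reference location, and the loss penalizes its deviation from the known correspondence:

```python
import numpy as np

def attention_flow_loss(attn, key_coords, target_coords):
    """Hypothetical sketch of a flow-field regularizer on an attention map.

    attn:          (Q, K) row-stochastic attention map (each row sums to 1).
    key_coords:    (K, 2) spatial coordinates of the reference keys.
    target_coords: (Q, 2) coordinate each query should attend to (known at
                   training time because I_src and I_tgt are the same image).
    """
    # Attention-weighted average of key coordinates = induced flow field.
    flow = attn @ key_coords  # (Q, 2)
    # Penalize deviation from the ground-truth correspondence.
    return float(np.mean((flow - target_coords) ** 2))
```

With a perfectly peaked attention map, the induced flow matches the target and the loss is zero; attention spread over the wrong reference regions is penalized.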

## Visualization
Qualitative comparison with other methods. The input person image for pose transfer is generated by our method in the virtual try-on setting. The results demonstrate that our method not only generates high-quality images but also greatly reduces the distortion of fine-grained details.

<div align="center">
  <img src="https://huggingface.co/franciszzj/Leffa/resolve/main/assets/vis_result.png" width="100%" height="100%"/>
</div>

## Installation
Clone the repository, then create a conda environment and install the requirements:
```shell
git clone https://github.com/franciszzj/Leffa.git
cd Leffa
conda create -n leffa python=3.10
conda activate leffa
pip install -r requirements.txt
```

## Gradio App
Run locally:
```shell
python app.py
```
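
The pretrained weights can also be fetched without running the app. A minimal sketch (the helper below is our own; with the `huggingface_hub` package installed, `hf_hub_download(repo_id, filename)` would download and cache the same file):

```python
# Sketch: build the direct download URL for a checkpoint in the
# franciszzj/Leffa model repo on the Hugging Face Hub.
REPO_ID = "franciszzj/Leffa"

def checkpoint_url(filename: str, revision: str = "main") -> str:
    """Return the resolve URL for a file in the model repo."""
    return f"https://huggingface.co/{REPO_ID}/resolve/{revision}/{filename}"

# e.g. the DressCode virtual try-on checkpoint mentioned in the News section:
print(checkpoint_url("virtual_tryon_dc.pth"))
```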

## Evaluation
We use [VtonEval](https://github.com/franciszzj/VtonEval) for metric evaluation.

## Acknowledgement
Our code is based on [Diffusers](https://github.com/huggingface/diffusers) and [Transformers](https://github.com/huggingface/transformers).
We use [SCHP](https://github.com/GoGoDuck912/Self-Correction-Human-Parsing/tree/master) and [DensePose](https://github.com/facebookresearch/DensePose) to generate masks and DensePose estimates in our [demo](https://huggingface.co/spaces/franciszzj/Leffa).
We also referred to the code of [IDM-VTON](https://github.com/yisol/IDM-VTON) and [CatVTON](https://github.com/Zheng-Chong/CatVTON).

## Citation
If you find our work helpful or inspiring, please feel free to cite it:
```bibtex
@article{zhou2024learning,
  title={Learning Flow Fields in Attention for Controllable Person Image Generation},
  author={Zhou, Zijian and Liu, Shikun and Han, Xiao and Liu, Haozhe and Ng, Kam Woh and Xie, Tian and Cong, Yuren and Li, Hang and Xu, Mengmeng and Pérez-Rúa, Juan-Manuel and Patel, Aditya and Xiang, Tao and Shi, Miaojing and He, Sen},
  journal={arXiv preprint arXiv:2412.08486},
  year={2024}
}
```