---
license: creativeml-openrail-m
language:
- en
---
# MusePose

MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation.

Zhengyan Tong, Chao Li, Zhaokang Chen, Bin Wu<sup>†</sup>, Wenjiang Zhou

(<sup>†</sup>Corresponding Author, benbinwu@tencent.com)

**[github](https://github.com/TMElyralab/MusePose)** **[huggingface](https://huggingface.co/TMElyralab/MusePose)** **space (coming soon)** **Project (coming soon)** **Technical report (coming soon)**

[MusePose](https://github.com/TMElyralab/MusePose) is an image-to-video generation framework for virtual humans under control signals such as pose.

`MusePose` is the last building block of **the Muse open-source series**. Together with [MuseV](https://github.com/TMElyralab/MuseV) and [MuseTalk](https://github.com/TMElyralab/MuseTalk), we hope the community can join us and march towards the vision where a virtual human can be generated end-to-end with native abilities of full-body movement and interaction.

We really appreciate [AnimateAnyone](https://github.com/HumanAIGC/AnimateAnyone) for their academic paper and [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) for their code base, which have significantly expedited the development of the AIGC community and [MusePose](https://github.com/TMElyralab/MusePose).
## Overview
[MusePose](https://github.com/TMElyralab/MusePose) is a diffusion-based and pose-guided virtual human video generation framework.
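
For intuition, here is a minimal sketch of how such a pipeline fits together, loosely following the AnimateAnyone-style design this project builds on: features of the single reference image keep the character's appearance fixed, while per-frame pose features steer the motion during iterative denoising. All names, shapes, and numbers below are illustrative stand-ins, not MusePose's actual API.

```python
import numpy as np

# Illustrative shapes only: 8 frames of 64x64 latents with 4 channels.
T, C, H, W = 8, 4, 64, 64
rng = np.random.default_rng(0)

def encode_reference(image):
    """Stand-in for a reference encoder (e.g. a ReferenceNet): features of
    the single reference image, reused for every generated frame."""
    return float(image.mean()) * np.ones((C, H, W))

def encode_poses(pose_maps):
    """Stand-in for a pose guider: per-frame skeleton maps projected into
    the latent space so each frame follows its target pose."""
    return pose_maps[:, :C]  # (T, C, H, W)

def denoise_step(latents, ref_feat, pose_feat, t, num_steps):
    """Placeholder for the denoising UNet: predicts (fake) noise from the
    latents plus both conditions, then removes part of it."""
    pred_noise = 0.1 * latents + 0.01 * (ref_feat + pose_feat)
    return latents - pred_noise * (t / num_steps)

reference = rng.random((512, 512, 3))        # one reference image
pose_maps = rng.random((T, 16, H, W))        # T rendered skeleton frames

ref_feat, pose_feat = encode_reference(reference), encode_poses(pose_maps)

num_steps = 25
latents = rng.standard_normal((T, C, H, W))  # start from pure noise
for t in range(num_steps, 0, -1):            # iterative denoising loop
    latents = denoise_step(latents, ref_feat, pose_feat, t, num_steps)

print(latents.shape)  # (8, 4, 64, 64): latents a VAE would decode to frames
```

In the real system the encoders and denoiser are trained networks and a VAE decodes the latents into frames; the sketch only shows how the two conditioning signals enter the denoising loop.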
Our main contributions could be summarized as follows:
1. The released model can generate dance videos of the human character in a reference image under a given pose sequence. The result quality exceeds almost all current open-source models on the same task.
2. We release the `pose align` algorithm so that users can align arbitrary dance videos to arbitrary reference images, which **SIGNIFICANTLY** improves inference performance and enhances model usability (see the sketch after this list).
3. We have fixed several important bugs and made some improvements based on the code of [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone).
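
To make contribution 2 concrete, the sketch below shows the core idea of such an alignment step: a similarity transform that rescales and shifts the dancer's 2D keypoints so the skeleton matches the reference character's size and position. It assumes dwpose-style keypoints have already been extracted; the function name, joint ordering, and choice of anchor joints are our own illustration, not the repository's actual implementation.

```python
import numpy as np

def align_pose(video_kpts, ref_kpts, anchors=(2, 5, 8, 11)):
    """Scale and translate every frame's keypoints so the dancer's skeleton
    matches the reference character.

    video_kpts: (T, J, 2) 2D keypoints per frame of the dance video
    ref_kpts:   (J, 2)    2D keypoints detected on the reference image
    anchors:    joint indices used to measure body size (shoulders and hips
                in an OpenPose-style ordering -- an assumed convention)
    """
    idx = list(anchors)

    def body_span(kpts):
        # Rough body size: diagonal spread of the anchor joints.
        pts = kpts[idx]
        return np.linalg.norm(pts.max(axis=0) - pts.min(axis=0))

    # Match the dancer's proportions to the reference character.
    scale = body_span(ref_kpts) / body_span(video_kpts[0])
    scaled = video_kpts * scale

    # Shift so the first frame's torso centroid lands on the reference's;
    # later frames keep their relative motion.
    offset = ref_kpts[idx].mean(axis=0) - scaled[0, idx].mean(axis=0)
    return scaled + offset

# Toy usage with random keypoints (T=4 frames, J=18 joints):
rng = np.random.default_rng(0)
aligned = align_pose(rng.random((4, 18, 2)) * 480, rng.random((18, 2)) * 480)
print(aligned.shape)  # (4, 18, 2): aligned keypoints, ready to re-render
```

The aligned keypoints would then be re-rendered as pose maps and fed to the generation model, so the driving motion matches the reference character's body instead of the original dancer's.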

## Demos
<table class="center">
<tr>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/bb52ca3e-8a5c-405a-8575-7ab42abca248" muted="false"></video>
</td>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/6667c9ae-8417-49a1-bbbb-fe1695404c23" muted="false"></video>
</td>
</tr>

<tr>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/7f7a3aaf-2720-4b50-8bca-3257acce4733" muted="false"></video>
</td>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/c56f7e9c-d94d-494e-88e6-62a4a3c1e016" muted="false"></video>
</td>
</tr>

<tr>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/00a9faec-2453-4834-ad1f-44eb0ec8247d" muted="false"></video>
</td>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/41ad26b3-d477-4975-bf29-73a3c9ed0380" muted="false"></video>
</td>
</tr>

<tr>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/2bbebf98-6805-4f1b-b769-537f69cc0e4b" muted="false"></video>
</td>
<td width=50% style="border: none">
<video controls autoplay loop src="https://github.com/TMElyralab/MusePose/assets/47803475/1b2b97d0-0ae9-49a6-83ba-b3024ae64f08" muted="false"></video>
</td>
</tr>

</table>

## News
- [05/27/2024] Released `MusePose` and pretrained models.

## Todo:
- [x] release our trained models and inference code of MusePose-v1.
- [x] release the pose align algorithm.
- [ ] training guidelines.
- [ ] Hugging Face Gradio demo.
- [ ] an improved architecture and model (may take longer).

# Acknowledgement
1. We thank [AnimateAnyone](https://github.com/HumanAIGC/AnimateAnyone) for their technical report, and we have referred extensively to [Moore-AnimateAnyone](https://github.com/MooreThreads/Moore-AnimateAnyone) and [diffusers](https://github.com/huggingface/diffusers).
1. We thank open-source components like [AnimateDiff](https://animatediff.github.io/), [dwpose](https://github.com/IDEA-Research/DWPose), [Stable Diffusion](https://github.com/CompVis/stable-diffusion), etc.

Thanks for open-sourcing!
# Limitations
- Detail consistency: some details of the original character are not well preserved (e.g. the face region and complex clothing).
- Noise and flickering: we observe noise and flickering in complex backgrounds.

# Citation
```bib
@article{musepose,
  title={MusePose: a Pose-Driven Image-to-Video Framework for Virtual Human Generation},
  author={Tong, Zhengyan and Li, Chao and Chen, Zhaokang and Wu, Bin and Zhou, Wenjiang},
  journal={arxiv},
  year={2024}
}
```
# Disclaimer/License
1. `code`: The code of MusePose is released under the MIT License. There is no limitation for either academic or commercial usage.
1. `model`: The trained models are available for non-commercial research purposes only.
1. `other opensource models`: Other open-source models used must comply with their own licenses, such as `ft-mse-vae`, `dwpose`, etc.
1. The test data are collected from the Internet and are available for non-commercial research purposes only.
1. `AIGC`: This project strives to impact the domain of AI-driven video generation positively. Users are granted the freedom to create videos using this tool, but they are expected to comply with local laws and use it responsibly. The developers do not assume any responsibility for potential misuse by users.