arxiv:2307.04725

AnimateDiff: Animate Your Personalized Text-to-Image Diffusion Models without Specific Tuning

Published on Jul 10, 2023
Abstract

With the advance of text-to-image models (e.g., Stable Diffusion) and corresponding personalization techniques such as DreamBooth and LoRA, everyone can turn their imagination into high-quality images at an affordable cost. This has created great demand for image animation techniques that combine generated static images with motion dynamics. In this report, we propose a practical framework to animate most existing personalized text-to-image models once and for all, saving the effort of model-specific tuning. At the core of the proposed framework is a newly initialized motion modeling module that is inserted into the frozen text-to-image model and trained on video clips to distill reasonable motion priors. Once trained, simply injecting this motion modeling module turns all personalized versions derived from the same base T2I into text-driven models that produce diverse, personalized animated images. We evaluate our framework on several representative public personalized text-to-image models, spanning anime pictures and realistic photographs, and demonstrate that it helps these models generate temporally smooth animation clips while preserving the domain and diversity of their outputs. Code and pre-trained weights will be publicly available at https://animatediff.github.io/.
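The recipe described in the abstract — freeze the pretrained T2I weights, insert newly initialized temporal layers, and train only those layers on video — can be sketched in PyTorch. This is a minimal illustration of the idea, not the authors' implementation: the `MotionModule` class, its zero-initialized output projection, and the `motion_parameters` helper are simplifying assumptions made for this sketch.

```python
import torch
import torch.nn as nn

class MotionModule(nn.Module):
    """Temporal self-attention over the frame axis (illustrative only)."""

    def __init__(self, channels: int, num_heads: int = 8):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Zero-init the output projection so the module acts as an identity
        # at the start of training and does not perturb the frozen T2I model.
        nn.init.zeros_(self.attn.out_proj.weight)
        nn.init.zeros_(self.attn.out_proj.bias)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames, channels, height, width)
        b, f, c, h, w = x.shape
        # Fold spatial positions into the batch so attention runs across frames.
        seq = x.permute(0, 3, 4, 1, 2).reshape(b * h * w, f, c)
        q = self.norm(seq)
        out, _ = self.attn(q, q, q)
        seq = seq + out  # residual: frozen spatial features pass through
        return seq.reshape(b, h, w, f, c).permute(0, 3, 4, 1, 2)

def motion_parameters(t2i_unet: nn.Module, motion_modules: nn.ModuleList):
    """Freeze the pretrained T2I weights; only the motion modules train."""
    for p in t2i_unet.parameters():
        p.requires_grad_(False)
    for p in motion_modules.parameters():
        p.requires_grad_(True)
    return motion_modules.parameters()
```

Because only the motion module carries trainable weights in this scheme, the same trained checkpoint can later be injected into any DreamBooth or LoRA variant of the base T2I model without retraining.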

