FlipSketch: Flipping Static Drawings to Text-Guided Sketch Animations
Abstract
Sketch animations offer a powerful medium for visual storytelling, from simple flip-book doodles to professional studio productions. While traditional animation requires teams of skilled artists to draw key frames and in-between frames, existing automation attempts still demand significant artistic effort through precise motion-path or keyframe specification. We present FlipSketch, a system that brings back the magic of flip-book animation -- just draw your idea and describe how you want it to move! Our approach harnesses motion priors from text-to-video diffusion models, adapting them to generate sketch animations through three key innovations: (i) fine-tuning for sketch-style frame generation, (ii) a reference-frame mechanism that preserves the visual integrity of the input sketch through noise refinement, and (iii) a dual-attention composition that enables fluid motion without losing visual consistency. Unlike constrained vector animations, our raster frames support dynamic sketch transformations, capturing the expressive freedom of traditional animation. The result is an intuitive system that makes sketch animation as simple as doodling and describing, while maintaining the artistic essence of hand-drawn animation.
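The abstract describes an architecture in which each generated frame attends both to its own tokens and to the input sketch, so that motion can evolve while the drawing's identity is preserved. The snippet below is a minimal, hypothetical PyTorch illustration of such a dual-attention composition; the class name, the shared attention module, and the blend weight `alpha` are assumptions made for exposition, not the authors' implementation.

```python
import torch
from torch import nn

class DualReferenceAttention(nn.Module):
    """Attend to the current frame and to a fixed reference sketch, then blend."""
    def __init__(self, dim: int, num_heads: int = 8, alpha: float = 0.5):
        super().__init__()
        # One shared attention module is reused for both branches
        # (a simplifying assumption to keep the example small).
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.alpha = alpha  # blend weight between self- and reference attention

    def forward(self, frame_tokens: torch.Tensor, ref_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (B, N, D) tokens of the frame being denoised
        # ref_tokens:   (B, M, D) tokens of the input sketch (reference frame)
        self_out, _ = self.attn(frame_tokens, frame_tokens, frame_tokens)
        ref_out, _ = self.attn(frame_tokens, ref_tokens, ref_tokens)
        # Compose: motion comes from self-attention, visual identity from the reference.
        return self.alpha * self_out + (1.0 - self.alpha) * ref_out

if __name__ == "__main__":
    block = DualReferenceAttention(dim=64)
    frame = torch.randn(1, 256, 64)  # e.g. 16x16 latent patches per frame
    ref = torch.randn(1, 256, 64)
    print(block(frame, ref).shape)  # torch.Size([1, 256, 64])
```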
Community
We bring back the magic of flip books with digital animations of hand-drawn sketches.
https://hmrishavbandy.github.io/flipsketch-web/
This is an automated message from the Librarian Bot. The following similar papers were recommended by the Semantic Scholar API:
- ReCapture: Generative Video Camera Controls for User-Provided Videos using Masked Video Fine-Tuning (2024)
- MikuDance: Animating Character Art with Mixed Motion Dynamics (2024)
- Tex4D: Zero-shot 4D Scene Texturing with Video Diffusion Models (2024)
- Hallo2: Long-Duration and High-Resolution Audio-Driven Portrait Image Animation (2024)
- Shaping a Stabilized Video by Mitigating Unintended Changes for Concept-Augmented Video Editing (2024)
Models citing this paper: 1