Trans4D: Realistic Geometry-Aware Transition for Compositional Text-to-4D Synthesis
Abstract
Recent advances in diffusion models have demonstrated exceptional capabilities in image and video generation, further improving the effectiveness of 4D synthesis. Existing 4D generation methods can generate high-quality 4D objects or scenes from user-friendly conditions, benefiting the gaming and video industries. However, these methods struggle to synthesize the significant object deformations required by complex 4D transitions and interactions within scenes. To address this challenge, we propose Trans4D, a novel text-to-4D synthesis framework that enables realistic complex scene transitions. Specifically, we first use multi-modal large language models (MLLMs) to produce a physics-aware scene description for 4D scene initialization and effective transition-timing planning. We then propose a geometry-aware 4D transition network to realize complex scene-level 4D transitions that follow this plan, including expressive geometric object deformation. Extensive experiments demonstrate that Trans4D consistently outperforms existing state-of-the-art methods in generating 4D scenes with accurate and high-quality transitions, validating its effectiveness. Code: https://github.com/YangLing0818/Trans4D
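The abstract does not specify the internals of the geometry-aware transition network. As an illustrative assumption only (the class name `TransitionNet`, the 4-input MLP shape, and the visibility-weight formulation are all hypothetical, not from the paper), a minimal per-point transition predictor could map a Gaussian center and a normalized timestep to a visibility weight, which then gates that point's opacity during a planned transition window:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class TransitionNet:
    """Hypothetical per-point transition predictor: maps a 3D point plus a
    normalized timestep t in [0, 1] to a visibility weight in (0, 1)."""

    def __init__(self, hidden=64):
        # Untrained random weights; a real model would be optimized
        # against the MLLM-planned transition timing.
        self.w1 = rng.normal(scale=0.5, size=(4, hidden))
        self.w2 = rng.normal(scale=0.5, size=(hidden, 1))

    def __call__(self, xyz, t):
        # xyz: (N, 3) Gaussian centers; t: scalar timestep.
        x = np.concatenate([xyz, np.full((xyz.shape[0], 1), t)], axis=1)
        h = np.maximum(x @ self.w1, 0.0)   # ReLU hidden layer
        return sigmoid(h @ self.w2)        # (N, 1) visibility weights

# Usage: gate per-point opacities mid-transition.
net = TransitionNet()
points = rng.normal(size=(1024, 3))
vis = net(points, 0.5)
gated_opacity = rng.random((1024, 1)) * vis
```

This sketch only conveys the idea of time-conditioned per-point visibility; the actual network architecture and training objective are described in the paper itself.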
Community
This paper proposes Trans4D, a new text-to-4D scene generation method that achieves high-quality 4D scene synthesis with plausible transitions.
This is an automated message from the Librarian Bot. I found the following papers similar to this paper.
The following papers were recommended by the Semantic Scholar API
- SceneDreamer360: Text-Driven 3D-Consistent Scene Generation with Panoramic Gaussian Splatting (2024)
- Compositional 3D-aware Video Generation with LLM Director (2024)
- MVGaussian: High-Fidelity text-to-3D Content Generation with Multi-View Guidance and Surface Densification (2024)
- Flex3D: Feed-Forward 3D Generation With Flexible Reconstruction Model And Input View Curation (2024)
- ReconX: Reconstruct Any Scene from Sparse Views with Video Diffusion Model (2024)