DI-PCG: Diffusion-based Efficient Inverse Procedural Content Generation for High-quality 3D Asset Creation
Abstract
Procedural Content Generation (PCG) is powerful for creating high-quality 3D content, yet controlling it to produce desired shapes is difficult and often requires extensive parameter tuning. Inverse Procedural Content Generation aims to automatically find the best parameters for a given input condition. However, existing sampling-based and neural-network-based methods still suffer from numerous sampling iterations or limited controllability. In this work, we present DI-PCG, a novel and efficient method for inverse PCG from general image conditions. At its core is a lightweight diffusion transformer model, in which PCG parameters are directly treated as the denoising target and the observed images serve as conditions to control parameter generation. DI-PCG is efficient and effective: with only 7.6M network parameters and 30 GPU hours to train, it accurately recovers parameters and generalizes well to in-the-wild images. Quantitative and qualitative experimental results validate the effectiveness of DI-PCG on inverse PCG and image-to-3D generation tasks. DI-PCG offers a promising approach for efficient inverse PCG and represents a valuable step toward a 3D generation paradigm that models how to construct a 3D asset using parametric models.
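The core idea above can be sketched as a standard diffusion training step in which the clean signal is the procedural generator's (normalized) parameter vector and the condition is an image embedding. This is a minimal illustrative sketch, not the paper's implementation: the network stub, shapes, noise schedule, and the image-feature condition are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 1000                                   # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)         # linear noise schedule (assumed)
alphas_bar = np.cumprod(1.0 - betas)       # cumulative signal-retention terms

def q_sample(x0, t, noise):
    """Forward process: noise the clean PCG parameter vector x0 at step t."""
    return np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * noise

def denoiser(x_t, t, cond):
    """Stand-in for the lightweight diffusion transformer: predicts the noise
    from (noisy parameters, timestep, image condition). A real model would be
    a learned transformer; this stub just returns zeros."""
    return 0.0 * x_t + 0.0 * cond

# One training step: sample a timestep, noise the normalized PCG parameters,
# and regress the predicted noise against the true noise (MSE).
x0 = rng.uniform(-1.0, 1.0, size=16)       # normalized generator parameters
cond = rng.normal(size=16)                 # image embedding (hypothetical)
t = int(rng.integers(0, T))
noise = rng.normal(size=16)
x_t = q_sample(x0, t, noise)
loss = np.mean((denoiser(x_t, t, cond) - noise) ** 2)
```

At inference, the same network would be run in reverse from pure noise, conditioned on the input image, and the resulting parameter vector fed to the procedural generator to produce the 3D mesh.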
Community
DI-PCG is a diffusion model that directly generates a procedural generator's parameters from a single image, resulting in high-quality 3D meshes.
Project page: https://thuzhaowang.github.io/projects/DI-PCG/
Huggingface demo: https://huggingface.co/spaces/TencentARC/DI-PCG
Github code: https://github.com/TencentARC/DI-PCG
Related papers recommended by the Semantic Scholar API:
- DSplats: 3D Generation by Denoising Splats-Based Multiview Diffusion Models (2024)
- Controllable Shadow Generation with Single-Step Diffusion Models from Synthetic Data (2024)
- Structured 3D Latents for Scalable and Versatile 3D Generation (2024)
- Boosting 3D object generation through PBR materials (2024)
- 3D MedDiffusion: A 3D Medical Diffusion Model for Controllable and High-quality Medical Image Generation (2024)
- TexGaussian: Generating High-quality PBR Material via Octree-based 3D Gaussian Splatting (2024)
- MaterialPicker: Multi-Modal Material Generation with Diffusion Transformers (2024)