Affordance-Aware Object Insertion via Mask-Aware Dual Diffusion
Abstract
Image composition, a common image editing operation, integrates foreground objects into background scenes. In this paper, we extend the concept of affordance from human-centered image composition tasks to a more general object-scene composition framework, addressing the complex interplay between foreground objects and background scenes. Following the principle of affordance, we define the affordance-aware object insertion task, which aims to seamlessly insert any object into any scene guided by various position prompts. To support this task and address the scarcity of training data, we construct the SAM-FB dataset, which contains over 3 million examples across more than 3,000 object categories. Furthermore, we propose the Mask-Aware Dual Diffusion (MADD) model, which uses a dual-stream architecture to simultaneously denoise the RGB image and the insertion mask. Explicitly modeling the insertion mask in the diffusion process allows MADD to effectively enforce affordance. Extensive experiments show that our method outperforms state-of-the-art approaches and generalizes well to in-the-wild images. Code is available at https://github.com/KaKituken/affordance-aware-any.
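As a rough illustration of the dual-stream idea only (this is not the paper's implementation: all module names, channel sizes, and the feature-fusion scheme below are assumptions, and conditioning on the background scene, foreground object, and position prompt, as well as the diffusion timestep and noise schedule, are omitted for brevity), a denoiser that jointly predicts noise for an RGB latent and an insertion mask might be sketched in PyTorch as:

```python
# Hypothetical sketch of a dual-stream denoiser: two streams (RGB latent and
# insertion mask) are encoded separately, fused so they can exchange features,
# and decoded by separate noise-prediction heads. Details are assumptions.
import torch
import torch.nn as nn

class DualStreamDenoiser(nn.Module):
    def __init__(self, rgb_ch=4, mask_ch=1, hidden=64):
        super().__init__()
        # Separate encoders for the noisy RGB latent and the noisy mask.
        self.rgb_enc = nn.Conv2d(rgb_ch, hidden, 3, padding=1)
        self.mask_enc = nn.Conv2d(mask_ch, hidden, 3, padding=1)
        # A shared fusion block lets the two streams inform each other,
        # so the predicted mask can guide where the object is placed.
        self.fuse = nn.Conv2d(2 * hidden, 2 * hidden, 3, padding=1)
        # Separate heads predict the noise added to each stream.
        self.rgb_head = nn.Conv2d(2 * hidden, rgb_ch, 3, padding=1)
        self.mask_head = nn.Conv2d(2 * hidden, mask_ch, 3, padding=1)

    def forward(self, noisy_rgb, noisy_mask):
        h = torch.cat([self.rgb_enc(noisy_rgb), self.mask_enc(noisy_mask)], dim=1)
        h = torch.relu(self.fuse(h))
        return self.rgb_head(h), self.mask_head(h)  # per-stream noise estimates

# One toy training step: corrupt both targets with noise and regress the noise
# jointly (a real diffusion step would scale by the timestep schedule).
model = DualStreamDenoiser()
rgb, mask = torch.randn(2, 4, 32, 32), torch.randn(2, 1, 32, 32)
eps_rgb, eps_mask = torch.randn_like(rgb), torch.randn_like(mask)
pred_rgb, pred_mask = model(rgb + eps_rgb, mask + eps_mask)
loss = (nn.functional.mse_loss(pred_rgb, eps_rgb)
        + nn.functional.mse_loss(pred_mask, eps_mask))
loss.backward()
```

The intuition behind denoising the mask alongside the image is that the sampler must commit to an object placement consistent with the scene at every step, which is presumably how the explicit mask stream promotes affordance; the actual MADD model would additionally condition each step on the background, the foreground object, and the position prompt.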
Community
Project link: https://kakituken.github.io/affordance-any.github.io/
GitHub code: https://github.com/KaKituken/affordance-aware-any
Paper: https://arxiv.org/abs/2412.14462
The following papers were recommended by the Semantic Scholar API:
- Coherent 3D Scene Diffusion From a Single RGB Image (2024)
- DreamMix: Decoupling Object Attributes for Enhanced Editability in Customized Image Inpainting (2024)
- ObjectMate: A Recurrence Prior for Object Insertion and Subject-Driven Generation (2024)
- MureObjectStitch: Multi-reference Image Composition (2024)
- Add-it: Training-Free Object Insertion in Images With Pretrained Diffusion Models (2024)
- MOVIS: Enhancing Multi-Object Novel View Synthesis for Indoor Scenes (2024)
- SSEditor: Controllable Mask-to-Scene Generation with Diffusion Model (2024)