arxiv:2412.04146

AnyDressing: Customizable Multi-Garment Virtual Dressing via Latent Diffusion Models

Published on Dec 5 · Submitted by Crayon-Shinchan on Dec 6

Abstract

Recent advances in garment-centric image generation from text and image prompts based on diffusion models are impressive. However, existing methods lack support for various combinations of attire, and struggle to preserve garment details while maintaining faithfulness to the text prompts, limiting their performance across diverse scenarios. In this paper, we focus on a new task, i.e., Multi-Garment Virtual Dressing, and we propose a novel AnyDressing method for customizing characters conditioned on any combination of garments and any personalized text prompts. AnyDressing comprises two primary networks named GarmentsNet and DressingNet, which are respectively dedicated to extracting detailed clothing features and generating customized images. Specifically, we propose an efficient and scalable module called Garment-Specific Feature Extractor in GarmentsNet to individually encode garment textures in parallel. This design prevents garment confusion while ensuring network efficiency. Meanwhile, we design an adaptive Dressing-Attention mechanism and a novel Instance-Level Garment Localization Learning strategy in DressingNet to accurately inject multi-garment features into their corresponding regions. This approach efficiently integrates multi-garment texture cues into generated images and further enhances text-image consistency. Additionally, we introduce a Garment-Enhanced Texture Learning strategy to improve the fine-grained texture details of garments. Thanks to our well-crafted design, AnyDressing can serve as a plug-in module to easily integrate with any community control extensions for diffusion models, improving the diversity and controllability of synthesized images. Extensive experiments show that AnyDressing achieves state-of-the-art results.
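
To make the pipeline described above more concrete, below is a minimal, hypothetical PyTorch sketch of the two ideas the abstract names: encoding each garment with its own feature extractor in parallel (the role of GarmentsNet's Garment-Specific Feature Extractor), and injecting each garment's features only into its corresponding image region via mask-gated cross-attention (the role of Dressing-Attention). All class names, shapes, and the gating rule are illustrative assumptions, not the paper's implementation.

```python
# Hypothetical sketch (not the authors' code): parallel per-garment encoding
# plus mask-gated cross-attention injection. Names, shapes, and the gating
# rule are assumptions for illustration only.
import torch
import torch.nn as nn


class GarmentEncoder(nn.Module):
    """Toy stand-in for a garment-specific feature extractor."""
    def __init__(self, in_dim=3, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_dim, feat_dim, 3, stride=2, padding=1), nn.SiLU(),
            nn.Conv2d(feat_dim, feat_dim, 3, stride=2, padding=1),
        )

    def forward(self, garment):                    # (B, 3, H, W)
        f = self.net(garment)                      # (B, C, H/4, W/4)
        return f.flatten(2).transpose(1, 2)        # (B, N_garment_tokens, C)


class MaskedDressingAttention(nn.Module):
    """Cross-attention where each garment's keys/values only influence
    the image tokens inside that garment's region mask."""
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, image_tokens, garment_tokens, region_masks):
        # image_tokens:   (B, N_img, C) latent image tokens
        # garment_tokens: list of (B, N_g, C), one entry per garment
        # region_masks:   list of (B, N_img) booleans, True = inside region
        out = image_tokens
        for g_tokens, mask in zip(garment_tokens, region_masks):
            attended, _ = self.attn(out, g_tokens, g_tokens)
            gate = mask.unsqueeze(-1).to(attended.dtype)  # zero outside region
            out = out + gate * attended
        return out


if __name__ == "__main__":
    B, N_img, C = 1, 256, 64
    encoder = GarmentEncoder(feat_dim=C)
    dressing_attn = MaskedDressingAttention(dim=C)

    garments = [torch.randn(B, 3, 64, 64) for _ in range(2)]       # e.g. top, skirt
    garment_tokens = [encoder(g) for g in garments]                # encoded in parallel
    region_masks = [torch.rand(B, N_img) > 0.5 for _ in garments]  # placeholder regions

    image_tokens = torch.randn(B, N_img, C)
    fused = dressing_attn(image_tokens, garment_tokens, region_masks)
    print(fused.shape)  # torch.Size([1, 256, 64])
```

In the actual method the garment regions would come from the model's localization mechanism and the attention would sit inside the diffusion backbone; the sketch only shows the data flow.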

Community

Paper author · Paper submitter

🔥Made Virtual Dressing Easy Now🔥

We present AnyDressing, a novel method for customizing characters conditioned on any combination of garments and any personalized text prompts.

Highlights:

  • We propose a novel GarmentsNet to efficiently capture multi-garment textures in parallel by employing a core Garment-Specific Feature Extractor.
  • We design a novel DressingNet incorporating a Dressing-Attention mechanism and an Instance-Level Garment Localization Learning strategy to accurately inject multi-garment features into their corresponding regions; a toy sketch of such a localization objective follows the links below.
  • We introduce a Garment-Enhanced Texture Learning strategy to effectively enhance the fine-grained texture details in synthetic images.
  • Our framework can seamlessly integrate with any community control plugins for diffusion models. Both quantitative and qualitative experimental results demonstrate the superiority of our AnyDressing.
  • Code and demo will be released 🚀
    Project page: https://crayon-shinchan.github.io/AnyDressing/
    Code: https://github.com/Crayon-Shinchan/AnyDressing
    Paper: https://arxiv.org/abs/2412.04146
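
As a companion to the DressingNet highlight above, here is a toy, hypothetical sketch of what an instance-level garment localization objective could look like: it penalizes the cross-attention mass a garment places outside its own instance mask, so each garment's features stay confined to their region. The mask source, normalization, and weighting are assumptions for illustration, not the paper's loss.

```python
# Hypothetical sketch (not the authors' loss): keep each garment's attention
# mass inside its own instance mask.
import torch


def localization_loss(attn_maps, instance_masks, eps=1e-6):
    """attn_maps:      (B, G, N_img) attention each garment pays to image tokens
       instance_masks: (B, G, N_img) binary masks, 1 = token belongs to garment g
    """
    attn = attn_maps / (attn_maps.sum(dim=-1, keepdim=True) + eps)  # normalize per garment
    inside = (attn * instance_masks).sum(dim=-1)                    # attention mass in-region
    return (1.0 - inside).mean()                                    # push mass inside the mask


if __name__ == "__main__":
    B, G, N = 2, 3, 256
    attn = torch.rand(B, G, N)
    masks = (torch.rand(B, G, N) > 0.7).float()
    print(localization_loss(attn, masks))  # scalar tensor
```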
