Another gem from our lab — DGInStyle! We use Stable Diffusion to generate semantic segmentation data for autonomous driving and train domain-generalizable networks.
📟 Website: https://dginstyle.github.io
🧾 Paper: https://arxiv.org/abs/2312.03048
🤗 Hugging Face Paper: DGInStyle: Domain-Generalizable Semantic Segmentation with Image Diffusion Models and Stylized Semantic Control (2312.03048)
🤗 Hugging Face Model: yurujaja/DGInStyle
🐙 Code: https://github.com/yurujaja/DGInStyle
In a nutshell, our pipeline overcomes the resolution loss of the Stable Diffusion latent space and the style bias of ControlNet, as shown in the attached figures. This lets us generate sufficiently high-quality pairs of images and semantic masks to train domain-generalizable semantic segmentation networks.
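For a concrete picture of the underlying idea of mask-conditioned generation, here is a minimal, illustrative sketch using a generic diffusers ControlNet segmentation checkpoint. It is not our DGInStyle weights or training pipeline (see the repo for those, including the resolution and style-bias fixes mentioned above), and the file names are placeholders.

```python
# Illustrative sketch only: generic mask-conditioned Stable Diffusion via diffusers.
# The actual DGInStyle pipeline and weights are at https://github.com/yurujaja/DGInStyle.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from PIL import Image

# Generic ControlNet trained on segmentation maps (not the DGInStyle checkpoint).
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-seg", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# A color-coded semantic mask of a driving scene (placeholder file name).
mask = Image.open("semantic_mask.png").convert("RGB")

# Generate an image whose layout follows the mask; the prompt steers the style.
image = pipe(
    prompt="a street scene in heavy rain, photorealistic",
    image=mask,
    num_inference_steps=30,
).images[0]
image.save("generated_scene.png")
```

The generated image can then be paired with the input mask as a training sample; DGInStyle builds on this idea while addressing the latent-space resolution loss and ControlNet style bias.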
Team: Yuru Jia ( @yurujaja ), Lukas Hoyer, Shengyu Huang, Tianfu Wang ( @Tianfwang ), Luc Van Gool, Konrad Schindler, and Anton Obukhov ( @toshas ).