
[Virtual Try-on] OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on

outfitting fusion

A piece of work that looks quite interesting.

Virtual garment try-on

OOTDiffusion: Outfitting Fusion based Latent Diffusion for Controllable Virtual Try-on

Paper link
Code link: https://github.com/levihsu/OOTDiffusion

[Virtual try-on + paper + code] 2403.OOTDiffusion: high-resolution (1024x768) controllable virtual try-on (open-sourced; training code not yet available)


Abstract

We present OOTDiffusion, a novel network architecture for realistic and controllable image-based virtual try-on (VTON). We leverage the power of pretrained latent diffusion models, designing an outfitting UNet to learn the garment detail features. Without a redundant warping process, the garment features are precisely aligned with the target human body via the proposed outfitting fusion in the self-attention layers of the denoising UNet. In order to further enhance the controllability, we introduce outfitting dropout to the training process, which enables us to adjust the strength of the garment features through classifier-free guidance. Our comprehensive experiments on the VITON-HD and Dress Code datasets demonstrate that OOTDiffusion efficiently generates high-quality try-on results for arbitrary human and garment images, which outperforms other VTON methods in both realism and controllability, indicating an impressive breakthrough in virtual try-on. Our source code is available at https://github.com/levihsu/OOTDiffusion.

We present OOTDiffusion, a novel network architecture for realistic and controllable image-based virtual try-on (VTON).

We leverage the power of pretrained latent diffusion models and design an outfitting UNet to learn the garment detail features.

  1. In the self-attention layers of the denoising UNet, the garment features are precisely aligned with the target human body through the proposed outfitting fusion, without any redundant warping process (a minimal sketch follows this list).

  2. To further enhance controllability, we introduce outfitting dropout into the training process, which lets us adjust the strength of the garment features via classifier-free guidance (also sketched below).
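To make the outfitting-fusion idea concrete, here is a minimal PyTorch-style sketch of how garment tokens could be merged into a single self-attention layer of the denoising UNet. The function name and its arguments (`to_qkv`, `to_out`, single-head attention) are simplifying assumptions for illustration, not the authors' actual implementation; the point carried over from the paper is concatenating garment and human features inside self-attention and keeping only the human half of the output.

```python
import torch
import torch.nn.functional as F

def outfitting_fusion_attention(x_human, x_garment, to_qkv, to_out):
    """Sketch of outfitting fusion inside one self-attention layer.

    x_human:   (B, N, C) tokens from the denoising UNet at this layer
    x_garment: (B, N, C) tokens from the outfitting UNet at the same layer
    to_qkv:    assumed nn.Linear(C, 3 * C) projection of the attention layer
    to_out:    assumed nn.Linear(C, C) output projection
    """
    n = x_human.shape[1]

    # Concatenate garment tokens after the human tokens (no warping involved),
    # so human queries can attend directly to garment keys/values.
    x = torch.cat([x_human, x_garment], dim=1)          # (B, 2N, C)

    q, k, v = to_qkv(x).chunk(3, dim=-1)
    out = F.scaled_dot_product_attention(q, k, v)       # joint self-attention

    # Only the human half of the output is kept; the garment half is dropped.
    return to_out(out[:, :n, :])
```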
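And a similarly hedged sketch of outfitting dropout with the matching classifier-free guidance step at inference time. The `denoise` signature, the dropout probability, and the guidance scale are illustrative assumptions; what the paper describes is that the garment conditioning is occasionally dropped during training so that, at sampling time, a guidance scale can dial the garment strength up or down.

```python
import torch

def outfitting_dropout(garment_feats, p_drop=0.1):
    """Training-time sketch: with probability p_drop (value assumed here), replace
    the garment conditioning with zeros so the model also learns an unconditional path."""
    if torch.rand(()).item() < p_drop:
        return torch.zeros_like(garment_feats)
    return garment_feats

def guided_prediction(denoise, x_t, t, garment_feats, guidance_scale=1.5):
    """Inference-time classifier-free guidance over the garment condition.
    `denoise(x_t, t, garment)` is a stand-in for the denoising UNet with
    outfitting fusion; guidance_scale controls how strongly the garment appears."""
    eps_cond = denoise(x_t, t, garment_feats)
    eps_uncond = denoise(x_t, t, torch.zeros_like(garment_feats))
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)
```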

Comprehensive experiments on the VITON-HD and Dress Code datasets demonstrate that OOTDiffusion efficiently generates high-quality try-on results for arbitrary human and garment images, outperforming other VTON methods in both realism and controllability and marking an impressive breakthrough in virtual try-on.
