Video try-on is a promising area with tremendous real-world potential. Previous research on video try-on has primarily focused on transferring product clothing images to videos with simple human poses, and performs poorly under complex movements. To better preserve clothing details, those approaches rely on an additional garment encoder, which increases computational cost. The primary challenges in this domain are twofold: (1) leveraging the garment encoder's capabilities for video try-on while lowering computational requirements; (2) ensuring temporal consistency in the synthesis of human body parts, especially during rapid movements. To tackle these issues, we propose a novel video try-on framework based on the Diffusion Transformer (DiT), named Dynamic Try-On. To reduce computational overhead, we adopt a straightforward approach: the DiT backbone itself serves as the garment encoder, and a dynamic feature fusion module stores and integrates the garment features. To ensure temporal consistency of human body parts, we introduce a limb-aware dynamic attention module that forces the DiT backbone to focus on the regions of human limbs during the denoising process. Extensive experiments demonstrate the superiority of Dynamic Try-On in generating stable and smooth try-on results, even for videos featuring complicated human postures.
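As an illustration of the limb-aware attention idea, the following is a minimal sketch (not the released implementation) of how a pose-derived limb mask could bias attention logits so that tokens on human limbs receive more weight during denoising. The function name `limb_aware_attention` and the parameters `limb_mask` and `bias_scale` are hypothetical placeholders; the paper's exact formulation may differ.

```python
# Hypothetical sketch of limb-aware attention biasing; not the authors' code.
import torch

def limb_aware_attention(q, k, v, limb_mask, bias_scale=1.0):
    """q, k, v: (B, heads, N, d) token features.
    limb_mask: (B, N) in [0, 1], 1 where a token falls on a human limb
    (e.g. derived from pose keypoints)."""
    scale = q.shape[-1] ** -0.5
    attn = (q @ k.transpose(-2, -1)) * scale           # (B, heads, N, N) logits
    # Additively bias logits toward limb tokens (applied over the key axis).
    attn = attn + bias_scale * limb_mask[:, None, None, :]
    attn = attn.softmax(dim=-1)
    return attn @ v                                     # (B, heads, N, d)

# Toy usage with random tensors.
B, H, N, D = 1, 4, 16, 32
q, k, v = (torch.randn(B, H, N, D) for _ in range(3))
limb_mask = (torch.rand(B, N) > 0.5).float()
out = limb_aware_attention(q, k, v, limb_mask)
print(out.shape)  # torch.Size([1, 4, 16, 32])
```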
Overview of the proposed Dynamic Try-On. The framework consists of three components: (1) Denoising DiT: generates the latent representation of the video content and extracts garment features through a chain of ST-DiT blocks. (2) ID Encoder: produces feature residuals for the Denoising DiT to preserve the reference person's identity, pose, and background. (3) Dynamic Feature Fusion Module: stores garment features and delivers them into the Denoising DiT, recovering detailed clothing textures in the generated try-on video.
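To make the fusion step concrete, here is a minimal sketch, under stated assumptions, of a module that caches garment features extracted by the backbone and injects them into the denoising stream via cross-attention with a zero-initialized gate. The class name `DynamicFeatureFusion` and its `store`/`forward` interface are assumptions for illustration, not the paper's API.

```python
# Hypothetical sketch of a garment-feature fusion module; not the released code.
import torch
import torch.nn as nn

class DynamicFeatureFusion(nn.Module):
    def __init__(self, dim, heads=8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.gate = nn.Parameter(torch.zeros(1))  # zero-init: fusion starts as identity
        self.garment_cache = None                 # filled during the garment pass

    def store(self, garment_tokens):
        # garment_tokens: (B, N_g, dim), features produced by the DiT backbone.
        self.garment_cache = garment_tokens

    def forward(self, x):
        # x: (B, N, dim) denoising tokens; fuse cached garment features into them.
        if self.garment_cache is None:
            return x
        fused, _ = self.cross_attn(x, self.garment_cache, self.garment_cache)
        return x + torch.tanh(self.gate) * fused

# Toy usage.
fusion = DynamicFeatureFusion(dim=64)
fusion.store(torch.randn(2, 77, 64))   # garment features from the backbone pass
out = fusion(torch.randn(2, 256, 64))  # denoising tokens
print(out.shape)                        # torch.Size([2, 256, 64])
```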
@article{zheng2024dynamictryontamingvideo,
  title   = {Dynamic Try-On: Taming Video Virtual Try-on with Dynamic Attention Mechanism},
  author  = {Zheng, Jun and Wang, Jing and Zhao, Fuwei and Zhang, Xujie and Liang, Xiaodan},
  journal = {arXiv preprint},
  year    = {2024}
}