Fashion photography is ubiquitous on online platforms, including social media and e-commerce websites. However, static photographs are limited in the information they can convey about a garment, particularly how it fits and moves on a person's body.
In contrast, fashion videos offer a more complete and immersive experience, showcasing the fabric’s texture, the way it drapes and flows, and other essential details that are difficult to capture through still photos.
Fashion videos can be an invaluable resource for consumers looking to make informed purchasing decisions. They offer a more in-depth look at clothes in action, allowing shoppers to better assess whether a garment suits their needs and preferences. Despite these benefits, fashion videos remain relatively uncommon, and many brands and retailers still rely primarily on photography to showcase their products. As demand for more engaging and informative content grows, production of high-quality fashion videos is likely to increase across the industry.
A new way to address these issues comes from Artificial Intelligence (AI). It is called DreamPose, and it represents a novel approach to transforming fashion photographs into lifelike, animated videos.
This method involves a diffusion video synthesis model built upon Stable Diffusion. Given one or more images of a person and a corresponding pose sequence, DreamPose generates a realistic, high-fidelity video of the subject in motion. An overview of its workflow is depicted below.
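At a high level, inference proceeds frame by frame: every output frame is conditioned on the same subject image, while the pose conditioning changes from frame to frame. The sketch below illustrates this loop; the `dreampose` object and its `generate_frame` method are hypothetical placeholders for the paper's pipeline, not the authors' actual API.

```python
# Minimal sketch of the image-and-pose-to-video loop described above.
# The `dreampose` model object and `generate_frame` method are hypothetical
# stand-ins for the paper's pipeline, not its actual API.
from PIL import Image
import torch

def synthesize_video(dreampose, subject: Image.Image,
                     pose_sequence: list[torch.Tensor]) -> list[Image.Image]:
    """Render one frame per target pose. Conditioning every frame on the same
    subject image keeps identity and clothing consistent across the video."""
    return [dreampose.generate_frame(image=subject, pose=pose)
            for pose in pose_sequence]
```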
Generating high-quality, realistic videos from images poses several challenges. While image diffusion models have demonstrated impressive quality and fidelity, the same cannot be said for video diffusion models, which are often limited to simple motion or cartoon-like visuals. Existing video diffusion models also suffer from poor temporal consistency, motion jitter, lack of realism, and limited control over the motion in the target video. These limitations arise partly because existing models are conditioned mainly on text rather than on other signals, such as motion, that could provide finer control.
In contrast, DreamPose leverages an image-and-pose conditioning scheme to achieve greater appearance fidelity and frame-to-frame consistency. This approach overcomes many of the shortcomings of existing video diffusion models and enables the production of high-quality videos that accurately capture the motion and appearance of the input subject.
The model is fine-tuned from a pre-trained image diffusion model that is highly effective at modeling the distribution of natural images. Using such a model, the task of animating images can be simplified by identifying the subspace of natural images consistent with the conditioning signals. To achieve this, the Stable Diffusion architecture has been modified, specifically by redesigning the encoder and conditioning mechanisms to support aligned-image and unaligned-pose conditioning.
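To make the pose half of this conditioning concrete, one plausible implementation is to widen the UNet's first convolution so that pose maps can be concatenated channel-wise with the noisy latents, zero-initializing the new weights so the modified network initially behaves like the pre-trained one. The sketch below, written against a PyTorch `Conv2d` as used in diffusers-style Stable Diffusion UNets, illustrates this surgery under those assumptions; it is not the authors' released code.

```python
import torch
import torch.nn as nn

def expand_unet_input(conv_in: nn.Conv2d, extra_channels: int) -> nn.Conv2d:
    """Widen the UNet's first conv so pose maps can be concatenated with the
    noisy latents. New weights start at zero, so the modified network
    initially ignores the pose and reproduces the pre-trained behavior."""
    new_conv = nn.Conv2d(
        conv_in.in_channels + extra_channels,
        conv_in.out_channels,
        kernel_size=conv_in.kernel_size,
        stride=conv_in.stride,
        padding=conv_in.padding,
    )
    with torch.no_grad():
        new_conv.weight.zero_()
        new_conv.weight[:, :conv_in.in_channels] = conv_in.weight  # keep old
        new_conv.bias.copy_(conv_in.bias)
    return new_conv
```

The image half of the conditioning can then reuse Stable Diffusion's cross-attention pathway, feeding embeddings of the subject image where text embeddings would normally go.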
Moreover, the method includes a two-stage fine-tuning process in which the UNet and VAE components are adapted using one or more input images. This approach optimizes the model for generating realistic, high-quality videos that accurately capture the appearance and motion of the input subject.
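The sketch below illustrates how such a two-stage schedule might look, assuming diffusers-style UNet, VAE, and noise-scheduler objects. The data loaders, the `embed` callable (mapping the subject image to cross-attention embeddings), and the step counts are all placeholders, and the loops are a simplified sketch rather than the authors' actual training script.

```python
import torch
import torch.nn.functional as F

def denoising_loss(unet, scheduler, latents, pose_maps, image_cond):
    """Standard diffusion objective: predict the noise added to the latents,
    conditioned on pose maps (extra input channels) and image embeddings."""
    noise = torch.randn_like(latents)
    t = torch.randint(0, scheduler.config.num_train_timesteps,
                      (latents.size(0),), device=latents.device)
    noisy = scheduler.add_noise(latents, noise, t)
    pred = unet(torch.cat([noisy, pose_maps], dim=1), t,
                encoder_hidden_states=image_cond).sample
    return F.mse_loss(pred, noise)

def two_stage_finetune(unet, vae, embed, scheduler,
                       dataset_loader, subject_loader,
                       subject_steps=500, decoder_steps=1500):
    """Sketch of the two-stage schedule: a general pass over the full dataset,
    then subject-specific refinement of the UNet and the VAE decoder."""
    opt = torch.optim.AdamW(unet.parameters(), lr=1e-5)

    # Stage 1: fine-tune the UNet on the whole fashion-video dataset so it
    # learns pose-conditioned frame synthesis in general.
    for image, poses, target in dataset_loader:
        latents = vae.encode(target).latent_dist.sample()
        loss = denoising_loss(unet, scheduler, latents, poses, embed(image))
        loss.backward()
        opt.step()
        opt.zero_grad()

    # Stage 2a: subject-specific UNet fine-tuning on the input image(s),
    # sharpening the particular garment and identity.
    for _ in range(subject_steps):
        for image, poses, target in subject_loader:
            latents = vae.encode(target).latent_dist.sample()
            loss = denoising_loss(unet, scheduler, latents, poses,
                                  embed(image))
            loss.backward()
            opt.step()
            opt.zero_grad()

    # Stage 2b: refine only the VAE decoder, so decoded frames keep fine
    # texture detail that a generic decoder tends to blur.
    dec_opt = torch.optim.AdamW(vae.decoder.parameters(), lr=1e-5)
    for _ in range(decoder_steps):
        for image, _, _ in subject_loader:
            recon = vae.decode(vae.encode(image).latent_dist.sample()).sample
            loss = F.mse_loss(recon, image)
            loss.backward()
            dec_opt.step()
            dec_opt.zero_grad()
```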
Some results reported by the authors of this work are illustrated in the figure below, which also includes a comparison between DreamPose and state-of-the-art techniques.
This was a summary of DreamPose, a novel AI framework for synthesizing photorealistic fashion videos from a single input image. If you are interested, you can learn more about this technique through the links below.
Check out the Research Paper, Code, and Project.
Daniele Lorenzi received his M.Sc. in ICT for Internet and Multimedia Engineering in 2021 from the University of Padua, Italy. He is a Ph.D. candidate at the Institute of Information Technology (ITEC) at the Alpen-Adria-Universität (AAU) Klagenfurt. He is currently working in the Christian Doppler Laboratory ATHENA, and his research interests include adaptive video streaming, immersive media, machine learning, and QoS/QoE evaluation.