ByteDance Announces DiffPortrait3D: A Novel Zero-Shot View Synthesis AI Method that Extends 2D Stable Diffusion for Generating 3D-Consistent Novel Views Given as Little as a Single Portrait

Large Language Models (LLMs) have recently taken over the Artificial Intelligence (AI) community thanks to their remarkable capabilities and performance. Their success has reached almost every industry through sub-fields of AI, including Natural Language Processing, Natural Language Generation, and Computer Vision. Yet even though computer vision, and diffusion models in particular, have gained significant attention, producing high-fidelity, coherent novel views from limited input is still a challenge.

To address this challenge, a team of researchers from ByteDance has introduced DiffPortrait3D, a conditional diffusion model designed to create photo-realistic, 3D-consistent novel views from a single in-the-wild portrait. From a single unconstrained two-dimensional (2D) portrait, DiffPortrait3D can recover a three-dimensional (3D)-consistent representation of the human face.

The model preserves the subject’s identity and expression while producing realistic facial details from new camera angles. The approach’s primary innovation is its zero-shot capability: it generalizes to a wide range of face portraits, including those with unposed camera views, extreme facial expressions, and diverse artistic styles, without time-consuming per-subject optimization or fine-tuning.

The fundamental component of DiffPortrait3D is the generative prior of 2D diffusion models pre-trained on large-scale image datasets, which serves as the model’s rendering backbone. Denoising is guided by a disentangled attentive control mechanism that handles appearance and camera pose separately. Appearance context from a reference image is injected into the self-attention layers of the frozen UNets that form the core of the diffusion process.
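The paper’s exact implementation is not reproduced here, but the following PyTorch-style sketch illustrates the general reference-attention pattern the paragraph describes: appearance tokens from the reference portrait are appended to the keys and values of a frozen self-attention layer, so the view being denoised can “borrow” the subject’s appearance. The class and variable names (RefInjectedSelfAttention, ref_ctx, and so on) are illustrative assumptions, not DiffPortrait3D’s actual API.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RefInjectedSelfAttention(nn.Module):
    """Illustrative sketch (not the paper's code): self-attention whose keys and
    values are extended with tokens from a reference portrait, while the
    pre-trained projection weights stay frozen."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.heads = heads
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.to_out = nn.Linear(dim, dim)
        # Keep the pre-trained 2D diffusion prior intact: freeze these weights.
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x: torch.Tensor, ref_ctx: torch.Tensor) -> torch.Tensor:
        # x:       (B, N, C) tokens of the view currently being denoised
        # ref_ctx: (B, M, C) appearance tokens extracted from the reference image
        q = self.to_q(x)
        kv = torch.cat([x, ref_ctx], dim=1)   # append appearance tokens to keys/values
        k, v = self.to_k(kv), self.to_v(kv)

        def split(t):
            # (B, T, C) -> (B, heads, T, C // heads)
            return t.unflatten(-1, (self.heads, -1)).transpose(1, 2)

        out = F.scaled_dot_product_attention(split(q), split(k), split(v))
        out = out.transpose(1, 2).flatten(-2)  # merge heads back into channels
        return self.to_out(out)
```

Freezing the pre-trained projections keeps the 2D generative prior untouched while trainable external modules supply the appearance context.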

DiffPortrait3D uses a dedicated conditional control module to change the rendering view. This module interprets the target camera pose from a condition image of a proxy subject captured from that same view, which allows the model to synthesize consistent facial features across different viewing angles.
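A common way to implement this kind of view conditioning is a ControlNet-style side network, and the sketch below follows that pattern: a small encoder turns the pose-condition image into multi-scale residual features that are added to the frozen denoising UNet. The module name, layer widths, and zero-initialized output convolutions are assumptions for illustration, not the paper’s published architecture.

```python
import torch
import torch.nn as nn

class CameraControlModule(nn.Module):
    """Hypothetical ControlNet-style sketch: encode a condition image that depicts
    the target camera view and emit residual feature maps to be added to the
    frozen denoising UNet at matching resolutions."""

    def __init__(self, in_channels: int = 3, widths=(64, 128, 256)):
        super().__init__()
        blocks, zero_convs = [], []
        prev = in_channels
        for w in widths:
            blocks.append(nn.Sequential(
                nn.Conv2d(prev, w, 3, stride=2, padding=1),
                nn.SiLU(),
                nn.Conv2d(w, w, 3, padding=1),
                nn.SiLU(),
            ))
            # Zero-initialized 1x1 convs so training starts as an identity
            # mapping and does not disturb the pre-trained prior.
            zc = nn.Conv2d(w, w, 1)
            nn.init.zeros_(zc.weight)
            nn.init.zeros_(zc.bias)
            zero_convs.append(zc)
            prev = w
        self.blocks = nn.ModuleList(blocks)
        self.zero_convs = nn.ModuleList(zero_convs)

    def forward(self, cond_image: torch.Tensor):
        # cond_image: (B, 3, H, W) image of a proxy subject seen from the target view
        feats, h = [], cond_image
        for block, zc in zip(self.blocks, self.zero_convs):
            h = block(h)
            feats.append(zc(h))  # residuals to inject into the UNet features
        return feats
```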

To further improve view consistency, the researchers also introduce a trainable cross-view attention module. It is especially helpful in situations where extreme facial expressions or unposed camera perspectives would otherwise cause difficulties.
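Conceptually, cross-view attention lets the tokens of every generated view attend to the tokens of all the other views so that appearance stays coherent across camera angles. The minimal sketch below assumes the views are batched together; the class name and tensor layout are illustrative, not taken from the paper.

```python
import torch
import torch.nn as nn

class CrossViewAttention(nn.Module):
    """Hypothetical sketch: tokens of every view attend jointly to the tokens of
    all views generated together, encouraging consistent appearance across
    camera angles."""

    def __init__(self, dim: int, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, views: torch.Tensor) -> torch.Tensor:
        # views: (B, V, N, C) -- V novel views, N tokens each
        b, v, n, c = views.shape
        tokens = views.reshape(b, v * n, c)         # merge views into one sequence
        out, _ = self.attn(tokens, tokens, tokens)  # joint attention over all views
        return views + out.reshape(b, v, n, c)      # residual connection
```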

A 3D-aware noise-generation mechanism has also been included to improve robustness during inference, adding to the overall stability and realism of the synthesized images. The team has evaluated and assessed the performance of DiffPortrait3D on demanding multi-view and in-the-wild benchmarks, reporting state-of-the-art results both qualitatively and quantitatively. The approach demonstrates its efficacy on single-image 3D portrait synthesis by producing realistic, high-quality facial reconstructions across a variety of artistic styles and settings.
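The article does not spell out the noise-generation recipe, so the snippet below is only a generic illustration of one way view-correlated initial noise could be formed: every target view starts from the same coarse, roughly 3D-consistent render noised to an intermediate diffusion step instead of from independent Gaussian noise. The function name and the blending schedule are assumptions, not the paper’s actual mechanism.

```python
import torch

def init_noise_from_coarse_render(coarse_views: torch.Tensor,
                                  t_frac: float = 0.6) -> torch.Tensor:
    """Hypothetical illustration (not the paper's recipe): start each target view
    from a shared coarse, roughly 3D-consistent render noised to an intermediate
    diffusion step, so the denoising trajectories of all views share structure."""
    # coarse_views: (V, C, H, W) latents of the coarse render for each target view
    noise = torch.randn_like(coarse_views)
    # Simple variance-preserving blend controlled by t_frac in [0, 1]:
    # t_frac = 1.0 recovers pure Gaussian noise, 0.0 keeps the coarse render.
    alpha = torch.tensor(1.0 - t_frac)
    return alpha.sqrt() * coarse_views + (1.0 - alpha).sqrt() * noise
```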

The team has shared their primary contributions as follows.

  1. A zero-shot method for creating 3D-consistent novel views from a single portrait by extending 2D Stable Diffusion has been introduced.
  2. The approach achieves impressive results in novel view synthesis, supporting a wide variety of portraits in terms of appearance, expression, pose, and style without requiring laborious fine-tuning.
  3. It uses a disentangled control mechanism for appearance and camera view, enabling effective camera manipulation without compromising the subject’s expression or identity.
  4. The approach combines a cross-view attention module with a 3D-aware noise generation technique to provide long-range consistency across 3D views.

Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.



Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.



