Researchers from UT Austin Introduce PSLD: An AI Method that Uses Stable Diffusion to Solve Linear Inverse Problems Without Any Extra Training
Approaches to solving inverse problems fall into two categories: supervised techniques, in which a restoration model is trained specifically for the task, and unsupervised methods, in which a generative model's learned prior guides the restoration process.
A significant advance in generative modeling is the emergence of diffusion models. Given their apparent efficacy, researchers have begun exploring their potential for solving inverse problems. Because (linear and non-linear) inverse problems are difficult to address with diffusion models directly, a number of approximation algorithms have been developed. These techniques use pretrained diffusion models as flexible priors over the data distribution to efficiently tackle problems such as inpainting, deblurring, and super-resolution.
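To make the idea of "a pretrained diffusion model as a prior" concrete, here is a minimal, illustrative sketch of DPS-style measurement guidance (not the authors' code). The names `denoiser`, `A` (the linear forward operator), `unconditional_reverse_update`, and the schedule value `alpha_bar_t` are hypothetical placeholders standing in for a pretrained pixel-space diffusion setup.

```python
import torch

def dps_step(x_t, y, t, denoiser, A, alpha_bar_t, step_size=1.0):
    """One reverse-diffusion step nudged toward consistency with measurements y.

    alpha_bar_t is assumed to be a scalar tensor from the noise schedule.
    """
    x_t = x_t.detach().requires_grad_(True)

    # Predicted noise -> Tweedie estimate of the clean image x0.
    eps = denoiser(x_t, t)
    x0_hat = (x_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()

    # Measurement-consistency loss ||y - A(x0_hat)||^2 for a linear operator A.
    loss = ((y - A(x0_hat)) ** 2).sum()
    grad = torch.autograd.grad(loss, x_t)[0]

    # Ordinary (unconditional) reverse update, then a gradient nudge toward the data.
    x_prev = unconditional_reverse_update(x_t, eps, t)  # placeholder DDPM/DDIM step
    return x_prev - step_size * grad
```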
State-of-the-art foundation models, such as Stable Diffusion, are powered by Latent Diffusion Models (LDMs). These models have enabled applications across many data modalities, including images, videos, audio, and medical-domain distributions (MRI and proteins). However, none of the existing inverse-problem-solving algorithms are compatible with Latent Diffusion Models: to use a base model such as Stable Diffusion on an inverse problem, fine-tuning must be performed for each task of interest.
Recent research by a University of Texas at Austin team proposes the first framework for using pretrained latent diffusion models to address generic inverse problems. Their core idea for extending Diffusion Posterior Sampling (DPS) is an additional gradient update step that steers the diffusion process toward latents for which the decoding-encoding map is not lossy. Their algorithm, Posterior Sampling with Latent Diffusion (PSLD), beats prior approaches without any fine-tuning by harnessing the power of readily available foundation models across a wide variety of tasks.
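The sketch below (again illustrative, not the released PSLD implementation) lifts the same guidance idea into latent space and adds the extra "gluing" gradient described above, which keeps the sampled latent close to one whose decode-encode round trip is approximately lossless. The names `denoiser`, `encoder`, `decoder`, `A`, `unconditional_reverse_update`, and the weights `eta` and `gamma` are hypothetical placeholders.

```python
import torch

def psld_step(z_t, y, t, denoiser, encoder, decoder, A,
              alpha_bar_t, eta=1.0, gamma=0.1):
    """One guided reverse step in latent space with a decode-encode 'gluing' term."""
    z_t = z_t.detach().requires_grad_(True)

    # Tweedie estimate of the clean latent z0 from the latent diffusion model.
    eps = denoiser(z_t, t)
    z0_hat = (z_t - (1 - alpha_bar_t).sqrt() * eps) / alpha_bar_t.sqrt()

    # Decode to image space and measure data fidelity against the observations y.
    x0_hat = decoder(z0_hat)
    meas_loss = ((y - A(x0_hat)) ** 2).sum()

    # "Gluing" objective: penalize latents whose decode->encode round trip drifts,
    # steering sampling toward latents the autoencoder represents faithfully.
    glue_loss = ((encoder(x0_hat) - z0_hat) ** 2).sum()

    grad = torch.autograd.grad(eta * meas_loss + gamma * glue_loss, z_t)[0]
    z_prev = unconditional_reverse_update(z_t, eps, t)  # placeholder DDPM/DDIM step
    return z_prev - grad
```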
The researchers evaluate PSLD against the state-of-the-art DPS algorithm on a variety of image restoration and enhancement tasks, including random inpainting, box inpainting, denoising, Gaussian deblurring, motion deblurring, arbitrary masking, and super-resolution. For their analysis, the team used Stable Diffusion trained on the LAION dataset, and the results showed state-of-the-art performance.
The researchers also note that the algorithm inevitably inherits the biases of this dataset and the underlying model. Since the proposed technique is compatible with any LDM, the team believes these issues will be mitigated by new foundation models trained on improved datasets. They also highlight that applying latent-based foundation models to non-linear inverse problems has not yet been investigated; because the approach builds on the DPS approximation, they hope it will generalize to that setting.
Check out the Paper, Demo, and GitHub link.
Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies spanning the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone's life easier in today's evolving world.