Google AI Researchers Propose a Robust Machine Learning-Based System to Address Motion Blur on Faces in Portrait Photography on Mobile Phones

The blurring of fast-moving subjects has been a persistent problem since modern cameras were invented. Although image deblurring software has been available for some time, it demands a large amount of computational power. To address this, researchers at Google have created an A.I. that deblurs images directly on mobile phones, across diverse lighting conditions and motions.

One of the best features of the A.I. is the instant result: just press the camera shutter, and the A.I. automatically detects and deblurs the face immediately after the photo is taken. No additional user input is required, since the deblurring runs automatically; the A.I. can handle a wide range of facial motions and can correct blur spanning up to hundreds of pixels in a photograph.

Turning to how it works: for single-image motion deblurring, the researchers used a reference-based approach that achieves high-quality results on phones. For multi-frame and video deblurring, they capture bursts of about 7-9 frames from the W (wide) and U.W. (secondary ultrawide) cameras of a mobile phone, apply robust local alignment, and merge the raw frames, avoiding both temporal alignment and its high memory cost. To align the images for fusion, PWC-Net is used to estimate the optical flow between them. In the fusion stage, because the W frame spans roughly half the width of the U.W. frame, the reference image, face mask, and occlusion mask are resized to 1/2 × the size of the source, rather than resizing all inputs to the same resolution, which saves memory.
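The fusion step described above can be sketched in a few lines of NumPy. This is a minimal illustration, not the authors' implementation: the optical flow would come from PWC-Net in the real pipeline (here it is simply passed in as an array), the warp uses nearest-neighbor lookup for brevity, and the function names (`warp_with_flow`, `fuse`, `downscale_half`) are hypothetical.

```python
import numpy as np

def downscale_half(img):
    """Crude 2x downscale by striding, standing in for a proper resize
    of the reference image and masks to 1/2 the source size."""
    return img[::2, ::2]

def warp_with_flow(src, flow):
    """Backward-warp src toward the reference using a per-pixel flow
    field of shape (H, W, 2); nearest-neighbor sampling for simplicity."""
    h, w = src.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    xm = np.clip(np.round(xs + flow[..., 0]).astype(int), 0, w - 1)
    ym = np.clip(np.round(ys + flow[..., 1]).astype(int), 0, h - 1)
    return src[ym, xm]

def fuse(reference, source_warped, face_mask):
    """Blend the sharp, warped source into the blurry reference
    inside the face mask (mask values in [0, 1])."""
    m = face_mask[..., None] if face_mask.ndim == reference.ndim - 1 else face_mask
    return m * source_warped + (1.0 - m) * reference
```

With a zero flow field the warp is the identity, so the fused output equals the source inside the mask and the reference outside it; in practice, the estimated flow aligns the two camera views before blending.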

Although the A.I. works well, it has some limitations that must be overcome before the problem of blurry images is fully solved. One is low-light images: in extremely low light, the quality of the captured image strongly limits how much deblurring can be done. Another is small faces: a face that occupies only a small part of the frame makes it difficult to capture the facial details adequately, hindering the A.I.'s ability to deblur the image. Finally, the system is applicable to only a single face at a time, a restriction of the present technology.

In conclusion, the face deblurring A.I. compares exceptionally well with its counterparts in terms of speed, computational cost, and results. Still, its remaining limitations leave something to be wished for. In all fairness, these are problems that advances in technology can solve, so a better and faster version of this A.I. will likely be available in the future, and it will be a sought-after asset, as every big company will want to build its own A.I. with this work as inspiration.

This article was written as a research summary by Marktechpost Staff based on the research paper 'Face Deblurring using Dual Camera Fusion on Mobile Phones'. All credit for this research goes to the researchers on this project. Check out the paper.



Asif Razzaq is an AI Journalist and Cofounder of Marktechpost, LLC. He is a visionary, entrepreneur and engineer who aspires to use the power of Artificial Intelligence for good.

Asif’s latest venture is the development of an Artificial Intelligence Media Platform (Marktechpost) that will revolutionize how people can find relevant news related to Artificial Intelligence, Data Science and Machine Learning.

Asif was featured by Onalytica in its ‘Who’s Who in AI? (Influential Voices & Brands)’ as one of the ‘Influential Journalists in AI’ (https://onalytica.com/wp-content/uploads/2021/09/Whos-Who-In-AI.pdf). His interview was also featured by Onalytica (https://onalytica.com/blog/posts/interview-with-asif-razzaq/).


