Researchers at Stanford and NVIDIA have developed an Artificial Intelligence (AI) model, EG3D, that can generate random, high-resolution images of faces and other objects together with their underlying geometric structure.

Source: https://github.com/NVlabs/eg3d

AI models have recently advanced to the point that users will soon be able to construct and alter nearly photorealistic three-dimensional scenes directly from their laptops. Because these technologies make it simple to generate hyperrealistic avatars, they stand to change how artists working on video games and film CGI approach their work. AI systems have been able to create realistic 2D images for some time, but 3D scenes have proven more challenging because of the enormous computing power required. EG3D, created by a team of Stanford and NVIDIA researchers, can produce random high-resolution images of faces and other objects along with an underlying geometric structure, and it is one of the first 3D generative models to approach photorealistic rendering quality.

EG3D and its forerunners create images with a popular machine learning method known as a generative adversarial network (GAN). These systems pit two neural networks against one another: one network generates images while a second judges how realistic they are, and this process is repeated until the generated output becomes convincing (a minimal sketch of this loop is shown below). To obtain 3D structure, the researchers combined features of existing high-resolution 2D GANs with a component that lifts the generated images into 3D space. This two-part architecture accomplishes two goals at once: it is computationally efficient enough to run in near real time on a laptop, and it remains compatible with existing 2D GAN architectures while producing detailed 3D geometry.
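To make the adversarial setup concrete, here is a minimal, hedged sketch of a single GAN training step in PyTorch. The toy fully connected generator and discriminator, image size, and hyperparameters are illustrative assumptions only; they are not the EG3D architecture, which uses a much more elaborate 3D-aware generator and neural renderer.

```python
# Minimal sketch of the adversarial training loop behind GAN-based image
# generators such as EG3D. All network sizes and hyperparameters below are
# illustrative placeholders, not the actual EG3D implementation.
import torch
import torch.nn as nn

latent_dim, image_dim = 64, 32 * 32 * 3  # assumed toy sizes

# Generator: maps a random latent code to a (flattened) image in [-1, 1].
generator = nn.Sequential(
    nn.Linear(latent_dim, 512), nn.ReLU(),
    nn.Linear(512, image_dim), nn.Tanh(),
)

# Discriminator: scores how "real" an image looks (higher = more real).
discriminator = nn.Sequential(
    nn.Linear(image_dim, 512), nn.LeakyReLU(0.2),
    nn.Linear(512, 1),
)

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def train_step(real_images: torch.Tensor):
    batch = real_images.size(0)
    z = torch.randn(batch, latent_dim)
    fake_images = generator(z)

    # 1) Discriminator update: push real scores toward 1, fake scores toward 0.
    d_loss = bce(discriminator(real_images), torch.ones(batch, 1)) + \
             bce(discriminator(fake_images.detach()), torch.zeros(batch, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Generator update: try to fool the discriminator into scoring fakes as real.
    g_loss = bce(discriminator(fake_images), torch.ones(batch, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()

# Toy usage with random "real" images standing in for a photo dataset.
print(train_step(torch.rand(8, image_dim) * 2 - 1))
```

Repeating this two-player step many times is what drives the generator toward images the discriminator can no longer tell apart from real photographs.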

Although tools like EG3D can create nearly lifelike 3D images, those images remain difficult to edit in design software: the result is something that can be viewed, but it is unclear how the GAN produced it. A machine learning model dubbed GIRAFFE HD, developed by researchers at the University of Wisconsin–Madison, can help in this situation. The model disentangles manipulable characteristics of 3D images, allowing the user to control numerous elements, including shape, color, and the scene or background of the image. GIRAFFE HD was trained on large collections of photographs; it searches for latent factors in the image so that these different aspects behave like controllable variables, as in the sketch below. By editing these controllable factors in generated 3D images, users would in the future be able to precisely alter characteristics to produce desired scenes. EG3D and GIRAFFE HD are part of a broader trend of employing AI to generate 3D imagery. However, there is still much work to be done on algorithmic bias and broader applicability: these models are still constrained by the kind of training data they are fed, and research to address these issues is ongoing.
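The sketch below illustrates, under stated assumptions, how disentangled latent factors could be exposed as controls in the spirit of GIRAFFE HD. The `GiraffeHDGenerator` class, its argument names, and the latent sizes are hypothetical placeholders for illustration, not the released model's API.

```python
# Hedged sketch of controllable latent factors for a 3D-aware generator.
# The class and its interface are hypothetical, not the actual GIRAFFE HD code.
import torch

class GiraffeHDGenerator(torch.nn.Module):
    """Placeholder standing in for a pretrained controllable 3D generator."""
    def forward(self, z_shape, z_color, z_background, camera_pose):
        # A real model would render an image conditioned on each factor;
        # here we simply return a dummy tensor with image dimensions.
        return torch.zeros(z_shape.size(0), 3, 256, 256)

gen = GiraffeHDGenerator()

# Independent latent codes play the role of the "controllable variables":
z_shape = torch.randn(1, 128)        # object geometry
z_color = torch.randn(1, 128)        # appearance / color
z_background = torch.randn(1, 128)   # scene background
pose = torch.tensor([[0.0, 0.3, 2.0]])  # illustrative camera parameters

image_a = gen(z_shape, z_color, z_background, pose)

# With a real pretrained model, resampling a single factor (here the color
# code) while keeping the others fixed would change only that attribute
# of the generated scene.
image_b = gen(z_shape, torch.randn(1, 128), z_background, pose)
```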

Though it is still in its early stages, this research opens up possibilities for more realistic 3D images and models. It will be interesting to see where this line of research goes and how it can be applied in the future. I would love to hear your thoughts on this new approach in our ML Subreddit.


This article is written as a summary by Marktechpost staff based on the paper 'Efficient Geometry-aware 3D Generative Adversarial Networks'. All credit for this research goes to the researchers on this project. Check out the paper, GitHub repo, and reference article.


