Meet Drag Your GAN (DragGAN): Interactive Point-based Manipulation on the Generative Image Manifold

Deep generative models such as generative adversarial networks (GANs) have achieved unprecedented success at synthesizing photorealistic images from random inputs. For learning-based image synthesis to be useful in real-world applications, however, users need control over the synthesized visual content. For instance, social media users may want to change the position, shape, expression, or body pose of a person or animal in a casual photograph; professional media editing and movie pre-visualization may call for quickly sketching out scenes with specific layouts; and car designers may want to modify the shape of their designs interactively.

To satisfy these diverse user needs, an ideal controllable image synthesis technique should have the following properties. 1) Flexibility: it should be able to control many spatial attributes, such as the position, pose, shape, expression, and layout of the generated objects or animals; 2) Precision: it should control these spatial attributes with high accuracy; 3) Generality: it should apply to a variety of object categories without being restricted to a single one. Earlier works typically satisfy only one or two of these properties; this work aims to meet all three. Most prior approaches gain controllability through supervised learning, training GANs on manually annotated data or prior 3D models.

Figure 1: Users of DragGAN can “drag” the content of any GAN-generated image. Users only need to click a few handle points (red) and target points (blue), and the method precisely moves the handle points to the corresponding target locations. Users can optionally draw a mask over the flexible region (brighter area), keeping the rest of the image fixed. This flexible point-based manipulation enables control over many spatial attributes, such as pose, shape, expression, and layout, across diverse object categories.

Text-guided image synthesis has attracted attention recently, but such methods often control only a few spatial attributes, give the user little control over the editing process, and may fail to generalize to new object categories. Text guidance also lacks the flexibility and precision needed for modifying spatial attributes; for instance, it cannot be used to move an object by a specific number of pixels. In this study, the authors investigate a powerful yet underexplored form of interactive point-based manipulation to obtain flexible, precise, and general controllability of GANs. Users click any number of handle points and target points on the image, and the goal is to drive the handle points toward their corresponding target points.
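To make the interaction concrete, here is a minimal sketch (not the authors' code) of how such an edit request could be represented: each handle point is paired with a target point, and an optional mask marks the region that is allowed to change. All names, such as `DragEdit`, are hypothetical.

```python
# Hypothetical representation of a point-based edit request.
from dataclasses import dataclass
from typing import List, Optional, Sequence, Tuple


@dataclass
class DragEdit:
    handles: List[Tuple[int, int]]   # user-clicked handle points (red in Fig. 1), (row, col)
    targets: List[Tuple[int, int]]   # matching target points (blue in Fig. 1), same length
    mask: Optional[Sequence[Sequence[bool]]] = None  # True = editable region, False = keep fixed


# Example: drag one point 30 pixels downward while keeping the rest of the image fixed.
edit = DragEdit(handles=[(120, 200)], targets=[(150, 200)])
```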


UserControllableLT, which also studies dragging-based manipulation, has the setup most similar to theirs. As shown in Fig. 1, this point-based manipulation is independent of object categories and gives users control over diverse spatial attributes. Compared to that work, the problem studied here poses two additional challenges: 1) it considers the control of multiple points, which the earlier technique struggles to handle, and 2) it requires the handle points to reach the target points precisely, which the earlier approach fails to do. The experiments demonstrate that manipulating multiple points with precise position control enables far more complex and accurate image editing.

Researchers from the Max Planck Institute for Informatics, MIT CSAIL, and Google AR/VR propose DragGAN, which enables such interactive point-based manipulation by solving two sub-problems: 1) supervising the handle points to move toward the targets, and 2) tracking the handle points so that their positions are known at every editing step. Their method builds on the key observation that the feature space of a GAN is discriminative enough to support both motion supervision and precise point tracking. Specifically, motion supervision is provided by a shifted-feature-patch loss that optimizes the latent code. Each optimization step moves the handle points closer to the targets, and point tracking is then performed by a nearest-neighbor search in the feature space.
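The following PyTorch sketch illustrates these two building blocks under simplifying assumptions: `feat` is a single (C, H, W) feature map taken from an intermediate generator layer, point coordinates are given at feature resolution, and the mask term is omitted. The function names (`motion_supervision_loss`, `track_points`), patch radii, and other details are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F


def motion_supervision_loss(feat, handles, targets, r=3):
    """Pull a small feature patch around each handle point one unit step toward
    its target. `feat` is a (C, H, W) feature map from an intermediate generator
    layer; gradients flow back into the latent code through `feat`."""
    loss = feat.new_zeros(())
    _, H, W = feat.shape
    for (py, px), (ty, tx) in zip(handles, targets):
        d = torch.tensor([ty - py, tx - px], dtype=torch.float32, device=feat.device)
        if d.norm() < 1e-6:                      # this handle is already at its target
            continue
        d = d / d.norm()                         # unit step direction toward the target
        y0, y1 = max(py - r, 0), min(py + r + 1, H)
        x0, x1 = max(px - r, 0), min(px + r + 1, W)
        patch = feat[:, y0:y1, x0:x1]
        # sample the same patch shifted by one step toward the target (bilinear)
        ys = torch.arange(y0, y1, device=feat.device, dtype=torch.float32) + d[0]
        xs = torch.arange(x0, x1, device=feat.device, dtype=torch.float32) + d[1]
        gy, gx = torch.meshgrid(ys, xs, indexing="ij")
        grid = torch.stack([2 * gx / (W - 1) - 1, 2 * gy / (H - 1) - 1], dim=-1)
        shifted = F.grid_sample(feat[None], grid[None], align_corners=True)[0]
        # detach the reference patch so the content is pulled toward the target
        loss = loss + (shifted - patch.detach()).abs().mean()
    return loss


def track_points(feat, init_feats, handles, r=12):
    """After an optimization step, relocate each handle point by a nearest-neighbor
    search in feature space within a (2r+1) x (2r+1) window around its previous
    position, matching against the feature recorded at the initial position."""
    new_handles = []
    _, H, W = feat.shape
    for f0, (py, px) in zip(init_feats, handles):
        y0, y1 = max(py - r, 0), min(py + r + 1, H)
        x0, x1 = max(px - r, 0), min(px + r + 1, W)
        window = feat[:, y0:y1, x0:x1]
        dist = (window - f0[:, None, None]).abs().sum(dim=0)   # L1 distance map
        dy, dx = divmod(int(dist.argmin()), dist.shape[1])
        new_handles.append((y0 + dy, x0 + dx))
    return new_handles
```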

This optimization is repeated until the handle points reach their targets. DragGAN also lets users draw a region of interest to perform region-specific editing. Because it does not rely on any additional networks such as RAFT, DragGAN achieves efficient manipulation, typically taking only a few seconds on a single RTX 3090 GPU. This enables live, interactive editing sessions in which users quickly iterate over different layouts until they obtain the desired result. The authors evaluate DragGAN thoroughly on diverse datasets, including animals (lions, dogs, cats, and horses), humans (faces and full bodies), cars, and landscapes.
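A hedged sketch of this outer loop, reusing the illustrative helpers above, might look as follows. Here `generator.synthesize(w)` is a placeholder for a StyleGAN-style generator call that returns both the image and an intermediate (C, H, W) feature map; the step budget, learning rate, and stopping tolerance are made-up values, not the authors' settings.

```python
import torch


def drag_edit(generator, w, handles, targets, max_steps=300, lr=2e-3, tol=1):
    """One possible outer loop: optimize the latent code, re-track the handle
    points, and stop once every handle has (approximately) reached its target."""
    w = w.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)

    with torch.no_grad():
        _, feat0 = generator.synthesize(w)
        init_feats = [feat0[:, y, x] for (y, x) in handles]   # features at the start positions

    for _ in range(max_steps):
        # stop when every handle point is within `tol` pixels of its target
        if all(abs(hy - ty) + abs(hx - tx) <= tol
               for (hy, hx), (ty, tx) in zip(handles, targets)):
            break

        _, feat = generator.synthesize(w)
        loss = motion_supervision_loss(feat, handles, targets)
        opt.zero_grad()
        loss.backward()
        opt.step()

        with torch.no_grad():                                  # update handle positions
            _, feat = generator.synthesize(w)
            handles = track_points(feat, init_feats, handles)

    return w.detach()
```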

As shown in Fig. 1, their method successfully moves the user-defined handle points to the target points, producing diverse manipulation effects across many object categories. Unlike conventional shape-deformation approaches that simply apply a warp, these deformations are carried out on the learned image manifold of a GAN, which tends to respect the underlying object structure. The method can hallucinate occluded content, such as the teeth inside a lion’s mouth, and deform consistently with the object’s rigidity, such as the bending of a horse’s leg. The authors also provide a GUI that lets users perform the manipulation by simply clicking on the image.

Both qualitative and quantitative comparisons demonstrate the advantage of their approach over UserControllableLT. Their GAN-based point-tracking technique also outperforms existing point-tracking approaches such as RAFT and PIPs on GAN-generated frames. Furthermore, when combined with GAN inversion techniques, the method serves as a powerful tool for editing real images.
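As a rough illustration of that real-image pipeline, one could first project a photo into the generator's latent space and then apply the drag edit. The sketch below uses a naive L1-reconstruction inversion purely for illustration; dedicated GAN inversion methods add perceptual losses and generator fine-tuning, and `generator.synthesize` is the same placeholder as above.

```python
import torch


def invert_image(generator, target_img, w_init, steps=500, lr=0.01):
    """Project a real photo into the latent space by minimizing a simple
    L1 reconstruction loss (an illustrative stand-in for a real inversion method)."""
    w = w_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([w], lr=lr)
    for _ in range(steps):
        img, _ = generator.synthesize(w)
        loss = (img - target_img).abs().mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return w.detach()


# Usage: invert the photo, then drag-edit the recovered latent code.
# w = invert_image(generator, photo, w_avg)
# w_edited = drag_edit(generator, w, handles=[(120, 200)], targets=[(150, 200)])
```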


Check out the Paper and Project Page.



Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.

