Researchers from the University of Chicago Introduce 3D Paintbrush: An AI Method for Generating Locally Stylized Textures on Meshes Using Text as Input

3D paintbrushes are tools used in 3D modeling and sculpting applications to create and manipulate 3D objects. They let users paint directly onto 3D surfaces, adding textures, colors, and details to models. This process is fundamental to creating realistic textures, adding intricate detail, and bringing 3D objects to life in industries such as gaming, animation, and film.

Highly detailed and accurate localizations are important for constraining edits to a specific region and preventing changes outside the target area. The usual representation for this task is a mesh paired with a texture map. A texture map is a 2D image, or set of images, wrapped around a 3D model's surface to supply color, surface patterns, roughness, shininess, and other visual characteristics; the mesh itself is a 3D structure composed of vertices, edges, and faces that defines the object's shape.
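
To make the representation concrete, here is a minimal sketch of a triangle mesh with per-vertex UV coordinates sampling a texture map. Everything in it (the toy geometry, the random texture, the nearest-neighbor lookup) is illustrative, not taken from the paper:

```python
import numpy as np

# Hypothetical minimal mesh: one triangle with per-vertex UV coordinates.
# Vertex positions (x, y, z) define the geometry; UVs index into a 2D texture.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])
faces = np.array([[0, 1, 2]])                         # a single triangular face
uvs = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # texture coordinates in [0, 1]

# A toy 8x8 RGB texture map (in practice, a painted or generated image).
texture = np.random.rand(8, 8, 3)

def sample_texture(texture, uv):
    """Nearest-neighbor lookup of a 2D texture at a UV coordinate."""
    h, w, _ = texture.shape
    x = min(int(uv[0] * (w - 1)), w - 1)
    y = min(int(uv[1] * (h - 1)), h - 1)
    return texture[y, x]

# The color a renderer would fetch for the first vertex.
print(sample_texture(texture, uvs[0]))
```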

Researchers from the University of Chicago and Snap Research developed 3D Paintbrush, a method for automatically texturing local semantic regions on meshes via text descriptions. The method is designed to operate directly on meshes, producing texture maps that seamlessly integrate into standard graphics pipelines. 3D Paintbrush is controlled via intuitive, free-form text input, allowing users to describe their edits with an open vocabulary on a variety of meshes.

They also developed cascaded score distillation (CSD) to enhance the detail and resolution of the locally textured areas. Using it, they could modify the localization region and deform the geometry within it. Localization and texture maps are represented as neural fields encoded by multi-layer perceptrons (MLPs). The explicit localization delineates where the texture applies and keeps the local style coherent up to the localized boundary.
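
As a rough picture of what such a neural field might look like, here is a hedged PyTorch sketch. The architecture, sizes, and output heads are assumptions for illustration, not the authors' implementation: a coordinate MLP maps a 3D surface point to a localization probability and an RGB texture color:

```python
import torch
import torch.nn as nn

# Illustrative coordinate MLP (not the paper's exact architecture): maps a
# 3D surface point to a localization probability and an RGB color.
class NeuralField(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.localization_head = nn.Linear(hidden, 1)  # is this point edited?
        self.texture_head = nn.Linear(hidden, 3)       # color at this point

    def forward(self, points):                         # points: (N, 3)
        h = self.backbone(points)
        prob = torch.sigmoid(self.localization_head(h))
        rgb = torch.sigmoid(self.texture_head(h))
        return prob, rgb

field = NeuralField()
surface_points = torch.rand(1024, 3)   # stand-in for points sampled on the mesh
prob, rgb = field(surface_points)
print(prob.shape, rgb.shape)           # torch.Size([1024, 1]) torch.Size([1024, 3])
```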

Explicitly learning a localization in tandem with the texture ensures that the edits stay within the localized region. The authors report that 3D Paintbrush's localizations are sharper than those produced by existing methods, and that CSD lets users control both the granularity and the global context of the supervision, yielding higher-resolution textures and localizations than standard score distillation sampling (SDS).
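
The schematic below conveys the cascaded idea only; the toy noise schedule, the stand-in denoisers, and the unweighted sum are all assumptions, not the paper's implementation. Score-distillation residuals from each stage of a cascaded diffusion model are accumulated, so coarse stages contribute global structure and fine stages contribute detail:

```python
import torch

def add_noise(x, noise, t, T=1000):
    """Toy forward-diffusion step with a linear schedule (an assumption)."""
    alpha = 1.0 - t.float() / T
    return alpha.sqrt() * x + (1 - alpha).sqrt() * noise

def cascaded_score_distillation(render, stages):
    """render: differentiable rendering of the textured mesh, (1, 3, H, W).
    stages: one denoiser per level of the cascade."""
    total = torch.zeros_like(render)
    for denoiser in stages:
        noise = torch.randn_like(render)
        t = torch.randint(0, 1000, (1,))
        noisy = add_noise(render, noise, t)
        total = total + (denoiser(noisy) - noise)  # SDS-style residual per stage
    return total

# Toy stand-ins for the cascade's denoisers (real ones are text-conditioned U-Nets).
stages = [torch.nn.Conv2d(3, 3, 3, padding=1) for _ in range(2)]
render = torch.rand(1, 3, 64, 64, requires_grad=True)
print(cascaded_score_distillation(render, stages).shape)  # torch.Size([1, 3, 64, 64])
```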

The method uses MLPs defined on the 3D surface to create a neural texture whose output varies smoothly in 3D, even where 2D texture maps would exhibit discontinuities at texture seams. The smoothness provided by the MLPs reduces artifacts, produces less noisy textures, and supports super-resolution.
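
Continuing the NeuralField sketch above (a purely illustrative check, not an experiment from the paper): two points that straddle a UV seam can be far apart in texture space yet nearly coincident in 3D, and a coordinate MLP, being continuous in 3D, assigns them nearly identical colors:

```python
import torch

# Two 3D points an epsilon apart, imagined to fall on opposite sides of a UV seam.
p_left = torch.tensor([[0.5000, 0.2, 0.3]])
p_right = torch.tensor([[0.5001, 0.2, 0.3]])

# `field` is the NeuralField instance from the previous snippet.
_, rgb_left = field(p_left)
_, rgb_right = field(p_right)
print((rgb_left - rgb_right).abs().max())  # tiny: no color jump across the seam
```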

The team optimized the localization and the texture in tandem and found that this joint optimization yields highly detailed textures that conform closely to the predicted localization regions, which come out sharp and intricate. In the future, they want to extend localized editing beyond texturing and to learn to co-texture multiple shapes with the same local texture map, providing correspondences between shapes.
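
A hedged sketch of what such a joint optimization loop could look like, reusing `field` from the earlier snippet; the alpha-blend compositing and the placeholder loss are assumptions standing in for the paper's rendering and CSD guidance:

```python
import torch

optimizer = torch.optim.Adam(field.parameters(), lr=1e-4)

for step in range(3):  # a real run would take thousands of steps
    points = torch.rand(1024, 3)   # stand-in for sampled surface points
    prob, rgb = field(points)
    # Blend the edit into a base color by the localization probability, so the
    # texture only shows where the localization is active.
    base = torch.full_like(rgb, 0.5)
    composite = prob * rgb + (1 - prob) * base
    loss = composite.mean()        # placeholder for the CSD guidance loss
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```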


Check out the Paper and Project. All credit for this research goes to the researchers of this project.



Arshad is an intern at MarktechPost. He is currently pursuing his Integrated M.Sc. in Physics at the Indian Institute of Technology Kharagpur. He believes that understanding things at a fundamental level leads to new discoveries, which in turn drive technological advancement, and he is passionate about understanding nature with the help of tools like mathematical models, ML models, and AI.

