Google AI Introduces ‘Mood Board Search’: A Web-Based Tool That Lets You Train A Computer To Recognize Visual Concepts Using Mood Boards And Machine Learning
Freely accessible, searchable websites hold an abundance of visual data. Thanks to recent developments in computer vision and natural language processing, users can now search these images using a camera, voice, text, another image, or a combination of modalities. However, it is still difficult to explore such images using purely subjective ideas like visual tones or moods. In this direction, Google researchers have worked with artists, photographers, and picture researchers to study how machine learning (ML) can help individuals use expressive searches to examine image datasets visually. Google recently launched an outcome of this partnership, Mood Board Search, a new ML-powered research tool that uses mood boards as queries over image collections. With this tool, users can independently define and evoke visual concepts. A mood board search can serve ambiguous queries, such as "peaceful," or words and specific images that might not be precise enough to yield useful results in a regular search. These subjective queries concern abstract qualities of images that conventional search frequently ignores. The tool is still at the research-prototype stage.
The research team aimed to create a flexible, approachable interface that lets people without ML experience train a computer to recognize a visual concept as they see it. The result was Mood Board Search. The tool's UI was heavily influenced by mood boards, which professionals in the creative industries frequently use to convey the essence of a concept through a selection of visual references. Mood board searches produce the most striking results when the query images share a similar visual style: results are more likely to be relevant when the mood board's images have similarities in colour, pattern, texture, or composition. Another feature lets users indicate which images matter more to a visual concept by giving them more or less weight, or by adding images that contradict the concept. Users can then examine the search results to judge which images most closely match their visual concept. Under the hood, Mood Board Search uses existing pre-trained computer vision models, such as GoogLeNet and MobileNet, together with a machine learning technique known as concept activation vectors (CAVs). CAVs let machines represent a visual concept numerically, as a direction in the embedding space of a neural network. CAVs, and Testing with CAVs (TCAV), measure how important a user-defined concept is to a classification result. Google originally released this research methodology publicly, and it has since been widely used in scientific and medical applications to build software that better explains what machines see.
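To make the CAV idea concrete, here is a minimal numpy sketch of how a concept direction can be derived. This is not the actual Mood Board Search or TCAV implementation; the function name `train_cav`, the toy 8-dimensional embeddings, and the plain logistic-regression loop are all illustrative assumptions. The core idea matches TCAV: train a linear classifier to separate embeddings of concept (mood board) images from embeddings of random images, and take the unit normal to its decision boundary as the CAV.

```python
import numpy as np

def train_cav(concept_embs, random_embs, lr=0.1, steps=200):
    """Fit a logistic-regression classifier separating concept embeddings
    from random embeddings; the unit normal to its decision boundary
    serves as the concept activation vector (CAV)."""
    X = np.vstack([concept_embs, random_embs])
    y = np.concatenate([np.ones(len(concept_embs)),
                        np.zeros(len(random_embs))])
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted probabilities
        w -= lr * (X.T @ (p - y)) / len(y)      # gradient step on weights
        b -= lr * np.mean(p - y)                # gradient step on bias
    return w / np.linalg.norm(w)                # unit-length direction

# Toy example: stand-in 8-dim embeddings for mood-board vs. random images.
rng = np.random.default_rng(0)
concept = rng.normal(loc=1.0, size=(20, 8))       # "mood board" images
random_imgs = rng.normal(loc=-1.0, size=(20, 8))  # unrelated images
cav = train_cav(concept, random_imgs)
```

The per-image weighting the article mentions could be folded into such a scheme by weighting each image's contribution to the gradient, with negative examples acting like the "contradicting images" feature.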
Every mood board generates a CAV, which is essentially a direction in embedding space. Mood Board Search uses CAVs to measure a model's responsiveness to a user-created mood board. The tool then searches an image dataset and displays the images that most closely match the CAV. One of its distinctive features is that it segments each image in the dataset in 15 different ways to find the most relevant compositions. Google has open-sourced Mood Board Search along with three collaboratively created visual concepts and a Mood Board Search Python library. Even though these tools are still at the prototype stage, the researchers believe they have a wide range of uses, from exploring disorganized image collections to externalizing modes of perception into shared, collaborative objects. Future work will expand the ML models and inputs to enable even deeper subjective exploration, independent of medium, and to study new forms of human-machine collaboration built on mood board search.
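The search step described above can be sketched in a few lines. Again, this is a hedged approximation rather than the tool's real code: the function `rank_by_cav`, the crop count, and the toy embeddings are assumptions. It scores every crop of every image by its projection onto the CAV, lets the best crop represent each image (mirroring the idea of segmenting each image multiple ways to find the most relevant composition), and returns the top matches.

```python
import numpy as np

def rank_by_cav(crop_embs, cav, top_k=3):
    """Rank images by their best-matching crop's projection onto the CAV.

    crop_embs: array of shape (n_images, n_crops, dim), one embedding
    per crop of each image.
    Returns the indices and scores of the top_k images.
    """
    crop_scores = crop_embs @ cav            # (n_images, n_crops)
    image_scores = crop_scores.max(axis=1)   # best crop represents the image
    order = np.argsort(image_scores)[::-1]   # highest score first
    return order[:top_k], image_scores[order[:top_k]]

# Toy example: 10 images, 15 crops each, 4-dim embeddings.
rng = np.random.default_rng(1)
cav = np.array([1.0, 0.0, 0.0, 0.0])   # stand-in concept direction
embs = rng.normal(size=(10, 15, 4))
top_idx, top_scores = rank_by_cav(embs, cav)
```

Taking the maximum over crops means an image can rank highly even if only one region of it evokes the concept, which is the point of scoring multiple compositions per image.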
This article is a summary written by Marktechpost staff based on the research article 'Enabling Creative Expression with Concept Activation Vectors'. All credit for this research goes to the researchers on this project. The code and tool are publicly available.