Google and MIT Researchers Introduce SynCLR: A Novel AI Approach for Learning Visual Representations Exclusively from Synthetic Images and Synthetic Captions Without Any Real Data

Representation learning makes it possible to retrieve and organize raw, often unlabeled data. How good a representation a model can learn depends on the quantity, quality, and diversity of that data: the model effectively mirrors the data’s collective intelligence, and its output is only as good as its input. Unsurprisingly, the most effective visual representation learning algorithms today depend on massive real-world datasets. Collecting real data, however, comes with its own challenges. Gathering vast amounts of uncurated data is cheap and therefore feasible, but adding uncurated data yields diminishing returns at large scale, indicating poor scaling behavior for self-supervised representation learning with this approach. Collecting curated data at a smaller scale is also possible, but models trained this way tend to handle only very narrow tasks.

To reduce this financial burden, new research from Google Research and MIT CSAIL investigates whether large-scale curated datasets capable of training state-of-the-art visual representations can instead be built from synthetic data produced by commercially available generative models. The team describes this approach as learning from models, in contrast to learning directly from data. Among the many benefits of using models as a data source for constructing large-scale training sets are the new controls they provide: latent variables, conditioning variables, and hyperparameters can all be adjusted to curate the data. Models are also easier to store and share than raw datasets, and they can generate an effectively endless stream of data samples, albeit with limited variability.

In this study, the researchers rethink the granularity of visual classes with the help of generative models. Consider, for instance, the images generated from the following two prompts: “A cute golden retriever sits in a house made of sushi” and “A golden retriever, wearing sunglasses and a beach hat, rides a bike.” Traditional self-supervised methods such as SimCLR treat each image as its own class, pushing the embeddings of different images apart without explicitly accounting for their shared semantics. Supervised learning algorithms (e.g., SupCE), by contrast, treat all of these pictures as belonging to a single class (such as “golden retriever”).
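To make the difference in granularity concrete, here is a minimal illustration of how each paradigm would group the same pool of generated images; the captions are the two prompts above, and everything else (variable names, the toy data layout) is purely illustrative:

```python
# Illustrative only: how three paradigms assign training targets to the same
# pool of generated images, each tagged with the caption that produced it.
images = [
    {"id": 0, "caption": "A cute golden retriever sits in a house made of sushi"},
    {"id": 1, "caption": "A cute golden retriever sits in a house made of sushi"},
    {"id": 2, "caption": "A golden retriever, wearing sunglasses and a beach hat, rides a bike"},
    {"id": 3, "caption": "A golden retriever, wearing sunglasses and a beach hat, rides a bike"},
]

# SimCLR-style self-supervision: every image is its own class (instance discrimination).
simclr_labels = [img["id"] for img in images]            # four distinct classes

# Supervised learning (e.g., SupCE): all four images share one coarse label.
supervised_labels = ["golden retriever"] * len(images)   # one class

# Caption-level granularity: images generated from the same caption form one class,
# so there are as many classes as distinct captions.
caption_labels = [img["caption"] for img in images]      # two classes
```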

This level of granularity is difficult to mine from real data, since collecting several images described by the same caption is non-trivial, particularly as the number of captions scales up. Text-to-image diffusion models, on the other hand, have this capability built in: given the same caption and varying noise inputs, they can generate many distinct images that all match the caption.
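This behavior is easy to reproduce with an off-the-shelf text-to-image pipeline. Below is a rough sketch using the Hugging Face diffusers library; the specific checkpoint, seeds, and sampling settings are illustrative assumptions rather than the paper’s configuration:

```python
import torch
from diffusers import StableDiffusionPipeline

# Placeholder checkpoint: any public text-to-image model works for illustration.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

caption = "A cute golden retriever sits in a house made of sushi"

# Same caption, different initial noise: each seed yields a different image that
# still matches the caption, i.e. multiple "positives" per caption.
images = [
    pipe(caption, generator=torch.Generator("cuda").manual_seed(seed)).images[0]
    for seed in range(4)
]
```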

The work’s findings show that caption-level granularity outperforms both SimCLR and supervised training. An added perk is that this definition of visual classes is easily extensible: unlike ImageNet-1k/21k, where the number of classes is fixed, online class (or data) augmentation allows scaling up to a hypothetically unlimited number of classes. The proposed system has three stages:

  1. The first stage synthesizes a large collection of image captions. Leveraging word-to-caption translation examples, the team developed a scalable method that exploits the in-context learning capacity of large language models (LLMs); a prompting sketch follows this list. 
  2. The next step uses a text-to-image diffusion model to produce many synthetic images from these captions, yielding a dataset of 600 million generated images. 
  3. Finally, they train visual representation models with a combination of multi-positive contrastive learning and masked image modeling; a rough sketch of the multi-positive objective appears below. 
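For stage 1, the idea is to prompt an LLM with a few word-to-caption examples so that it rewrites concept words into rich image captions. The following is a minimal sketch of such a prompt; the in-context examples, helper names, and the commented-out LLM call are placeholders, not the paper’s actual templates:

```python
# Hypothetical word-to-caption prompting; examples and the LLM client are placeholders.
IN_CONTEXT_EXAMPLES = [
    ("golden retriever", "A golden retriever, wearing sunglasses and a beach hat, rides a bike"),
    ("sushi", "A cute golden retriever sits in a house made of sushi"),
]

def build_prompt(concept: str) -> str:
    """Assemble an in-context word-to-caption prompt for a single concept."""
    lines = ["Turn each concept into a detailed image caption."]
    for word, caption in IN_CONTEXT_EXAMPLES:
        lines.append(f"Concept: {word}\nCaption: {caption}")
    lines.append(f"Concept: {concept}\nCaption:")
    return "\n".join(lines)

# Scaling up is then a matter of sampling many concept words and completing each
# prompt with any instruction-following LLM, e.g.:
# caption = llm.complete(build_prompt("red panda"))   # `llm` is a placeholder client
```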

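For stage 3, the multi-positive contrastive objective treats all images generated from the same caption as positives for one another. Here is a rough PyTorch sketch of that idea; the function name, temperature, and masking details are illustrative assumptions rather than the paper’s exact loss (which is also combined with masked image modeling):

```python
import torch
import torch.nn.functional as F

def multi_positive_contrastive_loss(z: torch.Tensor, caption_ids: torch.Tensor,
                                    temperature: float = 0.1) -> torch.Tensor:
    """Multi-positive InfoNCE sketch: images sharing a caption id are positives.

    z: (N, D) image embeddings; caption_ids: (N,) id of the caption each image came from.
    """
    z = F.normalize(z, dim=1)
    n = z.size(0)
    logits = z @ z.t() / temperature                       # pairwise cosine similarities
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)
    logits = logits.masked_fill(self_mask, float("-inf"))  # never contrast an image with itself

    # Positives: every *other* image generated from the same caption.
    pos_mask = (caption_ids.unsqueeze(0) == caption_ids.unsqueeze(1)) & ~self_mask

    log_prob = F.log_softmax(logits, dim=1)
    # Average the log-probability over each anchor's positives, then over valid anchors.
    pos_counts = pos_mask.sum(dim=1).clamp(min=1)
    loss_per_anchor = -log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1) / pos_counts
    return loss_per_anchor[pos_mask.any(dim=1)].mean()

# Toy usage: a batch of 8 embeddings drawn from 2 captions (4 images each).
z = torch.randn(8, 128)
caption_ids = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
print(multi_positive_contrastive_loss(z, caption_ids))
```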
The researchers compare SynCLR pre-training against OpenAI’s CLIP on top-1 linear-probing accuracy on ImageNet-1K, where the SynCLR-trained ViT-B reaches 80.7% and the ViT-L reaches 83.0%. On fine-grained classification tasks, SynCLR achieves results comparable to DINO v2 models distilled from a pre-trained ViT-g model, surpassing CLIP by 3.3% for ViT-B and 1.5% for ViT-L. For semantic segmentation on ADE20k, SynCLR beats MAE pre-trained on ImageNet by 6.2 and 4.1 mIoU for ViT-B and ViT-L, respectively, in the same setup. This demonstrates that SynCLR transfers strongly to dense prediction tasks, much like DINO v2, even though DINO v2 additionally requires training on 518×518 images, which SynCLR does not.

The team notes several ways the caption set could be improved: using more sophisticated LLMs, refining the sampling ratios among distinct concepts, and expanding the library of in-context examples. The learning process could likewise be improved by adding a high-resolution training phase or an intermediate IN-21k fine-tuning stage after distilling knowledge from a bigger model. They also suggest that better model initialization procedures, together with SwiGLU and LayerScale integration, could yield architectural benefits. Given limited resources, and because the paper did not aim to achieve the highest possible metrics, they leave these directions to future research. 


Check out the Paper and GitHub. All credit for this research goes to the researchers of this project.


Dhanshree Shenwai is a Computer Science Engineer with solid experience in FinTech companies covering the Financial, Cards & Payments, and Banking domains, and a keen interest in applications of AI. She is enthusiastic about exploring new technologies and advancements that make everyone’s life easier in today’s evolving world.

