Harvard Researchers Propose a Self-Supervised Deep Learning Algorithm for Fast and Scalable Search of Whole-Slide Images

The need for accurate and economical gigapixel image analysis has grown as whole-slide imaging has become more widely used. Deep learning is at the forefront of computer vision, showing considerable advances in visual understanding over earlier approaches. However, whole-slide images (WSIs) contain billions of pixels and are plagued by many kinds of artifacts as well as significant morphological variation, all of which work against the conventional use of deep learning. These challenges must be overcome for the clinical translation of deep learning solutions to become a reality.

Most computational pathology approaches use supervised deep learning with slide- or case-level labels to address classification or ranking problems. For many applications, an image search engine that exploits the detailed, spatially resolved information in pathology images is far more powerful. However, scalability poses a significant obstacle to the widespread and effective use of histology whole-slide image search and retrieval systems. Compared with other image databases, WSI retrieval is especially challenging because the system must efficiently search an ever-growing number of slides, each of which may contain billions of pixels and occupy several gigabytes. Since WSIs are too large to process whole, most methods either divide them into smaller image patches (as sketched below) or concentrate on patch or region-of-interest (ROI) retrieval specialized for specific purposes. Recently, an article published in the journal Nature Biomedical Engineering proposed a search pipeline called self-supervised image search for histology (SISH) to overcome the problems listed above.
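To make the patch-based paradigm concrete, here is a minimal sketch of tiling a slide region into fixed-size patches. It is an illustration rather than anything taken from the paper: the `tile_image` helper, the 256-pixel patch size, and the random NumPy array standing in for a slide region are all assumptions, and real pipelines read gigapixel slides lazily with libraries such as OpenSlide rather than loading them into memory.

```python
# Minimal sketch (not from the paper): tiling a slide-sized array into
# fixed-size patches, the preprocessing step most WSI pipelines share.
import numpy as np

def tile_image(image: np.ndarray, patch_size: int = 256):
    """Yield non-overlapping square patches from an H x W x C image."""
    h, w = image.shape[:2]
    for y in range(0, h - patch_size + 1, patch_size):
        for x in range(0, w - patch_size + 1, patch_size):
            yield image[y:y + patch_size, x:x + patch_size]

# A small stand-in for a slide region; real WSIs are gigapixel-scale and
# are read region by region rather than loaded whole.
region = np.random.randint(0, 255, size=(1024, 1024, 3), dtype=np.uint8)
patches = list(tile_image(region))
print(len(patches))  # 16 patches of 256 x 256 x 3
```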

Regardless of the repository size, SISH searches for and retrieves WSIs quickly. It uses a tree data structure for fast searching and needs only slide-level annotations for training. It transforms WSIs into useful discrete latent representations and then ranks them with an uncertainty-based algorithm. To reduce storage and labeling costs, SISH draws on a collection of preprocessed mosaics from WSIs without pixel-wise or ROI-level labels, relying on indices learned through self-supervised learning and pretrained embeddings. The proposed approach leverages discrete latent codes from a Vector Quantized-Variational AutoEncoder (VQ-VAE) in addition to guided search and ranking algorithms. Concretely, to address the gigapixel size of WSIs, the authors sample a subset of representative patches, called a 'mosaic', for each WSI using k-means clustering on RGB histograms at low resolution. A VQ-VAE is then trained on a large dataset in a self-supervised fashion, and its learned discrete latent codes are used to create integer indices for the patches in each WSI mosaic. Instead of comparing regions of the query WSI against regions of every WSI in the database, the method uses a van Emde Boas tree to select a list of candidate matches for each patch in the query mosaic. A ranking module then identifies the most promising patches for retrieval (a simplified sketch of this indexing idea follows).
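The snippet below is a minimal, illustrative sketch of that indexing idea rather than the authors' implementation: scikit-learn's k-means clusters RGB-histogram descriptors to pick a small mosaic, a toy hash (`patch_to_index`) stands in for the integer indices that SISH actually derives from VQ-VAE discrete latent codes, and a sorted list queried with `bisect` plays the role of the van Emde Boas tree's neighbour queries. All function names, patch sizes, and parameters here are assumptions made for illustration.

```python
# Simplified SISH-style indexing sketch: mosaic selection by k-means on
# RGB histograms, a toy integer code per patch (standing in for VQ-VAE
# discrete codes), and bisect over a sorted index (standing in for the
# van Emde Boas tree used in the actual pipeline).
import bisect
import numpy as np
from sklearn.cluster import KMeans

def rgb_histogram(patch: np.ndarray, bins: int = 8) -> np.ndarray:
    """Concatenated per-channel histogram used as a cheap patch descriptor."""
    return np.concatenate(
        [np.histogram(patch[..., c], bins=bins, range=(0, 255))[0] for c in range(3)]
    ).astype(float)

def build_mosaic(patches: list[np.ndarray], k: int = 8) -> list[np.ndarray]:
    """Pick k representative patches via k-means on RGB-histogram features."""
    feats = np.stack([rgb_histogram(p) for p in patches])
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(feats)
    return [patches[np.flatnonzero(labels == c)[0]] for c in range(k)]

def patch_to_index(patch: np.ndarray) -> int:
    """Toy stand-in for the discrete-code-to-integer-index step of SISH."""
    h = rgb_histogram(patch, bins=4)
    return int(h @ np.arange(len(h))) % 1_000_000

# Database side: integer indices for the mosaic patches of stored slides (toy data).
rng = np.random.default_rng(0)
db_patches = [rng.integers(0, 255, (256, 256, 3), dtype=np.uint8) for _ in range(64)]
db_index = sorted(patch_to_index(p) for p in build_mosaic(db_patches))

# Query side: for a query-mosaic patch, gather neighbouring indices as candidates
# to be re-ranked by a downstream ranking module.
query = rng.integers(0, 255, (256, 256, 3), dtype=np.uint8)
q = patch_to_index(query)
pos = bisect.bisect_left(db_index, q)
candidates = db_index[max(0, pos - 2): pos + 2]
print(q, candidates)
```

The reason a van Emde Boas tree (or any bounded-universe integer index) fits this setting is that its successor and predecessor queries cost O(log log M), where M is the size of the integer code universe rather than the number of stored slides, which is what allows the candidate lookup to stay essentially flat as the repository grows.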

To evaluate the proposed technique, SISH was assessed on several tasks, including retrieval based on tissue-patch queries, using datasets encompassing more than 22,000 patient cases and 56 disease subtypes. The studies show that SISH is an interpretable histology image search pipeline that, after training with only slide-level labels, achieves constant search speed. The authors further show that SISH performs well on large and varied datasets, generalizes to independent cohorts and rare diseases, and can also serve as a search engine for individual image patches in addition to whole slides.

In this article, we presented SISH, a new approach to searching whole-slide images via self-supervised deep learning. Instead of a continuous vector representation, SISH uses a set-based representation of WSIs that offers more transparency and requires less supervision. The experimental study demonstrates the method's effectiveness across many applications.

This article is written as a research summary by Marktechpost Staff based on the research paper 'Fast and scalable search of whole-slide images via self-supervised deep learning'. All credit for this research goes to the researchers on this project. Check out the paper and code.



Asif Razzaq is an AI Journalist and Cofounder of Marktechpost, LLC. He is a visionary, entrepreneur and engineer who aspires to use the power of Artificial Intelligence for good.

Asif’s latest venture is the development of an Artificial Intelligence Media Platform (Marktechpost) that will revolutionize how people can find relevant news related to Artificial Intelligence, Data Science and Machine Learning.

Asif was featured by Onalytica in its ‘Who’s Who in AI? (Influential Voices & Brands)’ as one of the ‘Influential Journalists in AI’ (https://onalytica.com/wp-content/uploads/2021/09/Whos-Who-In-AI.pdf). His interview was also featured by Onalytica (https://onalytica.com/blog/posts/interview-with-asif-razzaq/).


