AI Researchers At Mayo Clinic Introduce A Machine Learning-Based Method For Leveraging Diffusion Models To Construct A Multitask Brain Tumor Inpainting Algorithm
The number of AI and, in particular, machine learning (ML) publications related to medical imaging has increased dramatically in recent years. A recent PubMed search using the MeSH keywords “artificial intelligence” and “radiology” yielded 5,369 papers in 2021, more than five times the number found in 2011. ML models are constantly being developed to improve healthcare efficiency and outcomes, spanning tasks from classification to semantic segmentation, object detection, and image generation. Numerous published reports in diagnostic radiology, for example, indicate that ML models can perform as well as, or even better than, medical experts on specific tasks, such as anomaly detection and pathology screening.
It is thus undeniable that, when used correctly, AI can assist radiologists and drastically reduce their workload. Despite the growing interest in developing ML models for medical imaging, significant challenges can limit such models’ practical applications or even predispose them to substantial bias. Data scarcity and data imbalance are two of these challenges. On the one hand, medical imaging datasets are frequently much smaller than natural photograph datasets such as ImageNet, and pooling institutional datasets or making them public may be impossible due to patient privacy concerns. On the other hand, even the medical imaging datasets that data scientists can access are often imbalanced.
In other words, the volume of medical imaging data for patients with rarer pathologies is significantly lower than for patients with common pathologies or for healthy people. Training or evaluating an ML model on insufficiently large or imbalanced datasets may introduce systemic biases into its performance. Synthetic image generation is one of the primary strategies for combating data scarcity and data imbalance, alongside the public release of deidentified medical imaging datasets and the endorsement of strategies such as federated learning, which enables ML model development on multi-institutional datasets without data sharing.
Generative ML models can learn to generate realistic medical imaging data that does not belong to an actual patient and can thus be shared publicly without jeopardizing patient privacy. Various generative models capable of synthesizing high-quality synthetic data have been introduced since the emergence of generative adversarial networks (GANs). Most of these models produce unlabeled imaging data, which may be helpful in specific applications, such as self-supervised or semi-supervised downstream models. Other models are capable of conditional generation, in which an image is generated based on predetermined clinical, textual, or imaging variables.
Denoising Diffusion Probabilistic Models (DDPMs), also known as diffusion models, are a newer class of image generation models that outperform GANs in both synthetic image quality and output diversity. Despite their enormous success in generating synthetic medical imaging data, GANs are frequently criticized for their limited output diversity and unstable training. Autoencoder deep learning models are a more traditional alternative to GANs: they are easier to train and produce more diverse outputs, but their synthetic results lack the image quality of GANs. Diffusion models, in contrast, also allow for the generation of labeled synthetic data, which advances machine learning research, medical imaging quality, and patient care.
Diffusion models, which are based on Markov chain theory, learn to generate their synthetic outputs by gradually denoising an initial image filled with random Gaussian noise. This iterative denoising process makes diffusion models’ inference runs significantly slower than those of other generative models, but it allows them to extract more representative features from their input data and thus outperform other models. In this methodological paper, the researchers present a proof-of-concept diffusion model that can be used for multitask brain tumor inpainting on multi-sequence brain magnetic resonance imaging (MRI) studies.
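To make that mechanism concrete, here is a minimal sketch of DDPM ancestral sampling in PyTorch. It is illustrative only and not the authors’ implementation: the noise predictor `eps_model` is a hypothetical stand-in for a trained U-Net, and the linear noise schedule follows the original DDPM formulation.

```python
# Minimal sketch of DDPM ancestral sampling (illustrative, not the paper's code).
import torch

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative products of alphas

# Hypothetical stand-in for a trained U-Net noise predictor; a real model
# would actually use the timestep t and the image content.
eps_model = lambda x, t: torch.zeros_like(x)

x = torch.randn(1, 1, 64, 64)               # start from pure Gaussian noise
for t in reversed(range(T)):
    eps = eps_model(x, t)                   # predict the noise present in x_t
    coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
    mean = (x - coef * eps) / torch.sqrt(alphas[t])
    z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
    x = mean + torch.sqrt(betas[t]) * z     # sample x_{t-1}
# x now holds a synthetic image sample
```

This step-by-step loop, run hundreds or thousands of times per image, is exactly why diffusion inference is slower than a single GAN forward pass.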
They created a diffusion model that receives a two-dimensional (2D) axial slice from a T1-weighted (T1), contrast-enhanced T1-weighted (T1CE), T2-weighted (T2), or FLAIR sequence of a brain MRI and inpaints a user-defined cropped area of that slice with a realistic, controllable image of either a high-grade glioma and its corresponding components (e.g., the surrounding edema) or tumor-less (apparently normal) brain tissue.
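While the details live in the open-source repository, one common way to adapt a diffusion sampler for this kind of mask-guided inpainting (in the spirit of RePaint-style methods) looks roughly like the sketch below. All names here are illustrative assumptions, not the authors’ API: at every reverse step, pixels outside the user-drawn mask are reset to a noised copy of the original slice, so the model synthesizes only the masked region.

```python
# Rough sketch of mask-guided diffusion inpainting (RePaint-style idea);
# names and structure are assumptions for illustration, not the paper's code.
import torch

def inpaint(slice_2d, mask, eps_model, betas):
    """slice_2d: (1,1,H,W) MRI slice; mask: 1 where tissue is regenerated."""
    T = len(betas)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    x = torch.randn_like(slice_2d)          # start the masked region from noise
    for t in reversed(range(T)):
        # Forward-noise the original slice to the current timestep: q(x_t | x_0).
        noise = torch.randn_like(slice_2d)
        known = (torch.sqrt(alpha_bars[t]) * slice_2d
                 + torch.sqrt(1.0 - alpha_bars[t]) * noise)
        # Keep known pixels outside the mask; generate only inside it.
        x = mask * x + (1 - mask) * known
        eps = eps_model(x, t)               # hypothetical trained noise predictor
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        z = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * z
    # Paste back the exact known pixels at the end.
    return mask * x + (1 - mask) * slice_2d
```

A real implementation of the paper’s tool would additionally condition the denoiser on the MRI sequence type and on the desired output (tumor versus tumor-less tissue), which is what makes the inpainting both multitask and controllable.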
In the United States, the incidence of high-grade glioma is 3.56 per 100,000 people, and only a few brain tumor MRI datasets are publicly available. Their model will allow ML researchers to edit (induce or remove) synthetic tumoral or tumor-less tissues, with configurable features, on brain MRI slices within such limited datasets. The tool has been deployed online for public use, and the model has been open-sourced along with its documentation on GitHub.
This article is written as a research summary by Marktechpost staff based on the research paper 'Multitask Brain Tumor Inpainting with Diffusion Models: A Methodological Report'. All credit for this research goes to the researchers on this project. Check out the paper, code, and tool.
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence at the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.