Penn State Researchers Propose ‘ESFPNet,’ An Effective Deep Learning Network for Real-Time Lesion Segmentation in Autofluorescence Bronchoscopic Video





Lung cancer is the leading cause of cancer death worldwide. A key to improving lung cancer survival is detecting the disease early, when the most effective treatment options are still available. Lung cancer develops from lesions in the bronchial epithelium of the lung mucosa. These bronchial lesions can progress into squamous cell lung cancer and also help predict the development of other lung cancers. As a result, approaches for the early detection of bronchial lesions are critical for improving lung cancer patient care. Bronchoscopy, which images the airway epithelium during a routine airway exam, gives clinicians a noninvasive way to look for such lesions.

Autofluorescence bronchoscopy (AFB) is one of the most sensitive advanced bronchoscopic imaging procedures available today, and it can effectively distinguish developing bronchial lesions from the normal epithelium. Unfortunately, the current standard of care requires human inspection of the incoming AFB video stream, which is time-consuming and error-prone. While some research has explored computer-based lesion analysis of AFB video frames, these studies share one or more of the following limitations:

  1. They require complicated image preprocessing before lesion decisions can be made.
  2. They do not offer reliable, real-time segmentation of abnormal lesion regions as a tool for finding prospective lesions.
  3. They cannot process an input AFB video stream in real time, making them unsuitable for lesion decisions during a live bronchoscopic airway exam.

The researchers believe this is the first work to use AFB video for automated, real-time segmentation of bronchial lesions. Furthermore, their proposed efficient stage-wise feature pyramid (ESFP) decoder, built on a Mix Transformer (MiT) encoder, achieves state-of-the-art performance on public datasets, demonstrating a strong capacity for medical image segmentation.

Their architecture is depicted in the figure below. It employs the Mix Transformer (MiT) encoder as the backbone and an efficient stage-wise feature pyramid (ESFP) decoder to create segmentation outputs. 
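To make the design concrete, here is a minimal PyTorch-style sketch of an ESFP-style encoder-decoder for binary lesion segmentation. It only illustrates the stage-wise feature-pyramid idea: the `StandInBackbone`, channel widths, and fusion layout below are illustrative assumptions, not the authors' MiT weights or exact decoder; consult the official GitHub implementation for the real model.

```python
# Minimal sketch of an ESFP-style encoder-decoder for binary lesion segmentation.
# The four-stage conv "backbone" is only a stand-in for the Mix Transformer (MiT)
# encoder used in the paper; channel sizes and the fusion layout are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class StandInBackbone(nn.Module):
    """Hierarchical 4-stage encoder producing features at 1/4, 1/8, 1/16, 1/32 scale."""
    def __init__(self, channels=(32, 64, 160, 256)):
        super().__init__()
        in_ch = 3
        self.stages = nn.ModuleList()
        for i, out_ch in enumerate(channels):
            stride = 4 if i == 0 else 2  # first stage downsamples by 4, later ones by 2
            self.stages.append(nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            ))
            in_ch = out_ch

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # [f1, f2, f3, f4] from shallow to deep


class ESFPStyleDecoder(nn.Module):
    """Stage-wise pyramid: project each stage, fuse deep-to-shallow, then merge."""
    def __init__(self, channels=(32, 64, 160, 256), embed_dim=64, num_classes=1):
        super().__init__()
        self.proj = nn.ModuleList([nn.Conv2d(c, embed_dim, 1) for c in channels])
        self.fuse = nn.ModuleList([nn.Conv2d(2 * embed_dim, embed_dim, 1)
                                   for _ in channels[:-1]])
        self.head = nn.Conv2d(len(channels) * embed_dim, num_classes, 1)

    def forward(self, feats):
        # Project every stage to a common channel width.
        proj = [p(f) for p, f in zip(self.proj, feats)]
        # Stage-wise fusion: start from the deepest stage and fold it into shallower ones.
        fused = [proj[-1]]
        for i in range(len(proj) - 2, -1, -1):
            up = F.interpolate(fused[0], size=proj[i].shape[2:], mode="bilinear",
                               align_corners=False)
            fused.insert(0, self.fuse[i](torch.cat([proj[i], up], dim=1)))
        # Upsample all fused maps to the finest scale and merge for the prediction.
        target = fused[0].shape[2:]
        merged = torch.cat([F.interpolate(f, size=target, mode="bilinear",
                                          align_corners=False) for f in fused], dim=1)
        return self.head(merged)


class ESFPNetSketch(nn.Module):
    def __init__(self, num_classes=1):
        super().__init__()
        self.encoder = StandInBackbone()
        self.decoder = ESFPStyleDecoder(num_classes=num_classes)

    def forward(self, x):
        logits = self.decoder(self.encoder(x))
        # Upsample the 1/4-scale logits back to the input resolution.
        return F.interpolate(logits, size=x.shape[2:], mode="bilinear",
                             align_corners=False)


if __name__ == "__main__":
    model = ESFPNetSketch()
    frame = torch.randn(1, 3, 352, 352)   # one AFB video frame (RGB)
    mask_logits = model(frame)
    print(mask_logits.shape)              # torch.Size([1, 1, 352, 352])
```

The deep-to-shallow, stage-wise fusion is the point of the decoder: each coarse stage's semantics are folded into finer stages with lightweight 1x1 convolutions, which keeps the decoder cheap enough for real-time frame-by-frame inference on a video stream.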

Source: https://arxiv.org/pdf/2207.07759v2.pdf

The official implementation of this paper is available on GitHub.

This article is written as a research summary by Marktechpost Staff based on the research paper 'ESFPNet: efficient deep learning architecture for real-time lesion segmentation in autofluorescence bronchoscopic video'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub link.


