Researchers Develop MassMIND: A Massachusetts Maritime INfrared Dataset of Images Captured in the Long-Wave Infrared (LWIR) Spectrum
Advances in deep learning algorithms have driven an exponential increase in research on land vehicle autonomy in recent years. Publicly available labeled datasets, open-source software, innovative deep learning architectures, and growing hardware computation capabilities have all been significant drivers of this progress. The marine environment, with its many routine tasks such as monitoring, surveillance, and long-distance transit, offers a significant opportunity for autonomous navigation. The availability of adequate datasets is a critical dependency for achieving that autonomy. Sensors, specifically electro-optical (EO) cameras, long-wave infrared (LWIR) cameras, radar, and lidar, help collect large amounts of environmental data efficiently.
EO cameras are widely used to capture images because of their versatility and the abundance of Convolutional Neural Network (CNN) architectures that learn from labeled images. The difficulty lies in interpreting this data and producing labeled datasets that can train deep learning models. An image is typically annotated in one of two ways. The first is to detect objects of interest by drawing bounding boxes around them. The second is to semantically segment the image by assigning a class label to every pixel. The first approach is faster because it concentrates on specific targets, while the second is finer-grained because it labels the entire scene.
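To make the two annotation styles concrete, here is a minimal Python sketch contrasting them on a single frame. The class IDs, image size, and coordinates are illustrative placeholders, not MassMIND's actual label map:

```python
import numpy as np

# Two common ways to annotate the same (hypothetical) 512x640 LWIR frame.
# Class IDs here are illustrative, not the paper's actual label map.
CLASSES = {0: "background", 1: "ship", 2: "buoy"}

# 1) Bounding boxes: one (class_id, x_min, y_min, x_max, y_max) per object.
#    Fast to produce, but says nothing about pixels outside the boxes.
bbox_annotations = [
    (1, 120, 200, 260, 310),  # a ship
    (2, 400, 250, 430, 280),  # a buoy
]

# 2) Semantic segmentation: a dense mask assigning a class ID to every pixel.
#    Slower to produce, but labels the entire scene (sky, water, obstacles).
mask = np.zeros((512, 640), dtype=np.uint8)  # every pixel starts as background
mask[200:310, 120:260] = 1                   # pixels belonging to the ship
mask[250:280, 400:430] = 2                   # pixels belonging to the buoy

print("labeled pixels per class:", np.bincount(mask.ravel()))
```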
Because the marine environment is dominated by open sky and water, its lighting conditions differ significantly from those on land. Glare, reflections, water dynamics, and fog are all frequent phenomena, and they degrade the quality of optical images. Horizon detection is another common challenge with optical imagery. LWIR images, on the other hand, hold distinct advantages under such harsh lighting conditions. Marine robotics researchers have already put LWIR sensors to work; one earlier effort produced a labeled dataset of paired visible and LWIR images of various ship types in the maritime domain. However, that dataset has several drawbacks, which are discussed in the paper.
This paper presents a dataset of over 2,900 LWIR maritime images from the Massachusetts Bay area, including the Charles River and Boston Harbor, capturing diverse scenes such as cluttered marine environments, construction, living entities, and near-shore views across seasons and times of day. The images are labeled across seven classes using both instance and semantic segmentation. The researchers also assess the dataset's performance across three common deep learning architectures (UNet, PSPNet, and DeepLabv3) and report their findings on obstacle identification and scene perception. The dataset is freely available to the public.
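As a rough illustration of what such an evaluation involves, the sketch below configures one of the three architectures (torchvision's DeepLabv3 with a ResNet-50 backbone) for a seven-class output and computes per-class intersection-over-union on a dummy frame. The backbone choice, the channel replication for single-channel LWIR input, and the random tensors are assumptions for illustration; the paper's actual training and evaluation setup may differ:

```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 7  # the seven MassMIND label classes

# DeepLabv3 with a ResNet-50 backbone, reconfigured for 7 output classes.
# (Backbone and training setup are assumptions; the paper's may differ.)
model = deeplabv3_resnet50(weights=None, num_classes=NUM_CLASSES).eval()

# LWIR frames are single-channel; replicating to 3 channels is one simple
# way to reuse RGB architectures unchanged. Shape: (batch, 3, H, W).
lwir_batch = torch.rand(1, 1, 512, 640).repeat(1, 3, 1, 1)

with torch.no_grad():
    logits = model(lwir_batch)["out"]  # (1, 7, 512, 640)
pred = logits.argmax(dim=1)            # per-pixel predicted class IDs

# Per-class intersection-over-union against a (here random) ground-truth mask.
target = torch.randint(0, NUM_CLASSES, (1, 512, 640))
for c in range(NUM_CLASSES):
    inter = ((pred == c) & (target == c)).sum().item()
    union = ((pred == c) | (target == c)).sum().item()
    print(f"class {c}: IoU = {inter / union:.3f}" if union else f"class {c}: absent")
```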
Through this dataset, the researchers hope to spur research interest in perception for maritime autonomy. The paper describes the hardware assembly used for data collection, elaborates on the dataset and segmentation methods, shares evaluation results for the three architectures, and concludes with a review of the state of the art in the maritime domain.
This article is written as a research summary by Marktechpost staff based on the research paper 'MassMIND: Massachusetts Maritime INfrared Dataset'. All credit for this research goes to the researchers on this project. Check out the paper and GitHub link.