Chosun University Researchers Introduce a Machine Learning Framework for Precise Localization of Bleached Corals Using Bag-of-Hybrid Visual Feature Classification
Coral reefs are often described as the most biodiverse marine environment on Earth, home to an estimated 25% of all marine life and more than 4,000 species of fish. Coral polyps build the calcium carbonate structures known as reefs and host symbiotic algae called zooxanthellae, which give corals their vivid colors. When water temperatures rise and the algae are expelled from the coral’s tissue, the reef bleaches. Coral reef bleaching is linked to a range of environmental and economic problems. Global warming is the primary driver of bleaching because of extremely high summertime sea surface temperatures (SST). In 2016, bleaching killed 29–50% of the coral on Australia’s Great Barrier Reef.
Bleaching also contributes to rising CO2 levels in the world’s oceans, making the water more acidic and making it harder for corals and other marine life to form skeletons. Reefs shelter a wide variety of marine life and contain many medicinal compounds that can treat some of the world’s most serious illnesses. Monitoring and surveying marine ecosystems is therefore necessary to mitigate the consequences of climate change. Artifacts and ambient noise in underwater images make it difficult for computer vision systems to separate the target object in the foreground from the background, which has motivated techniques for enhancing underwater images.
The integrated color model (ICM) and the unsupervised color correction method (UCM) improve contrast by first converting images into the HSI model and then stretching the saturation and intensity components. Researchers in artificial intelligence (AI) want a reliable and computationally efficient way to locate bleached coral reefs, but variations in lighting, scale, orientation, perspective, occlusion, and background clutter degrade the performance of localization models. The camera’s depth, the mount’s position, and the fluctuating light sources in the surveyed area are responsible for the changes in the object’s scale, perspective, and lighting, respectively.
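The article does not give implementation details, but the saturation- and intensity-stretching idea behind ICM/UCM can be sketched roughly as follows. This is a minimal approximation only: it uses OpenCV’s HSV color space as a stand-in for the HSI model and a plain linear stretch, not the exact ICM/UCM formulas.

```python
import cv2
import numpy as np

def stretch_channel(channel: np.ndarray) -> np.ndarray:
    """Linearly stretch a channel to the full 0-255 range."""
    lo, hi = int(channel.min()), int(channel.max())
    if hi == lo:  # avoid division by zero on flat channels
        return channel
    stretched = (channel.astype(np.float32) - lo) * 255.0 / (hi - lo)
    return stretched.astype(np.uint8)

def enhance_underwater_image(bgr: np.ndarray) -> np.ndarray:
    """ICM/UCM-style contrast enhancement (approximation).

    Converts to HSV (used here as a stand-in for HSI) and stretches
    the saturation and intensity/value components.
    """
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h, s, v = cv2.split(hsv)
    s = stretch_channel(s)  # saturation stretch
    v = stretch_channel(v)  # intensity stretch
    return cv2.cvtColor(cv2.merge([h, s, v]), cv2.COLOR_HSV2BGR)

# Usage with a hypothetical image file:
# enhanced = enhance_underwater_image(cv2.imread("coral.jpg"))
```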
Researchers from Chosun University aim to combine deep learning and handcrafted feature extraction methods that can withstand the geometric and photometric variations found in images of marine environments. Appearance-based features capture an object’s texture and color, while geometric features rely mainly on the local distribution of curves and edges that form the object’s shape within the image. Both kinds of features are affected by variations in lighting, scale, orientation, perspective, occlusion, and background clutter. In most classification tasks, manual feature extractors have been replaced by deep neural network (DNN) models.
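As an illustration only (the paper’s specific descriptors are not spelled out here), geometric cues can be approximated with a HOG descriptor, appearance cues with a color histogram, and the two concatenated into one raw handcrafted feature vector:

```python
import numpy as np
from skimage.color import rgb2gray
from skimage.feature import hog

def geometric_features(rgb: np.ndarray) -> np.ndarray:
    """Shape-oriented descriptor: HOG captures local edge/curve distributions."""
    gray = rgb2gray(rgb)
    return hog(gray, orientations=9, pixels_per_cell=(16, 16),
               cells_per_block=(2, 2), feature_vector=True)

def appearance_features(rgb: np.ndarray, bins: int = 16) -> np.ndarray:
    """Appearance descriptor: normalized per-channel color histogram."""
    hist = [np.histogram(rgb[..., c], bins=bins, range=(0, 255))[0]
            for c in range(3)]
    hist = np.concatenate(hist).astype(np.float32)
    return hist / (hist.sum() + 1e-8)

def hybrid_descriptor(rgb: np.ndarray) -> np.ndarray:
    """Concatenate geometric and appearance cues into one raw feature vector."""
    return np.concatenate([geometric_features(rgb), appearance_features(rgb)])
```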
Thanks to domain-independent representations learned from large datasets, deep neural networks (DNNs) such as ResNet, DenseNet, VGGNet, and Inception achieve strong performance across a wide range of applications. However, because existing datasets contain relatively few bleached examples, a DNN tends to overfit, which compromises the robustness and distinctiveness of its features. The robustness and distinctiveness of handcrafted features, by contrast, do not depend on the size of the training data, although their invariance is still affected by changes in depth, underwater lighting, and water turbidity. The project therefore aims to build an invariant feature extraction model that is resistant to geometric and photometric changes in coral images.
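One common way to obtain deep features without overfitting a small set of bleached-coral examples is to use a pretrained backbone as a frozen feature extractor. The sketch below assumes a torchvision ResNet-50 and is not necessarily the configuration the authors used.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# Pretrained ResNet-50 with its classification head removed, used as a
# frozen feature extractor (ImageNet weights are never updated).
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()      # drop the 1000-class head
backbone.eval()
for p in backbone.parameters():
    p.requires_grad = False

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def dnn_features(pil_image) -> torch.Tensor:
    """Return a 2048-D deep feature vector for one coral image patch."""
    x = preprocess(pil_image).unsqueeze(0)  # add batch dimension
    return backbone(x).squeeze(0)
```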
The proposed framework extracts raw features with hybrid handcrafted and DNN techniques, and a bag-of-features (BoF) stage then reduces their dimensionality and introduces additional invariance to increase classification accuracy. The model relies on local features from the image rather than global ones to improve photometric invariance. Moreover, the bag-of-features stage lowers the dimension of the raw hybrid feature vector, which reduces computational complexity and storage requirements. The optimal patch size, cluster size, kernel combination, and classifier were determined through extensive experimentation.
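A minimal sketch of a bag-of-features pipeline of this kind is shown below. The patch size, vocabulary (cluster) size, and SVM kernel are placeholder values standing in for the settings the authors tuned experimentally, and raw pixel patches stand in for the hybrid descriptors.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def extract_patches(image: np.ndarray, patch: int = 32, stride: int = 32):
    """Split an image into flattened local patches (patch size is a placeholder)."""
    h, w = image.shape[:2]
    return np.array([image[y:y + patch, x:x + patch].ravel()
                     for y in range(0, h - patch + 1, stride)
                     for x in range(0, w - patch + 1, stride)], dtype=np.float32)

def build_vocabulary(train_images, n_words: int = 100) -> KMeans:
    """Cluster local descriptors into a visual vocabulary (cluster count is a placeholder)."""
    all_desc = np.vstack([extract_patches(img) for img in train_images])
    return KMeans(n_clusters=n_words, n_init=10, random_state=0).fit(all_desc)

def bof_histogram(image: np.ndarray, vocab: KMeans) -> np.ndarray:
    """Encode an image as a normalized histogram of visual-word occurrences."""
    words = vocab.predict(extract_patches(image))
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(np.float32)
    return hist / (hist.sum() + 1e-8)

def train_classifier(train_images, labels, vocab: KMeans) -> SVC:
    """Fit an SVM (RBF kernel as a placeholder choice) on the BoF histograms."""
    X = np.array([bof_histogram(img, vocab) for img in train_images])
    return SVC(kernel="rbf", C=1.0).fit(X, labels)
```

The histogram encoding is what shrinks the raw hybrid feature vector to a fixed, vocabulary-sized representation, which is where the reduced storage and added invariance come from.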
Check out the Paper. All credit for this research goes to the researchers of this project.
Aneesh Tickoo is a consulting intern at MarktechPost. He is currently pursuing his undergraduate degree in Data Science and Artificial Intelligence from the Indian Institute of Technology (IIT), Bhilai. He spends most of his time working on projects aimed at harnessing the power of machine learning. His research interest is image processing, and he is passionate about building solutions around it. He loves to connect with people and collaborate on interesting projects.