NASA’s SpaceML Tool Introduces A New Six-Stage Pipeline To Automate The Classification Of Meteors From Non-Meteors Using Machine Learning
This summary is based on the article 'How CAMS, the Cameras for Allsky Meteor Surveillance Project, detects long-period comets through machine learning' and the paper 'SpaceML: Distributed Open-source Research with Citizen Scientists for the Advancement of Space Technology for NASA'.
Long-period comets (LPCs) are comets with orbital periods longer than 200 years. They are challenging to discover and, according to scientists, pose a significant threat to Earth’s ecosystem. For this reason, they have drawn considerable interest from researchers who aim to provide early warning of a possible impact.
The Cameras for All-sky Meteor Surveillance (CAMS) project came online in October 2010 with the installation of low-light camera systems at the Fremont Peak Observatory in California. Images were recorded from these cameras using a specially developed compression system and then passed through a software utility designed to detect meteors. This setup allowed researchers to spot evidence of long-period comets that other observation approaches could miss.
Despite the software, the monitoring procedure still required a significant amount of human intervention, and data was collected from the sites only every two months. Daily updates, in which a night’s observations are made available to the scientific community the next day for examination, would necessitate a different approach: machine learning.
The artificial intelligence pipeline now available to CAMS sites is designed to reduce the amount of work required of a human operator.
New research by SpaceML, an extension of the NASA Frontier Development Lab AI accelerator, introduces a six-stage pipeline. The researchers employed machine learning and deep learning techniques to enhance and automate the classification of meteors versus non-meteors. Their goal was to remove the human factor from the CAMS data-handling pipeline while maintaining the accuracy of the processed findings. The researchers describe the stages as follows:
1. First stage – Local processing: Local devices at operator sites that capture sky data perform on-site processing to evaluate whether a recorded object is a meteor or a non-meteor, the latter category covering clouds, planes, and birds that could trigger false detections. The models achieved precision and recall scores in the high-eighty and low-ninety percent range, respectively. The team used the following (an illustrative sketch follows this list):
- A random forest classifier that makes a binary meteor/non-meteor decision
- A convolutional neural network (CNN) that outputs a probability score for a series of image frames
- A long short-term memory (LSTM) network that predicts the likelihood that a light-curve tracklet corresponds to a meteor
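In spirit, the random forest branch of this first-stage screening could be sketched as below with scikit-learn. The feature set, synthetic data, and model settings are illustrative assumptions, not the actual CAMS code.

```python
# Minimal sketch of a stage-1 meteor / non-meteor screen, assuming scikit-learn
# and synthetic stand-in data; feature meanings and sizes are illustrative only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import precision_score, recall_score

rng = np.random.default_rng(0)

# Hypothetical per-tracklet features (e.g. duration, angular speed, peak
# brightness); labels: 1 = meteor, 0 = non-meteor (cloud, plane, bird).
X = rng.normal(size=(2000, 6))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=2000) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

pred = clf.predict(X_test)
print("precision:", precision_score(y_test, pred))
print("recall:", recall_score(y_test, pred))
```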
2. Second stage – Data retrieval: The data is then retrieved from the remote site, a step that previously required bi-monthly in-person visits to collect physical DVD media onto which the data had been burned.
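A nightly retrieval job replacing those visits might be scripted along the following lines; the host name, directory layout, and the choice of rsync over SSH are assumptions for illustration, not the project’s actual transfer mechanism.

```python
# Hypothetical nightly pull of a remote station's captures, replacing the
# bi-monthly DVD pickups. Host and paths are placeholders.
import subprocess
from datetime import date
from pathlib import Path

SITE_HOST = "camera-site.example.org"    # hypothetical remote station
REMOTE_DIR = "/data/cams/captures/"      # hypothetical capture directory
local_dir = Path("incoming") / f"{date.today():%Y%m%d}"
local_dir.mkdir(parents=True, exist_ok=True)

# Copy last night's compressed capture files over SSH with rsync.
subprocess.run(
    ["rsync", "-az", "--partial", f"{SITE_HOST}:{REMOTE_DIR}", str(local_dir)],
    check=True,
)
print("retrieved nightly data into", local_dir)
```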
3. Third stage – Processing: Processing is performed by Python scripts that drive and automate CAMS’ existing software stack, including MeteorCal, which calibrates the installed cameras against star observations.
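Driving an existing command-line tool from Python, in the spirit of this stage, might look like the sketch below; the executable name, its flag, and the file layout are hypothetical and do not reflect the real CAMS or MeteorCal interface.

```python
# Sketch of automating an external calibration step from Python; the
# "meteorcal" command and its arguments are invented for illustration.
import subprocess
from pathlib import Path

night_dir = Path("incoming/20211015")    # hypothetical capture folder

for capture in sorted(night_dir.glob("*.bin")):
    # Run the (assumed) calibration tool on each capture file and keep a log.
    result = subprocess.run(
        ["meteorcal", "--calibrate", str(capture)],
        capture_output=True, text=True,
    )
    capture.with_suffix(".log").write_text(result.stdout)
```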
4. Fourth stage – Coincidence calculation: This step takes confirmed meteors and combines data from several cameras to construct a trajectory, recognizing and automatically correcting irregularities in the video recordings that could lead to inaccuracies. The automation aims to reduce the human work involved by using classifiers that examine light-curve shapes and the maximum errors in geographic position.
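As a toy illustration of the coincidence idea, the sketch below pairs detections from two stations that occur within a short time window and discards solutions whose estimated position error is too large; the data fields, station names, and thresholds are invented for the example.

```python
# Toy coincidence check: match detections across stations by time and reject
# pairs whose estimated geographic position error exceeds a maximum.
from dataclasses import dataclass

@dataclass
class Detection:
    station: str
    t: float                 # seconds since midnight UTC
    position_err_km: float   # hypothetical per-solution error estimate

MAX_DT = 2.0       # seconds between stations to count as the same event
MAX_ERR_KM = 0.5   # reject solutions with a larger position error

def coincident_pairs(dets_a, dets_b):
    pairs = []
    for a in dets_a:
        for b in dets_b:
            same_event = abs(a.t - b.t) <= MAX_DT
            accurate = max(a.position_err_km, b.position_err_km) <= MAX_ERR_KM
            if same_event and accurate:
                pairs.append((a, b))
    return pairs

site_a = [Detection("Station A", 41503.2, 0.2)]
site_b = [Detection("Station B", 41504.0, 0.3)]
print(coincident_pairs(site_a, site_b))
```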
5. Fifth stage – Data clustering: This step identifies outbursts and new showers that could indicate the presence of a previously undiscovered long-period comet. The pipeline detects unidentified meteor shower groups and potential meteor outbursts by applying t-distributed Stochastic Neighbor Embedding (t-SNE), an unsupervised machine-learning technique, to the processed parameters, followed by density-based spatial clustering of applications with noise (DBSCAN) to identify the groups.
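A minimal version of this clustering stage can be sketched with scikit-learn’s t-SNE and DBSCAN implementations; the synthetic parameter matrix and the eps/min_samples settings below are placeholders, not the values used by SpaceML.

```python
# Sketch of stage-5 clustering: embed meteor parameters with t-SNE, then group
# them with DBSCAN. Synthetic data stands in for the real CAMS parameters.
import numpy as np
from sklearn.manifold import TSNE
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(1)

# Stand-in feature matrix: one row per meteor, columns such as radiant
# coordinates, geocentric velocity, and orbital elements (assumed).
params = rng.normal(size=(500, 5))

embedded = TSNE(n_components=2, perplexity=30, random_state=1).fit_transform(params)
labels = DBSCAN(eps=2.0, min_samples=10).fit_predict(embedded)

n_groups = len(set(labels)) - (1 if -1 in labels else 0)
print(f"{n_groups} candidate shower groups, {int((labels == -1).sum())} unclustered meteors")
```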
6. Sixth stage – Visualization: CAMS data that has passed through the five previous stages is transformed into a more accessible form to make it more widely available. This final stage places the data on a freely rotatable celestial sphere that can be viewed with custom-written JavaScript in any current web browser, including on smartphones and tablets.
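The hand-off to the browser-based viewer could be as simple as serializing radiant positions and cluster labels to JSON for client-side JavaScript to load; the field names and file name here are assumptions made for illustration.

```python
# Sketch of exporting clustered meteor data for a web-based sky viewer;
# the schema is hypothetical, not the actual CAMS visualization format.
import json

meteors = [
    {"ra_deg": 152.4, "dec_deg": 31.9, "cluster": 0},
    {"ra_deg": 271.2, "dec_deg": -12.3, "cluster": -1},  # -1 = unclustered
]

with open("nightly_meteors.json", "w") as fh:
    json.dump({"night": "2021-10-15", "meteors": meteors}, fh, indent=2)
```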
This capacity to compare present activity with historical data makes it simple to spot unexpected behavior and new showers. The proposed approach has also expanded the project’s reach to observation stations with as few as one or two cameras, processing their data through the same automated pipeline and allowing the project to broaden its sky-surveillance coverage.
References:
https://arxiv.org/abs/2012.10610
https://www.wevolver.com/article/how-cams-the-cameras-for-allsky-meteor-surveillance-project-detects-long-period-comets-through-machine-learning
https://spaceml.org/