Researchers at University of Arizona Introduce a New Method to Automatically Generate Radar-Camera Datasets for Deep Learning Applications

Source: https://ieeexplore.ieee.org/document/9690006/authors#authors

In recent years, scientists have developed a variety of systems that can detect and navigate around objects in their environment. Most of these systems rely on machine learning and deep learning algorithms that process radar data and require large amounts of labeled training data.

Despite the considerable advantages of radar over optical sensors, very few image datasets that include radar data are currently available for training. Labeling radar data is a time- and labor-intensive procedure, often carried out by manually comparing it against an image data stream acquired in parallel. Furthermore, many of the open-source radar datasets that do exist are difficult to adapt to different user applications.

To overcome this data scarcity, University of Arizona researchers have devised a new method for automatically generating datasets of labeled radar data and camera images. It labels the radar point cloud by applying an object detection algorithm (YOLO) to the camera image stream and an association technique (the Hungarian algorithm).

The approach rests on a simple idea: if the camera and radar observe the same object, an image-based object detection framework can label the radar data automatically, removing the need to inspect images by hand.
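As a rough illustration, the detection step might look like the sketch below. It assumes the ultralytics YOLO implementation and a particular pretrained weights file; the article does not specify which YOLO version or library the authors used.

```python
# Minimal sketch of the image-detection step, assuming the ultralytics
# YOLO package (the paper's exact YOLO version is not specified).
from ultralytics import YOLO

model = YOLO("yolov8n.pt")  # hypothetical choice of pretrained weights

def detect_objects(frame):
    """Run YOLO on a camera frame and return (class_name, bbox) pairs."""
    results = model(frame)[0]
    detections = []
    for box in results.boxes:
        cls_name = model.names[int(box.cls)]
        x1, y1, x2, y2 = box.xyxy[0].tolist()
        detections.append((cls_name, (x1, y1, x2, y2)))
    return detections
```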

The approach has three distinguishing capabilities: co-calibration, clustering, and association. The method co-calibrates the radar and the camera to determine how the position of an object detected by the radar translates into pixel coordinates on the camera image.
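In pinhole-camera terms, co-calibration amounts to projecting a 3-D radar return into the image plane. The sketch below assumes an extrinsic rotation R, translation t, and intrinsic matrix K obtained from an offline calibration; the authors' exact calibration procedure is not shown here.

```python
# Projecting a 3-D radar return into camera pixel coordinates
# using a standard pinhole model (calibration details assumed).
import numpy as np

def radar_to_pixel(p_radar, R, t, K):
    """Map a 3-D point in the radar frame to (u, v) pixel coordinates.

    p_radar : (3,) point in the radar coordinate frame
    R, t    : extrinsic rotation (3x3) and translation (3,), radar -> camera
    K       : camera intrinsic matrix (3x3)
    """
    p_cam = R @ p_radar + t   # radar frame -> camera frame
    uvw = K @ p_cam           # perspective projection
    return uvw[:2] / uvw[2]   # normalize by depth to get pixel coordinates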

They employed a density-based clustering scheme to detect and remove noise and stray radar returns, and to segregate the radar signals into clusters so that separate objects can be discriminated.
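DBSCAN is a common density-based scheme that fits this description, so the sketch below uses scikit-learn's implementation; the paper's exact algorithm and parameter values are assumptions here.

```python
# Density-based clustering of radar returns with DBSCAN (algorithm
# choice and eps/min_samples values are illustrative assumptions).
import numpy as np
from sklearn.cluster import DBSCAN

def cluster_radar_points(points, eps=0.5, min_samples=5):
    """Group radar returns (N x 3 array of x, y, z) into object clusters."""
    labels = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points)
    clusters = []
    for label in set(labels) - {-1}:  # label -1 marks stray/noise returns
        clusters.append(points[labels == label])
    return clusters
```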

For association, they employed the Hungarian algorithm (HA) both within and across frames. Within a single frame, the intra-frame HA links YOLO detections to co-calibrated radar clusters. The inter-frame HA then links radar clusters belonging to the same object across frames, so that radar data can still be labeled in frames where the optical sensor fails intermittently.
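The intra-frame step can be sketched with scipy's Hungarian-algorithm solver, matching projected radar-cluster centroids to YOLO box centers. The pixel-distance cost and the rejection threshold are assumptions; the paper may define the cost differently.

```python
# Intra-frame association via the Hungarian algorithm: match projected
# radar-cluster centroids to YOLO box centers by pixel distance.
import numpy as np
from scipy.optimize import linear_sum_assignment

def associate(cluster_centroids_px, yolo_box_centers_px, max_dist=50.0):
    """Return (cluster_idx, box_idx) pairs minimizing total pixel distance."""
    cost = np.linalg.norm(
        cluster_centroids_px[:, None, :] - yolo_box_centers_px[None, :, :],
        axis=2,
    )
    rows, cols = linear_sum_assignment(cost)
    # Reject matches beyond a plausible pixel distance (threshold assumed).
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] <= max_dist]
```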

Instead of relying solely on the point-cloud distribution or solely on the micro-Doppler data, they propose an effective 12-dimensional radar feature vector.
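The article does not list the 12 features, so the sketch below is a purely hypothetical illustration of packing point-cloud geometry and Doppler statistics into a fixed-length vector, not the authors' definition.

```python
# Hypothetical 12-dimensional feature vector combining cluster geometry
# with Doppler statistics (the paper's actual features are not given).
import numpy as np

def radar_feature_vector(cluster):
    """cluster: (N, 4) array of (x, y, z, doppler) per radar return."""
    xyz, doppler = cluster[:, :3], cluster[:, 3]
    return np.concatenate([
        xyz.mean(axis=0),                    # 3: centroid position
        xyz.std(axis=0),                     # 3: spatial spread per axis
        xyz.max(axis=0) - xyz.min(axis=0),   # 3: extent along each axis
        [doppler.mean(), doppler.std(), len(cluster)],  # 3: Doppler + count
    ])
```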

In the future, this approach could aid the automated production of both radar-camera and radar-only datasets. The researchers also demonstrated proof-of-concept classification techniques based on radar-camera sensor fusion as well as on data acquired by radar alone.

The team believes their work will make it possible to quickly analyze data and train deep-learning models for classifying or tracking objects using sensor fusion. Such models could improve the performance of a wide range of robotic systems, from autonomous vehicles to small robots.

Paper: https://ieeexplore.ieee.org/document/9690006

Reference: https://techxplore.com/news/2022-02-method-automatically-radar-camera-datasets-deep.html
