Google AI Introduces SANPO: A Multi-Attribute Video Dataset for Outdoor Human Egocentric Scene Understanding
For tasks like self-driving, an AI model must understand not only the 3D structure of roads and sidewalks but also identify and recognize street signs and stop lights. Autonomous vehicles make this easier with LiDAR sensors mounted on the car that capture 3D structure directly. Comprehending the environment from one's own viewpoint in this way is called egocentric scene understanding. The problem is that, outside the autonomous-driving domain, there are no publicly available datasets that generalize to egocentric human scene understanding.
Researchers at Google have introduced the SANPO (Scene understanding, Accessibility, Navigation, Pathfinding, Obstacle avoidance) dataset, a multi-attribute video dataset for human egocentric scene understanding. SANPO comprises both real-world and synthetic data, called SANPO-Real and SANPO-Synthetic, respectively. SANPO-Real covers diverse environments and includes footage from two stereo cameras to support multi-view methods; it contains 11.4 hours of video captured at 15 frames per second (FPS) with dense annotations.
SANPO is a large-scale video dataset for human egocentric scene understanding, consisting of more than 600K real-world and more than 100K synthetic frames with dense prediction annotations.
Google's researchers have prioritized privacy protection. They collected data in compliance with local, city, and state laws, and removed personally identifiable information, such as faces and vehicle license plates, before sending the data for annotation.
To compensate for the imperfections of real-world capture, such as motion blur and human annotation errors, SANPO-Synthetic was introduced to augment the real dataset. The researchers partnered with Parallel Domain to create a high-quality synthetic dataset optimized to match real-world conditions. SANPO-Synthetic consists of 1,961 sessions recorded with virtualized Zed cameras, evenly split between head-mounted and chest-mounted positions.
The synthetic dataset and part of the real dataset have been annotated with panoptic instance masks, which assign a class and an instance ID to each pixel. In SANPO-Real, only a few frames contain more than 20 instances per frame. By contrast, SANPO-Synthetic features many more instances per frame than the real dataset.
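To make the idea of a panoptic mask concrete, here is a minimal sketch of decoding one and counting instances. It assumes a common COCO-style encoding where each pixel stores `class_id * 1000 + instance_id`; SANPO's actual on-disk format may differ, so treat the constant and helper names as illustrative, not as the dataset's API.

```python
import numpy as np

# Assumed encoding: pixel value = class_id * LABEL_DIVISOR + instance_id
# (a common COCO-style convention; not necessarily SANPO's exact format).
LABEL_DIVISOR = 1000

def decode_panoptic(panoptic_map: np.ndarray):
    """Split a panoptic label map into a semantic map and an instance-ID map."""
    semantic = panoptic_map // LABEL_DIVISOR
    instance = panoptic_map % LABEL_DIVISOR
    return semantic, instance

def count_instances(panoptic_map: np.ndarray) -> int:
    """Count distinct (class, instance) pairs, treating instance 0 as 'stuff'."""
    _, instance = decode_panoptic(panoptic_map)
    return len(np.unique(panoptic_map[instance > 0]))
```

With this encoding, a per-frame instance count like the one discussed above is just `count_instances(mask)` over each annotated frame.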
Other important video datasets in this field include SCAND, MuSoHu, Ego4D, VIPSeg, and Waymo Open. Compared against these, SANPO is the first dataset to combine panoptic masks, depth, camera pose, multi-view stereo, and both real and synthetic data. Apart from SANPO, only Waymo Open provides both panoptic segmentation and depth maps.
The researchers trained two state-of-the-art models, BinsFormer (for depth estimation) and kMaX-DeepLab (for panoptic segmentation), on the SANPO dataset. They observed that the dataset is quite challenging for both dense prediction tasks. Moreover, both models achieve higher accuracy on the synthetic data than on the real data. This is mainly because real-world environments are more complex than synthetic ones, and because segmentation labels are more precise for synthetic data.
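To illustrate how depth-estimation quality on a dataset like this is typically measured, here is a sketch of the standard monocular depth metrics (absolute relative error, RMSE, and the δ < 1.25 threshold accuracy). These are conventional metrics in the depth-estimation literature, not necessarily the exact evaluation protocol used for SANPO.

```python
import numpy as np

def depth_metrics(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-6):
    """Standard depth-estimation metrics, computed over valid pixels only.

    Pixels with non-positive ground-truth depth are skipped, since sparse
    or missing depth is common in real stereo captures.
    """
    valid = gt > 0
    pred, gt = pred[valid], gt[valid]
    abs_rel = float(np.mean(np.abs(pred - gt) / gt))      # absolute relative error
    rmse = float(np.sqrt(np.mean((pred - gt) ** 2)))      # root mean squared error
    ratio = np.maximum(pred / (gt + eps), gt / (pred + eps))
    delta1 = float(np.mean(ratio < 1.25))                 # fraction within 25% of gt
    return {"abs_rel": abs_rel, "rmse": rmse, "delta1": delta1}
```

A perfect prediction yields `abs_rel = 0`, `rmse = 0`, and `delta1 = 1.0`; comparing these numbers between SANPO-Real and SANPO-Synthetic splits is one way to quantify the real-vs-synthetic accuracy gap described above.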
Introduced to tackle the lack of datasets for human egocentric scene understanding, SANPO is a significant advance that encompasses both real-world and synthetic data. Its dense annotations, multi-attribute features, and unique combination of panoptic segmentation and depth information set it apart from other datasets in the field. Furthermore, the researchers' privacy safeguards make the dataset suitable for building visual navigation systems for the visually impaired and for pushing the boundaries of visual scene understanding.
Check out the Paper and Google Blog. All credit for this research goes to the researchers on this project.
I am a Civil Engineering Graduate (2022) from Jamia Millia Islamia, New Delhi, and I have a keen interest in Data Science, especially Neural Networks and their application in various areas.