This Artificial Intelligence (AI) Paper From UC Berkeley Presents A General Navigation Model (GNM) From An Aggregated Multirobot Dataset To Drive Any Robot

Although their presence is not as significant as projected by the sci-fi movies of the 90s, robots are becoming essential to our daily lives, with applications across many industries and settings. In healthcare, for example, robots assist with surgeries, dispense medication, and support rehabilitation. In transportation, self-driving cars are becoming more widespread. Robots also appear in agriculture, construction, and even household chores. As technology advances, we can expect robots to play an even larger role in our daily lives.

When you think of an ideal robot, what probably comes to mind is something that can move freely and perform human-like movements. As much as we would like to see that happen, we are not there yet: robots still struggle to navigate many environments. Speaking of navigating, have you ever wondered how robots move around and find their way through their surroundings?

Robot navigation focuses on enabling robots to move around in a given environment. This can involve developing algorithms and systems that allow robots to navigate around obstacles, make decisions about their movements, and interact with their surroundings. 

Although the goal is always the same, navigating the environment with as few issues as possible, the problem itself is tricky due to the heterogeneous nature of robots. There is no standard robot design: think of the robots you have seen; how many of them looked alike? They differ in camera placement, sensor suites, wheels or legs, and more. As a result, navigation algorithms typically have to be designed specifically for the robot at hand. This makes applying machine learning methods problematic, as constructing a large-scale dataset from a single type of robot is not feasible, and such data would not be enough to train complex models.

Most modern machine learning models require a large-scale dataset to train. Internet-scale datasets enabled the huge leaps in natural language processing with transformers, in computer vision with diffusion models, and more. Without large-scale data, pushing the boundaries further is next to impossible.

So, how can we tackle this heterogeneity in robot navigation datasets? How can we utilize all the data collected on different robots to develop a better solution? These were the questions the authors of GNM asked, and they came up with a brilliant solution: a general navigation model to drive any robot.

“A wheeled robot, quadruped, or a drone all have the same abstract objectives: to explore the environment, plan a path to the goal, and avoid collisions.” This quote from the paper captures the idea behind GNM perfectly. The shared objective makes it possible to train a general navigation policy on large-scale data, one that can generalize to novel environments, unseen sensor parameters, and new robot configurations.

GNM proposes a general omnipolicy trained on a multi-robot dataset to navigate robots in different settings. The authors collected a large, heterogeneous dataset of navigation trajectories from six different robots, in both indoor and outdoor environments, trained GNM on it, and deployed the resulting policy on four different robot platforms.
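To make trajectories from such different platforms comparable, actions can be expressed as egocentric waypoints scaled by each robot's capabilities. The sketch below is a minimal illustration of this idea, assuming hypothetical per-robot metadata (top speed and control rate); the exact normalization used in the paper may differ.

import numpy as np

# Hypothetical per-robot metadata; values are illustrative, not from the paper.
ROBOT_SPECS = {
    "wheeled_rc_car": {"max_speed": 2.0, "control_hz": 5.0},
    "quadruped":      {"max_speed": 1.0, "control_hz": 5.0},
    "drone":          {"max_speed": 3.0, "control_hz": 5.0},
}

def normalize_waypoints(waypoints_xy: np.ndarray, robot: str) -> np.ndarray:
    """Scale egocentric (x, y) waypoints by the distance the robot can cover
    in one control step, mapping all platforms into a shared action space."""
    spec = ROBOT_SPECS[robot]
    step_dist = spec["max_speed"] / spec["control_hz"]  # meters per control step
    return waypoints_xy / step_dist

With such a scheme, a fast RC car and a slow quadruped that both "move one step forward" produce the same normalized action, which is what lets their data be pooled into one training set.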

Training follows a standard goal-conditioned supervised learning setup on these offline trajectories. However, two modifications are made so that it works on a multi-robot dataset. First, predictions are made in a normalized action space, a shared abstraction across robots. Second, an embodiment context is used to condition the policy on the capabilities of the specific robot.
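A minimal sketch of how such conditioning might look, in PyTorch. The dimensions and module structure are assumptions for illustration, not the authors' exact architecture: the policy consumes encoded current and goal observations plus an embodiment context vector (e.g., a summary of the robot's recent observations), and predicts normalized waypoints along with a temporal distance to the goal.

import torch
import torch.nn as nn

class ConditionedNavPolicy(nn.Module):
    """Illustrative goal-conditioned policy with an embodiment context.
    Dimensions and structure are assumptions, not the paper's architecture."""

    def __init__(self, obs_dim=512, ctx_dim=128, n_waypoints=5):
        super().__init__()
        self.n_waypoints = n_waypoints
        self.head = nn.Sequential(
            nn.Linear(obs_dim * 2 + ctx_dim, 256), nn.ReLU(),
            nn.Linear(256, n_waypoints * 2 + 1),  # (x, y) per waypoint + distance
        )

    def forward(self, obs_feat, goal_feat, embodiment_ctx):
        out = self.head(torch.cat([obs_feat, goal_feat, embodiment_ctx], dim=-1))
        waypoints = out[:, :-1].view(-1, self.n_waypoints, 2)  # normalized action space
        dist = out[:, -1]  # predicted temporal distance to the goal
        return waypoints, dist

Because the predicted waypoints live in the normalized action space, each robot's low-level controller can rescale them to its own speed and turning limits at deployment time.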

GNM is a solid step forward in sharing data among different robots, and it showed promising results across a range of settings.


Check out the Paper, GitHub, and Project page. All credit for this research goes to the researchers on this project.


Ekrem Çetinkaya received his B.Sc. in 2018 and M.Sc. in 2019 from Ozyegin University, Istanbul, Türkiye. He wrote his M.Sc. thesis on image denoising using deep convolutional networks. He is currently pursuing a Ph.D. degree at the University of Klagenfurt, Austria, and working as a researcher on the ATHENA project. His research interests include deep learning, computer vision, and multimedia networking.

