MIT uses liquid neural networks to teach drones navigation skills

A team of researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has introduced a method for drones to master vision-based fly-to-target tasks in intricate and unfamiliar environments. The team used liquid neural networks that continuously adapt to new data inputs. 

MIT CSAIL’s team found that the liquid neural networks performed strongly in making reliable decisions in unknown domains like forests, urban landscapes and environments with added noise, rotation and occlusion. The networks even outperformed many state-of-the-art counterparts in navigation tasks, and the team hopes they could enable real-world drone applications like search and rescue, delivery and wildlife monitoring. 

“We are thrilled by the immense potential of our learning-based control approach for robots, as it lays the groundwork for solving problems that arise when training in one environment and deploying in a completely distinct environment without additional training,” Daniela Rus, CSAIL director and the Andrew (1956) and Erna Viterbi Professor of Electrical Engineering and Computer Science at MIT, said. “Our experiments demonstrate that we can effectively teach a drone to locate an object in a forest during summer, and then deploy the model in winter, with vastly different surroundings, or even in urban settings, with varied tasks such as seeking and following. This adaptability is made possible by the causal underpinnings of our solutions. These flexible algorithms could one day aid in decision-making based on data streams that change over time, such as medical diagnosis and autonomous driving applications.”

The team’s new class of machine-learning algorithms captures the causal structure of tasks from high-dimensional, unstructured data, like pixel inputs from a drone-mounted camera. The liquid neural networks then extract the crucial aspects of the task and ignore irrelevant features, allowing acquired navigation skills to transfer seamlessly to new targets and environments. 
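As a rough illustration of the kind of perception-to-control pipeline described here, the sketch below compresses camera pixels into a compact feature vector with a small convolutional encoder and maps those features to flight commands through a recurrent head, trained to imitate recorded pilot commands. The layer sizes, the plain GRU cell standing in for the liquid layer (a liquid-cell sketch appears after the training paragraph below) and the four-value command output are assumptions made so the example runs, not the authors’ architecture.

```python
# Hedged sketch of a pixels -> features -> recurrent state -> flight commands
# pipeline, trained by imitation. All sizes, the GRUCell placeholder for the
# liquid layer, and the 4-value command vector are illustrative assumptions.
import torch
import torch.nn as nn

class FlyToTargetPolicy(nn.Module):
    def __init__(self, hidden=32, n_commands=4):
        super().__init__()
        # Convolutional encoder: raw frames -> compact feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Recurrent head; a GRU cell stands in for the liquid layer here
        # purely so the sketch runs with stock PyTorch.
        self.rnn = nn.GRUCell(32, hidden)
        self.head = nn.Linear(hidden, n_commands)

    def forward(self, frames, h=None):
        # frames: (time, 3, H, W) sequence from the drone's camera.
        commands = []
        for frame in frames:
            z = self.encoder(frame.unsqueeze(0))   # (1, 32) features
            h = self.rnn(z, h)                     # update recurrent state
            commands.append(self.head(h))          # (1, n_commands)
        return torch.cat(commands), h

# Imitation-learning-style update against pilot-recorded commands (dummy data).
policy = FlyToTargetPolicy()
frames = torch.randn(8, 3, 64, 64)   # 8 synthetic camera frames
pilot_cmds = torch.randn(8, 4)       # matching expert commands
pred, _ = policy(frames)
loss = nn.functional.mse_loss(pred, pilot_cmds)
loss.backward()
```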

In their research, the team found that liquid networks offered promising preliminary indications of their ability to address a crucial weakness in deep machine-learning systems. Many machine learning systems struggle with capturing causality, frequently over-fit their training data and fail to adapt to new environments or changing conditions. These problems are especially prevalent for resource-limited embedded systems, like aerial drones, that need to traverse varied environments and respond to obstacles instantaneously. 

The system was first trained on data collected by a human pilot, then evaluated on how well it transferred the learned navigation skills to new environments under drastic changes in scenery and conditions. Traditional neural networks only learn during the training phase, while liquid neural networks have parameters that can change over time. This makes them interpretable and resilient to unexpected or noisy data. 
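To make the “parameters that can change over time” idea concrete, here is a minimal sketch based on the liquid time-constant (LTC) formulation behind these networks: the hidden state follows an ordinary differential equation whose effective time constant depends on the current input. The specific weights, sizes and Euler integration step below are illustrative assumptions, not the published model.

```python
# Minimal sketch (not the authors' code) of a liquid time-constant (LTC) cell.
# The hidden state x follows an ODE whose effective time constant depends on
# the current input, so the cell's dynamics keep adapting to incoming data
# even after training. Sizes, weights and the Euler step are assumptions.
import numpy as np

class LTCCell:
    def __init__(self, n_inputs, n_hidden, tau=1.0, dt=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in = rng.normal(0.0, 0.5, (n_hidden, n_inputs))   # input weights
        self.W_rec = rng.normal(0.0, 0.5, (n_hidden, n_hidden))  # recurrent weights
        self.b = np.zeros(n_hidden)                              # bias
        self.A = np.ones(n_hidden)                               # target (reversal) state
        self.tau = tau                                           # base time constant
        self.dt = dt                                             # Euler integration step

    def step(self, x, u):
        # Input- and state-dependent gate in (0, 1).
        f = 1.0 / (1.0 + np.exp(-(self.W_in @ u + self.W_rec @ x + self.b)))
        # LTC dynamics: dx/dt = -x / tau + f * (A - x).
        # The gate f shifts the effective time constant 1 / (1/tau + f)
        # with every new input -- the "liquid" part of the network.
        dxdt = -x / self.tau + f * (self.A - x)
        return x + self.dt * dxdt

# Roll the cell over a short random input sequence.
cell = LTCCell(n_inputs=4, n_hidden=8)
x = np.zeros(8)
for u in np.random.default_rng(1).normal(size=(20, 4)):
    x = cell.step(x, u)
print(x)
```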

In a series of quadrotor closed-loop control experiments, MIT CSAIL’s drones underwent range tests, stress tests, target rotation and occlusion, hiking with adversaries, triangular loops between objects and dynamic target tracking. The drones tracked moving targets and executed multi-step loops between objects in entirely new environments. 

MIT CSAIL’s team hopes that the drones’ ability to learn from limited expert data and understand a given task while generalizing to new environments could make autonomous drone deployment more efficient, cost-effective and reliable. Liquid neural networks could also enable autonomous air mobility drones to serve as environmental monitors, package deliverers, autonomous vehicles and robotic assistants. 

The research was published in Science Robotics. MIT CSAIL Research Affiliate Ramin Hasani and Ph.D. student Makram Chahine; Patrick Kao ’22, MEng ’22; and Ph.D. student Aaron Ray SM ’21 wrote the paper with Ryan Shubert ’20, MEng ’22; MIT postdocs Mathias Lechner and Alexander Amini; and Rus. The research was partially funded by Schmidt Futures, the U.S. Air Force Research Laboratory, the U.S. Air Force Artificial Intelligence Accelerator, and the Boeing Co. 
