Researchers at The University of Luxembourg Develop a Method to Learn Grasping Objects on the Moon from 3D Octree Observations with Deep Reinforcement Learning

The goal of planetary exploration is to advance science by revealing new information about the geology and resource potential of other worlds. Extraterrestrial robotic systems are crucial for acquiring samples, whether for in-situ analysis or for return to Earth. Surveys of local resources are also needed for future in-situ resource utilization, such as extracting hydrogen and oxygen to produce rocket fuel locally and to run life-support systems.

This would reduce the payload required for the initial launch from Earth and lessen the need for further resupply flights. A growing amount of work is going into sample-return missions that could deliver this information. Lunar material was recently brought back to Earth, and NASA has selected commercial partners to collect Moon rocks in support of the Artemis program. Another mission concept, Mars Sample Return, would use an ESA rover to retrieve samples currently being collected by NASA's Perseverance rover.

Unfortunately, remote teleoperation is ineffective due to transmission lag, which limits the quantity of scientific data rovers can collect over the course of a mission. Therefore, as missions become more sophisticated, planetary rovers require greater autonomy. Rovers equipped with robotic arms have numerous possible uses in extraterrestrial settings.

Such rovers could perform assembly and maintenance work by engaging with various tools and technical equipment, in addition to positioning scientific instruments to closely examine regions of interest. Many of the subroutines involved in such tasks require that an item or tool be securely held before being used. Flexible mobile manipulation therefore depends on a fundamental skill: robotic grasping. To achieve this flexibility, rovers must be able to grip a variety of objects that differ in geometry, appearance, and mechanical properties.

According to a recent publication from researchers at the University of Luxembourg, vision-based robotic grasping in lunar environments can be accomplished with end-to-end deep reinforcement learning. The paper's main objective is to develop end-to-end policies for robotic grasping in unstructured lunar settings with varied rock types, uneven terrain, and harsh lighting. Because of the high cost and safety demands of robotic space systems, training agents directly in extraterrestrial settings is impractical. The team's solution was to train in simulation and transfer the learned policies to a real robot.
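To make policies trained in simulation survive the transfer to a real robot, the simulation randomizes conditions between training episodes (as the contribution list below describes). A minimal sketch of such an episode sampler is shown here; the parameter names and ranges are illustrative assumptions for this summary, not values taken from the paper.

```python
import random

def sample_episode_params(rng=None):
    """Sample hypothetical domain-randomization parameters for one episode.

    The paper's simulation randomizes terrain, rock models, and lighting;
    the exact names and ranges below are illustrative only.
    """
    rng = rng or random.Random()
    return {
        "sun_elevation_deg": rng.uniform(5.0, 60.0),   # low-angle, harsh lunar light
        "sun_intensity": rng.uniform(0.5, 1.5),        # relative brightness
        "terrain_roughness": rng.uniform(0.0, 0.1),    # heightmap noise amplitude (m)
        "num_rocks": rng.randint(1, 6),                # graspable objects per scene
        "rock_scale": rng.uniform(0.03, 0.10),         # rock size (m)
        "camera_noise_std": rng.uniform(0.0, 0.01),    # depth-sensor noise (m)
    }
```

Drawing a fresh parameter set at the start of every training episode forces the policy to rely on features that hold across the whole distribution rather than on quirks of any single rendered scene.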

The main contributions of this work are as follows: 

• A simulation of the Moon that, thanks to its realistic physics, physically-based rendering, and extensive use of domain randomization with procedurally-generated datasets for simulating the wide range of lunar conditions, enables the learning of mobile manipulation skills that are transferable to the real-world domain.

• A novel method for using multi-channel features in 3D octree visual observations for end-to-end deep reinforcement learning. Octrees represent the 3D world efficiently, and an octree-based convolutional neural network extracts abstract features that enable agents to generalize over spatial positions and orientations.

• A demonstration of learning robotic grasping inside a true-to-life Moon simulation environment, followed by a zero-shot sim-to-real transfer to a real robot in a Moon-analogue facility.
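The octree observations mentioned above can be pictured with a toy example: recursively subdivide a cubic volume, expanding only occupied cells and storing per-leaf features such as occupancy and mean color. The sketch below is a plain-Python illustration of the data structure, not the paper's GPU implementation.

```python
import numpy as np

def build_octree(points, colors, center, half, depth, max_depth=3):
    """Build a nested-dict octree node for the points inside a cube.

    Each node stores multi-channel leaf features (occupancy, mean color)
    and subdivides into 8 children only while it contains points.
    """
    mask = np.all(np.abs(points - center) <= half, axis=1)
    pts, cols = points[mask], colors[mask]
    node = {
        "occupied": len(pts) > 0,
        "mean_color": cols.mean(axis=0) if len(cols) else None,
        "children": None,
    }
    if node["occupied"] and depth < max_depth:
        node["children"] = []
        for dx in (-0.5, 0.5):
            for dy in (-0.5, 0.5):
                for dz in (-0.5, 0.5):
                    child_center = center + half * np.array([dx, dy, dz])
                    node["children"].append(
                        build_octree(pts, cols, child_center, half / 2,
                                     depth + 1, max_depth))
    return node

def count_occupied_leaves(node):
    """Count leaves that actually contain points."""
    if node["children"] is None:
        return int(node["occupied"])
    return sum(count_occupied_leaves(c) for c in node["children"])
```

Because empty branches are never expanded, a mostly-empty lunar scene needs far fewer cells than a dense voxel grid at the same resolution, which is what makes octrees an efficient 3D representation for learning.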

The experimental analysis shows that 3D visual observations in the form of octrees outperform image-based observations for end-to-end learning of robotic grasping. This outcome is explained by the fact that 3D convolutions generalize over spatial positions and orientations more effectively than 2D convolutions, which generalize primarily over planar image coordinates. Another benefit of 3D observations is that they can be made invariant to the camera pose, which makes it easier to transfer learned policies to new systems or application domains.
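The generalization-over-position argument rests on a basic property of convolution: translating the input translates the response without changing it. The dense numpy sketch below illustrates this for a 3D volume; the paper itself uses an octree-based CNN rather than dense convolutions, so this is only a toy demonstration of the underlying property.

```python
import numpy as np

def conv3d_valid(vol, kernel):
    """Naive 'valid' 3D cross-correlation of a volume with a kernel."""
    kx, ky, kz = kernel.shape
    out = np.zeros(tuple(v - k + 1 for v, k in zip(vol.shape, kernel.shape)))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            for k in range(out.shape[2]):
                out[i, j, k] = np.sum(vol[i:i + kx, j:j + ky, k:k + kz] * kernel)
    return out

def place_block(shape, corner, size=2):
    """Put a cubic 'rock' of ones at the given corner of an empty grid."""
    vol = np.zeros(shape)
    x, y, z = corner
    vol[x:x + size, y:y + size, z:z + size] = 1.0
    return vol
```

Running the same kernel over the same block placed at two different positions yields an identical peak response whose location simply shifts with the block, i.e. the filter "recognizes" the shape regardless of where it sits in the scene.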

Conclusion

In this paper, researchers from the University of Luxembourg presented an end-to-end deep reinforcement learning method for robotic grasping on the Moon. They examined the use of 3D octree observations and evaluated their effectiveness against 2D images. By demonstrating zero-shot sim-to-real transfer to a real robot in a Moon-analogue facility, they also investigated the effects of domain randomization under lunar conditions. Despite the many difficulties involved, the team believes deep reinforcement learning is a promising technique for teaching space robots to manipulate objects. Improving learning stability across varied conditions is one of the key steps before such techniques can be reliably used for a wide variety of applications in space robotics.

This article is written as a research summary by Marktechpost Staff based on the research paper 'Learning to Grasp on the Moon from 3D Octree Observations with Deep Reinforcement Learning'. All credit for this research goes to the researchers on this project. Check out the preprint/under-review paper and GitHub link.



Nitish is a computer science undergraduate with a keen interest in the field of deep learning. He has worked on various deep learning projects and closely follows new advancements in the field.


