A team of researchers at the University of California San Diego has developed a new system of algorithms that enables four-legged robots to walk and run in the wild. The robots can navigate complex, challenging terrain while avoiding both static and moving obstacles.
The team carried out tests in which the system guided a robot to maneuver quickly and autonomously across sandy surfaces, gravel, grass, and bumpy dirt hills covered with branches and fallen leaves, all while avoiding poles, trees, shrubs, boulders, benches, and people. The robot also navigated a busy office space without colliding with obstacles.
Building Efficient Legged Robots
The new system means researchers are closer than ever to building efficient robots for search and rescue missions, or robots for collecting information in spaces that are hard to reach or dangerous for humans.
The work is set to be presented at the 2022 International Conference on Intelligent Robots and Systems (IROS) from October 23 to 27 in Kyoto, Japan.
The system makes the robot more versatile by combining its sense of sight with proprioception, a second sensing modality that covers the robot’s sense of movement, direction, speed, location, and touch.
Most current approaches to training legged robots to walk and navigate rely on either proprioception or vision, but not both at the same time.
Combining Proprioception With Computer Vision
“In one case, it’s like training a blind robot to walk by just touching and feeling the ground. And in the other, the robot plans its leg movements based on sight alone. It is not learning two things at the same time,” said Xiaolong Wang, a professor of electrical and computer engineering at the UC San Diego Jacobs School of Engineering. “In our work, we combine proprioception with computer vision to enable a legged robot to move around efficiently and smoothly, while avoiding obstacles, in a variety of challenging environments, not just well-defined ones.”
The system developed by the team relies on a special set of algorithms to fuse data from real-time images, taken by a depth camera on the robot’s head, with data from sensors on the robot’s legs.
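The paper itself details the actual architecture; purely as an illustration, a fusion policy of this general shape might look like the following PyTorch sketch. The module names, layer sizes, and dimensions are assumptions made for the example, not details from the team’s system.

```python
# A minimal sketch of vision-proprioception fusion, assuming a PyTorch setup.
# All names and dimensions are illustrative assumptions.
import torch
import torch.nn as nn

class FusionPolicy(nn.Module):
    def __init__(self, proprio_dim=48, embed_dim=64, action_dim=12):
        super().__init__()
        # Small CNN encoder for the depth image from the head-mounted camera.
        self.depth_encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2), nn.ReLU(),
            nn.Flatten(),
            nn.LazyLinear(embed_dim), nn.ReLU(),
        )
        # MLP encoder for proprioceptive readings from the leg sensors
        # (joint angles, joint velocities, contact signals, and so on).
        self.proprio_encoder = nn.Sequential(
            nn.Linear(proprio_dim, embed_dim), nn.ReLU(),
        )
        # The concatenated features drive the action head
        # (for example, target positions for 12 leg joints).
        self.head = nn.Sequential(
            nn.Linear(2 * embed_dim, 128), nn.ReLU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, depth_image, proprio_state):
        z_img = self.depth_encoder(depth_image)      # (batch, embed_dim)
        z_pro = self.proprio_encoder(proprio_state)  # (batch, embed_dim)
        return self.head(torch.cat([z_img, z_pro], dim=-1))

policy = FusionPolicy()
action = policy(torch.randn(1, 1, 64, 64), torch.randn(1, 48))  # shape (1, 12)
```

The key design point is that both modalities are encoded into a shared feature space before the policy chooses an action, so neither sight nor touch alone has to carry the full picture of the terrain.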
However, Wang said that this was a complex task.
“The problem is that during real-world operation, there is sometimes a slight delay in receiving images from the camera so the data from the two different sensing modalities do not always arrive at the same time,” he explained.
The team addressed this challenge by simulating the mismatch, randomizing the relative timing of the two sets of inputs during training. The researchers refer to this technique as multi-modal delay randomization, and they used the randomized inputs to train a reinforcement learning policy. The approach enabled the robot to make decisions quickly while navigating and to anticipate changes in its environment. These abilities allowed the robot to move and maneuver around obstacles faster on different types of terrain, all without assistance from a human operator.
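To make the idea concrete, the snippet below sketches one way such a delay could be randomized in a simulated training loop: camera frames are buffered, and a randomly stale frame is paired with the current leg-sensor readings. The class name, buffer size, and delay range are hypothetical, not taken from the team’s code.

```python
# A minimal sketch of multi-modal delay randomization in simulation.
# The buffer size and delay range are illustrative assumptions.
import random
from collections import deque

class DelayRandomizedCamera:
    """Serves a randomly stale depth frame to mimic real-world camera
    latency, while proprioceptive readings stay current."""

    def __init__(self, max_delay_steps=4):
        self.max_delay = max_delay_steps
        self.buffer = deque(maxlen=max_delay_steps + 1)

    def observe(self, latest_frame):
        self.buffer.append(latest_frame)
        # Sample a fresh delay each step; early on, the buffer may be short.
        delay = random.randint(0, min(self.max_delay, len(self.buffer) - 1))
        return self.buffer[-1 - delay]

# During training, the policy sees (stale frame, current proprioception)
# pairs, so it learns to act despite mismatched arrival times:
#
#     camera = DelayRandomizedCamera(max_delay_steps=4)
#     for step in range(num_steps):
#         stale_frame = camera.observe(sim.render_depth())
#         action = policy(stale_frame, sim.read_proprioception())
```

Because the delay is resampled at every step rather than fixed, the policy cannot overfit to any single latency and is more robust when real camera timing drifts on hardware.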
The team will now look to make legged robots more versatile so they can operate on even more complex terrains.
“Right now, we can train a robot to do simple motions like walking, running and avoiding obstacles,” Wang said. “Our next goals are to enable a robot to walk up and down stairs, walk on stones, change directions and jump over obstacles.”