An MIT research team has developed an AI technique that allows robots to manipulate objects with their entire hand or body, instead of just their fingertips.
When a person picks up a box, they typically use their entire hands to lift it, then brace it with their forearms and chest to keep it steady while they carry it somewhere else. This is called whole-body manipulation, and it’s something that robots struggle with.
For a robot, every spot where the box could touch its fingers, arms, or torso is a contact event that it has to reason about. That leaves billions of potential contact events to consider, which makes planning whole-body tasks extremely complicated. The process of figuring out the best way to move an object through contact is called contact-rich manipulation planning.
However, MIT researchers have found a way to simplify this process using an AI technique called smoothing, paired with an algorithm built by the team. Smoothing averages away contact events that aren’t important to the task, summarizing the huge number of possibilities into a much smaller set of decisions. This allows even a simple algorithm to quickly devise an effective manipulation plan.
Many robots learn how to handle objects through reinforcement learning, a machine-learning technique in which an agent learns to complete a task for a reward through trial and error. With this approach, the system has to learn everything about the world by experimenting on its own.
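To make that trial-and-error loop concrete, here is a minimal, purely illustrative Python sketch. The actions, rewards, and numbers are invented for this example and are not taken from the MIT system: an agent repeatedly tries candidate contact locations, observes a noisy reward, and nudges its value estimates toward whatever worked.

```python
import random

# Hypothetical toy problem: the "robot" must pick one of a few candidate
# contact locations; some yield a more stable lift (a higher reward).
TRUE_REWARD = {"fingertip": 0.2, "palm": 0.6, "forearm": 0.9}

# Value estimates the agent learns purely by trial and error.
values = {action: 0.0 for action in TRUE_REWARD}
learning_rate = 0.1
epsilon = 0.2  # fraction of the time the agent explores a random action

for step in range(2000):
    # Explore occasionally; otherwise exploit the current best estimate.
    if random.random() < epsilon:
        action = random.choice(list(values))
    else:
        action = max(values, key=values.get)

    # Noisy reward from the environment (the "trial").
    reward = TRUE_REWARD[action] + random.gauss(0.0, 0.1)

    # Nudge the estimate toward the observed reward (the "error" correction).
    values[action] += learning_rate * (reward - values[action])

print(values)  # the "forearm" contact should end up with the highest estimate
```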
With billions of contact points to try, reinforcement learning requires a huge amount of computation; it can be effective given enough time, but that cost makes it a poor fit for contact-rich manipulation planning.
Reinforcement learning does, however, perform smoothing implicitly: by trying many different contact points and computing a weighted average of the outcomes, it averages away unimportant details, which is part of what makes it so effective at teaching robots.
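That weighted-average idea can be written down directly. The short Python sketch below is only an illustration under invented assumptions (the one-dimensional contact_outcome function is made up for this example): it samples many perturbed contact points, averages the outcomes, and shows that the smoothed value changes gradually even though the raw outcome jumps abruptly the moment contact is made.

```python
import random

def contact_outcome(x):
    """Toy, discontinuous outcome: the push only succeeds (reward 1.0)
    when the contact point x actually lands on the object, i.e. x >= 0."""
    return 1.0 if x >= 0.0 else 0.0

def smoothed_outcome(x, noise_std=0.3, samples=500):
    """Randomized smoothing: try many slightly perturbed contact points
    and average the results -- the kind of weighted average that
    reinforcement learning performs implicitly."""
    total = 0.0
    for _ in range(samples):
        total += contact_outcome(x + random.gauss(0.0, noise_std))
    return total / samples

# The raw outcome jumps from 0 to 1 at x = 0, but the smoothed version
# rises gradually, giving a planner a usable slope to follow.
for x in [-0.6, -0.3, 0.0, 0.3, 0.6]:
    print(f"x={x:+.1f}  raw={contact_outcome(x):.1f}  smoothed={smoothed_outcome(x):.2f}")
```

Because the smoothed curve rises gradually instead of jumping all at once, a simple planner can follow it toward promising contact locations.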
The MIT research team drew on this insight to build a simple model that performs this kind of smoothing, enabling the system to focus on core robot-object interactions and predict long-term behavior.
The team then combined their model with an algorithm that can rapidly search through all the possible decisions a robot could make. Together, the smoothing model and the search algorithm needed only about a minute of computation time on a standard laptop.
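As a rough illustration of that division of labor (again with invented functions and numbers, not the team’s actual model or planner), the sketch below scores a handful of candidate contact points with a smoothed model and has a simple search keep the best one: smoothing makes each candidate cheap and informative to evaluate, and the search sweeps through the options.

```python
import random

def smoothed_score(contact_point, target=0.4, noise_std=0.2, samples=200):
    """Toy smoothed model: average the success of many perturbed contacts.
    Success here just means landing within 0.1 of an invented target."""
    hits = 0
    for _ in range(samples):
        x = contact_point + random.gauss(0.0, noise_std)
        hits += abs(x - target) < 0.1
    return hits / samples

def search_best_contact(candidates):
    """Very simple search: score every candidate decision with the smoothed
    model and keep the best one. A real planner searches over sequences of
    decisions, but the division of labor is similar."""
    return max(candidates, key=smoothed_score)

candidates = [i / 10 for i in range(-10, 11)]  # candidate contact points
best = search_best_contact(candidates)
print(f"best candidate contact point: {best:+.1f}")
```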
While this project is still in its early stages, the method could eventually allow factories to deploy smaller, mobile robots that manipulate objects with their entire bodies, rather than large robotic arms that grasp only with their fingertips.
While the model showed promising results when tested in simulation, it cannot handle very dynamic motions, like objects falling. This is one of the issues that the team hopes to continue to address in future research.
The team included co-lead authors H.J. Terry Suh, an electrical engineering and computer science (EECS) graduate student, and Tao Pang Ph.D. ’23, a roboticist at the Boston Dynamics AI Institute; Lujie Yang, an EECS graduate student; and senior author Russ Tedrake, the Toyota Professor of EECS, Aeronautics and Astronautics, and Mechanical Engineering, and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). The team’s research was funded, in part, by Amazon, MIT Lincoln Laboratory, the National Science Foundation, and the Ocado Group.