Meta AI’s New Study Teaches Robots Human-Like Capabilities Such As Touch To Perceive, Understand, And Interact With The Real World

Source: https://ai.facebook.com/blog/teaching-robots-to-perceive-understand-and-interact-through-touch

Humans can recognize objects even when blindfolded because other senses, such as touch, compensate. Touch is a central sensing modality through which we experience the world. Bringing similar capabilities to robots would be a significant breakthrough in the field of Artificial Intelligence. It is essential to nurture an ecosystem for tactile sensing in robotics so that robots can perceive, learn from, and interact with the world.

Tactile sensing is an instrumental modality for robotic manipulation. It yields information that is not accessible via remote sensors such as cameras or lidar. Touch is especially crucial in unstructured environments, where the robot’s internal representation of manipulated objects is unreliable.

A study by Facebook AI outlines the framework for the touch-sensing ecosystem required to build AI systems that can understand and interact through touch. Hardware, simulators, libraries, benchmarks, and data sets form the backbone of such an ecosystem. They make it possible to train robots to extract vital information from touch and to combine it with other readily available sensory information to perform tasks of greater complexity and functionality.

DELVING DEEPER INTO THE PILLARS OF THE TOUCH-SENSING ECOSYSTEM:

Before touch data can be used for learning, it must first be captured by suitable sensors. Ideally, touch-sensing hardware should approximate a human finger: the sensor must be compact enough, through advanced miniaturization, to fit a robot fingertip, and robust enough to withstand the wear and tear of repeated contact with multiple surfaces. It must also offer high resolution to capture the rich information perceptible by touch, such as surface features and contact forces. Sensors meeting all of these requirements have traditionally been expensive to produce and beyond the reach of most of the academic research community. To address this, a complete open-source design of DIGIT, a compact, high-resolution tactile sensor for robotic in-hand manipulation, was released in 2020.

Because many researchers find it difficult to manufacture their own sensors or to buy expensive professional ones, GelSight, an MIT spin-off partnering with Meta AI, has agreed to manufacture DIGIT commercially, making it simpler for researchers to perform touch-sensing research.
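
For context, here is a minimal sketch of how frames can be streamed from a DIGIT using the open-source digit-interface Python package; the serial number is a placeholder, and the exact API should be verified against the package’s documentation.

```python
# Minimal sketch: streaming tactile images from a DIGIT sensor with the
# open-source digit-interface package (pip install digit-interface).
# The serial number "D12345" is a placeholder for an actual device ID.
from digit_interface import Digit

digit = Digit("D12345")    # identify the device by its serial number
digit.connect()

frame = digit.get_frame()  # one RGB tactile image as a numpy array
print(frame.shape)         # e.g. (320, 240, 3) at DIGIT's native resolution

digit.disconnect()
```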

Along with DIGIT comes ReSkin, an open-source touch-sensing “skin” with a small form factor that can help robots learn high-frequency tactile sensing over larger surfaces. ReSkin was developed by Meta AI researchers in collaboration with Carnegie Mellon University.
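
As a rough illustration, the snippet below sketches how ReSkin’s magnetometer readings might be streamed with the open-source reskin_sensor package; the port name and parameters are assumptions that follow the package’s published examples and will vary with the hardware setup.

```python
# Illustrative sketch based on the open-source reskin_sensor package
# (pip install reskin-sensor). The port name and parameters are assumptions
# that depend on the specific microcontroller and skin configuration.
from reskin_sensor import ReSkinProcess

sensor = ReSkinProcess(
    num_mags=5,           # number of magnetometers on the skin
    port="/dev/ttyACM0",  # serial port of the microcontroller (assumed)
)
sensor.start()

# Grab a short buffer of magnetometer readings from the background process.
samples = sensor.get_data(num_samples=10)
for s in samples:
    print(s.time, s.data)

sensor.join()
```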

Simulators are critical for prototyping, debugging, and benchmarking new advances, since they allow hypotheses to be tested and validated without time-consuming real-world experiments.

To support ML research even in the absence of hardware, the researchers developed and open-sourced TACTO, a high-resolution touch simulator, to enable quicker experimentation. TACTO can render realistic high-resolution touch readings at hundreds of frames per second and can easily be configured to simulate various vision-based tactile sensors, including DIGIT.
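
The sketch below, adapted from the patterns in TACTO’s public examples, shows roughly how a DIGIT-like sensor can be simulated on top of PyBullet; the URDF paths and body setup are assumptions, and the exact API should be checked against the TACTO repository.

```python
# Rough sketch of simulating a DIGIT-like sensor with TACTO on PyBullet,
# following TACTO's public examples. URDF paths are assumptions; consult
# the TACTO repository for working configurations.
import pybulletX as px
import tacto

px.init()  # start the PyBullet physics backend

# Vision-based tactile sensor with a DIGIT-like image resolution.
sensor = tacto.Sensor(width=120, height=160)

# Hypothetical URDFs: a DIGIT body and an object to press against its gel.
digit_body = px.Body(urdf_path="digit.urdf")
obj = px.Body(urdf_path="sphere_small.urdf")

sensor.add_camera(digit_body.id, [-1])  # render from the sensor's gel surface
sensor.add_body(obj)                    # let the object deform the gel

while True:
    color, depth = sensor.render()      # simulated tactile RGB + depth images
    sensor.updateGUI(color, depth)      # visualize the readings
```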

Benchmarks and data sets serve this ecosystem at several levels:

  • At the hardware level, benchmarks and data sets can assess design choices in sensors.
  • At the perceptual level, benchmarks can compare ML models for efficiency across a wide range of touch-sensing use cases.
  • At the robot control level, it is feasible to benchmark the advantages of touch in active control tasks such as in-hand manipulation, both in simulation and in the real world.

TOUCH PROCESSING:

Touch sensors like DIGIT produce rich, high-dimensional touch-sensing data that is difficult to process with conventional analytical approaches. Machine learning models address this challenge: they significantly simplify the design and implementation of systems that convert raw sensor readings into high-level representations.

To increase code reuse and reduce deployment time, the researchers created PyTouch, a library of ML models and functionality for touch sensing. The current version provides basic functionality such as detecting touch, detecting slip, and estimating object pose. The goal is ultimately to integrate PyTouch with real-world sensors and tactile-sensing simulators to enable quick validation of models. Through PyTouch, the researchers strive to enable the broader research community to leverage touch in its applications.
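
As a rough illustration of the intended workflow, the snippet below follows the touch-detection example in PyTouch’s public documentation; the image path is a placeholder, and module names should be verified against the library.

```python
# Minimal sketch of touch detection with PyTouch, following the library's
# published examples. The image path is a placeholder; module names should
# be verified against the PyTouch repository.
import pytouch
from pytouch.handlers import ImageHandler
from pytouch.sensors import DigitSensor
from pytouch.tasks import TouchDetect

# Load a tactile image captured from a DIGIT sensor (placeholder path).
sample = ImageHandler("digit_frame.png")

# Initialize PyTouch for the DIGIT sensor with a pretrained touch detector.
pt = pytouch.PyTouch(DigitSensor, tasks=[TouchDetect])
is_touching, certainty = pt.TouchDetect(sample)

print(f"Touching: {is_touching}, certainty: {certainty}")
```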

The researchers are also working on Sim2Real transfer: training PyTouch models in simulation and deploying them on real sensors, which sharply reduces the time needed to collect real-world data sets. The idea is to explore Sim2Real methods that fine-tune the simulator using real-world data.
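
To make the idea concrete, here is a schematic of one common Sim2Real recipe: pretrain a touch classifier on simulated tactile images, then fine-tune it on a small set of real sensor images. Every name in it is hypothetical, and it is a sketch of the general technique, not the authors’ actual pipeline.

```python
# Schematic Sim2Real sketch (hypothetical, not the authors' pipeline):
# pretrain a touch classifier on simulated tactile images, then fine-tune
# on a small real-world data set.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import resnet18

# Dummy stand-ins for the data sets (random tensors) so the sketch runs
# end to end; in practice these would be TACTO renders and DIGIT captures.
def dummy_loader(n):
    images = torch.randn(n, 3, 160, 120)  # DIGIT-like image shape
    labels = torch.randint(0, 2, (n,))    # touch / no-touch
    return DataLoader(TensorDataset(images, labels), batch_size=16)

sim_loader, real_loader = dummy_loader(256), dummy_loader(32)

model = resnet18(num_classes=2)  # binary touch / no-touch classifier
loss_fn = nn.CrossEntropyLoss()

def train(model, loader, lr, epochs):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()

train(model, sim_loader, lr=1e-3, epochs=10)  # pretrain on simulation
train(model, real_loader, lr=1e-4, epochs=2)  # fine-tune on scarce real data
```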

The road ahead includes:

  • Hardware improvements that enable sensors to detect the temperature of touched objects
  • A clearer picture of the correct ML computational structures for processing touch
  • Standardized hardware and software
  • Widely accepted benchmarks
  • Convincing demonstrations of previously unrealizable tasks made possible by touch sensing

All these improvements could unlock new possibilities in AR/VR and lead to new findings in industrial, medical, and agricultural robotics. The community is working towards a future where every robot is equipped with touch-sensing capabilities.

References:

  • https://ai.facebook.com/blog/teaching-robots-to-perceive-understand-and-interact-through-touch
