Meet KITE: An AI Framework for Semantic Manipulation Using Keypoints as a Representation for Visual Grounding and Precise Action Inference

As Artificial Intelligence advances, AI technology is increasingly being combined with robotics. From computer vision and natural language processing to edge computing, AI is being integrated into robotics to develop meaningful and effective solutions. AI robots are machines that act in the real world, so it is natural to consider language as a means of communication between people and robots. However, two main issues prevent modern robots from handling free-form language inputs efficiently. The first challenge is enabling a robot to reason about what it needs to manipulate based on the instruction provided. The second arises in pick-and-place tasks, where careful discernment is needed, such as picking up a stuffed animal by its ears rather than its legs, or a soap bottle by its dispenser rather than its sides.

To perform semantic manipulation, robots must extract scene and object semantics from input instructions and plan accurate low-level actions accordingly. To overcome these challenges, researchers from Stanford University have introduced KITE (Keypoints + Instructions to Execution), a two-step framework for semantic manipulation. KITE accounts for both scene semantics and object semantics: scene semantics involves discriminating between different objects in a visual scene, while object semantics precisely localizes specific parts within an object instance.

KITE’s first phase grounds an input instruction in a visual scene using 2D image keypoints. This step provides a highly precise, object-centric bias for subsequent action inference: by mapping the command to keypoints in the scene, the robot develops an exact understanding of the relevant objects and their pertinent parts. In the second phase, KITE executes a learned keypoint-conditioned skill based on the RGB-D scene observation. The robot uses these parameterized skills to carry out the given instruction. Together, keypoints and parameterized skills enable fine-grained manipulation and generalization to variations in scenes and objects.
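To make the two phases concrete, here is a minimal, runnable sketch in Python. Everything in it is a hypothetical stand-in: the dummy grounding and skill classes and the pinhole camera intrinsics are placeholders for illustration, not KITE's actual learned networks, which are described further below.

```python
import numpy as np

# Illustrative sketch of KITE's two-phase pipeline. All names below are
# hypothetical stand-ins; the real system uses learned networks in place
# of these dummy models.

def deproject(u, v, depth, fx=600.0, fy=600.0, cx=320.0, cy=240.0):
    """Lift a pixel keypoint (u, v) to a 3D point with a pinhole camera model."""
    z = depth[v, u]
    return np.array([(u - cx) * z / fx, (v - cy) * z / fy, z])

class DummyGrounding:
    """Phase 1 stand-in: RGB + instruction -> saliency heatmap -> 2D keypoint."""
    def predict_keypoint(self, rgb, instruction):
        h, w = rgb.shape[:2]
        heatmap = np.random.rand(h, w)  # the real model predicts a learned saliency map
        v, u = np.unravel_index(np.argmax(heatmap), heatmap.shape)
        return u, v

class DummySkill:
    """Phase 2 stand-in: keypoint-conditioned skill -> 6-DoF waypoint(s)."""
    def infer_waypoints(self, keypoint_xyz):
        # The real skill network consumes a keypoint-annotated point cloud;
        # here we simply return one pose (xyz + roll/pitch/yaw) at the keypoint.
        return [np.concatenate([keypoint_xyz, np.zeros(3)])]

rgb = np.zeros((480, 640, 3), dtype=np.uint8)
depth = np.full((480, 640), 0.8)  # depth image in meters
u, v = DummyGrounding().predict_keypoint(rgb, "grasp the mug by its handle")
waypoints = DummySkill().infer_waypoints(deproject(u, v, depth))
print(waypoints[0])  # first 6-DoF waypoint for a low-level controller to follow
```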


For evaluation, the team assessed KITE’s performance in three real-world environments: high-precision coffee-making, semantic grasping, and long-horizon 6-DoF tabletop manipulation. KITE achieved a 71% success rate on coffee-making, 70% on semantic grasping, and 75% on instruction-following in the tabletop manipulation scenario. With its keypoint-based grounding, KITE outperformed frameworks that instead rely on pre-trained visual language models for grounding, and it also performed better than frameworks that favor end-to-end visuomotor control over the use of skills.

KITE accomplished these results despite training on the same number of demonstrations or fewer, demonstrating both its effectiveness and its efficiency. To map an image and a language phrase to a saliency heatmap and produce a keypoint, KITE employs a CLIPort-style technique. To output skill waypoints, the skill architecture adapts PointNet++ to accept a multi-view point cloud annotated with a keypoint. 2D keypoints let KITE attend precisely to visual features, while 3D point clouds provide the 6-DoF context needed for planning.
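The article does not spell out how the keypoint annotates the point cloud, so the nearest-neighbor indicator channel below is an assumption; it merely illustrates one simple way a deprojected 2D keypoint could be attached to a multi-view point cloud before a PointNet++-style skill network processes it.

```python
import numpy as np

# Hedged sketch: append a binary keypoint-indicator channel to a point cloud.
# The nearest-neighbor annotation scheme is an assumption, not a confirmed
# detail of KITE's skill architecture.

def annotate_point_cloud(points: np.ndarray, keypoint_xyz: np.ndarray) -> np.ndarray:
    """points: (N, 3) multi-view cloud; keypoint_xyz: (3,) deprojected keypoint.
    Returns (N, 4): xyz plus a one-hot channel flagging the point nearest the keypoint."""
    dists = np.linalg.norm(points - keypoint_xyz, axis=1)
    flag = np.zeros((len(points), 1))
    flag[np.argmin(dists)] = 1.0  # mark the single closest point
    return np.hstack([points, flag])

cloud = np.random.rand(2048, 3)                      # placeholder point cloud
annotated = annotate_point_cloud(cloud, np.array([0.5, 0.5, 0.5]))
print(annotated.shape)                               # (2048, 4) -> skill-network input
```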

In conclusion, the KITE framework presents a promising solution to the longstanding challenge of enabling robots to interpret and follow natural language commands in manipulation. By harnessing keypoints and instruction grounding, it achieves fine-grained semantic manipulation with high precision and generalization.


Check out the Paper and Project. Don’t forget to join our 25k+ ML SubReddit, Discord Channel, and Email Newsletter, where we share the latest AI research news, cool AI projects, and more. If you have any questions regarding the above article or if we missed anything, feel free to email us at Asif@marktechpost.com



Tanya Malhotra is a final-year undergraduate at the University of Petroleum & Energy Studies, Dehradun, pursuing a BTech in Computer Science Engineering with a specialization in Artificial Intelligence and Machine Learning.
She is a Data Science enthusiast with strong analytical and critical-thinking skills, along with an ardent interest in acquiring new skills, leading groups, and managing work in an organized manner.



