Researchers from a University in Yokohama, Japan, Propose VirSen1.0: A Virtual Environment for Streamlining the Development of Sensor-Based Human Gesture Recognition Systems

Gesture recognition technology faces significant challenges in sensor configuration and placement, data interpretation, and machine learning accuracy. Efficiently setting up sensors to capture nuanced movements, reliably interpreting the resulting data, and ensuring that the machine learning algorithms accurately recognize the intended gestures remain persistent problems. These issues not only hinder optimal performance but also limit the broader adoption of gesture-based systems in various applications.

A team of researchers from a university in Yokohama, Japan, has unveiled a new approach to computerized human gesture recognition. The work describes the development of a user interface (UI) called VirSen 1.0, which allows users to interactively arrange virtual optical sensors in a virtual space to design a gesture estimation system. It lets users experiment with sensor placements and evaluate their impact on gesture recognition without the need for physical sensors.

The training data is collected by having an avatar perform the desired gesture. The researchers discuss related work on simulators for sensor design, highlighting the uniqueness of their approach in combining simulation, data acquisition, and model creation within a single software tool. A support vector machine (SVM) classifier with a radial basis function (RBF) kernel is used because collecting a large amount of training data is impractical. The study highlights the importance of the permutation feature importance (PFI) contribution indicator in identifying sensor placements that yield high recognition rates. PFI measures how much each feature affects the model's predictions by randomly shuffling that feature's values and observing the resulting drop in accuracy, providing insights that help optimize sensor placement during the trial-and-error process.
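To make the modeling step concrete, below is a minimal Python sketch using scikit-learn: it trains an RBF-kernel SVM on placeholder sensor features and then computes permutation feature importance. The dataset, sensor count, and hyperparameters are illustrative assumptions, not values taken from the paper.

```python
# Minimal sketch: RBF-kernel SVM + permutation feature importance.
# Assumes each row of X holds the readings of all virtual sensors for one
# gesture sample and y holds the gesture labels. Values are placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
n_samples, n_sensors = 120, 8                   # small dataset, illustrative only
X = rng.normal(size=(n_samples, n_sensors))     # placeholder sensor features
y = rng.integers(0, 6, size=n_samples)          # six gesture classes

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0, stratify=y)

clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_train, y_train)
print("accuracy:", clf.score(X_test, y_test))

# Permutation feature importance: shuffle one sensor's column at a time and
# measure the drop in accuracy; larger drops mean that sensor matters more.
result = permutation_importance(clf, X_test, y_test, n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"sensor {i}: importance = {imp:.3f}")
```

In the same spirit as the paper's PFI contribution indicator, sensors whose shuffled values barely change the accuracy contribute little and are candidates for repositioning or removal.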

The optical sensor in this research comprises an infrared LED and a phototransistor. Data acquisition begins when the sensor values change by more than a specific threshold compared with the previous frame. Human gestures are recorded using Xsens, an inertial-sensor-based motion-capture system. Six 3D gestures were captured, including squatting, jumping, leaning, and raising the hands. The implementation includes a visual representation of the simulator's interface, allowing users to place objects, gather data, visualize sensor values, and evaluate accuracy with the PFI contribution indicator.
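As an illustration of the threshold-triggered acquisition described above, here is a small, hypothetical Python sketch; the threshold value, frame format, and window length are assumptions for demonstration only.

```python
# Hypothetical sketch of threshold-triggered recording: start capturing once
# any sensor changes by more than a threshold relative to the previous frame.
import numpy as np

THRESHOLD = 0.15  # assumed per-frame change that triggers acquisition

def acquire_gesture(frames, threshold=THRESHOLD, max_len=100):
    """Record frames starting at the first large frame-to-frame change."""
    recording, segment, prev = False, [], None
    for frame in frames:
        frame = np.asarray(frame, dtype=float)
        if prev is not None and not recording:
            if np.max(np.abs(frame - prev)) > threshold:
                recording = True           # gesture onset detected
        if recording:
            segment.append(frame)
            if len(segment) >= max_len:    # stop after a fixed window
                break
        prev = frame
    return np.array(segment)
```

A real implementation would read frames from the simulator's sensor stream rather than a Python iterable, but the trigger logic is the same.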

The research team plans to improve the simulator, including adding functionality to review past placements and results and to suggest sensor placements based on the PFI contribution indicator. In future work, the authors plan to address certain limitations, including the influence of clothing on recognition accuracy, the lack of sensor noise and error modeling, processing speed, and restrictions on recognition targets.


Check out the Paper and Reference Article. All credit for this research goes to the researchers on this project.



Astha Kumari is a consulting intern at MarktechPost. She is currently pursuing a dual degree in chemical engineering at the Indian Institute of Technology (IIT), Kharagpur. She is a machine learning and artificial intelligence enthusiast and is keen on exploring their real-life applications in various fields.



