A team of researchers led by Zhiyi Yu of Sun Yat-sen University has developed a new hand gesture recognition algorithm that strikes a balance between complexity, accuracy, and applicability.
Hand gestures are increasingly being adopted for human-computer interaction, and recent advances in camera systems, image analysis, and machine learning have greatly improved optical gesture recognition. Even so, current methods face challenges including high computational complexity, low speed, poor accuracy, and a small number of recognizable gestures.
The team’s new algorithm, detailed in a paper published in the Journal of Electronic Imaging, attempts to overcome these limitations. One of the team’s main goals was to create an algorithm that not only addresses these challenges but can also be easily applied in consumer-level devices.
Adaptability to Different Hand Types
One of the most impressive aspects of the algorithm is its adaptability to different hand types. It first classifies the user’s hand as slim, normal, or broad, using three measurements that capture the relationships among palm width, palm length, and finger length.
Following a successful classification, the hand gesture recognition process compares the input gesture with stored samples of the same hand type.
“Traditional simple algorithms tend to suffer from low recognition rates because they cannot cope with different hand types. By first classifying the input gesture by hand type and then using sample libraries that match this type, we can improve the overall recognition rate with almost negligible resource consumption,” says Yu.
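In code, the hand-type stage might look something like the minimal Python sketch below. The two ratios and the cutoff values are illustrative assumptions; the paper bases its classification on three measurements over palm width, palm length, and finger length, but the exact formulas and thresholds are not reproduced in this article.

```python
def classify_hand_type(palm_width, palm_length, finger_length,
                       slim_cutoff=0.80, broad_cutoff=0.95):
    """Classify a hand as 'slim', 'normal', or 'broad'.

    The ratios and cutoff values below are illustrative assumptions,
    not the measurements published in the paper.
    """
    aspect = palm_width / palm_length    # broader palms give a larger ratio
    reach = finger_length / palm_length  # longer fingers give a larger ratio
    score = aspect / reach               # high score suggests broad, low suggests slim
    if score < slim_cutoff:
        return "slim"
    if score > broad_cutoff:
        return "broad"
    return "normal"

# The recognizer would then compare the input gesture only against the
# stored sample library for the returned hand type, e.g. samples["slim"].
hand_type = classify_hand_type(palm_width=82, palm_length=98, finger_length=74)
```

Because this classification is a handful of arithmetic comparisons, it adds essentially no cost while letting the matching stage draw on a better-fitting sample library.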
The Prerecognition Step
The team’s method also relies on a “shortcut feature” to perform a prerecognition step. The recognition algorithm identifies an input gesture out of nine possible gestures, but comparing all the features of the input gesture against the stored samples for every possible gesture would be extremely time consuming.
To overcome this, the algorithm’s prerecognition step calculates a ratio of the area of the hand to quickly narrow the nine possibilities down to the three most likely gestures. The final gesture is then decided by a more complex, high-precision feature extraction based on “Hu invariant moments.”
“The gesture prerecognition step not only reduces the number of calculations and hardware resources required but also improves recognition speed without compromising accuracy,” Yu says.
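A minimal sketch of this two-stage pipeline in Python with OpenCV follows. The Otsu-threshold segmentation, the solidity-based shortlist table, and the Euclidean distance over log-scaled Hu moments are illustrative assumptions standing in for the paper’s exact features and thresholds.

```python
import cv2
import numpy as np

def hand_contour(frame):
    """Return the largest contour in a binarized frame (assumed to be the hand)."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def prerecognize(contour, candidate_table):
    """Stage 1: a cheap area ratio shortlists 3 of the 9 gestures.

    candidate_table maps coarse ratio bins to the three most likely
    gesture ids for that bin (an assumed lookup; the paper's exact
    binning is not given here).
    """
    hull = cv2.convexHull(contour)
    ratio = cv2.contourArea(contour) / cv2.contourArea(hull)  # "solidity"
    index = min(int(ratio * len(candidate_table)), len(candidate_table) - 1)
    return candidate_table[index]

def recognize(contour, candidates, templates):
    """Stage 2: pick the candidate whose stored template has the closest
    Hu-moment signature (log-scaled for numerical stability)."""
    hu = cv2.HuMoments(cv2.moments(contour)).flatten()
    hu = -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
    distances = {g: np.linalg.norm(hu - templates[g]) for g in candidates}
    return min(distances, key=distances.get)
```

Only the three shortlisted templates ever reach the more expensive Hu-moment comparison, which is where the savings in calculations and hardware resources come from.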
The algorithm was tested on a commercial PC processor and on an FPGA platform with a USB camera. The team had 40 volunteers perform the nine hand gestures multiple times, and another 40 volunteers helped determine the accuracy of the system.
The system demonstrated that it could recognize hand gestures in real time with an accuracy rate of over 93%. This was the case even when the input gesture images were rotated, translated, or scaled.
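That robustness is consistent with the use of Hu invariant moments, which are designed to be unchanged under translation, rotation, and scaling. The quick check below (an illustrative demonstration, not from the paper) shows that the log-scaled Hu signature of a shape barely moves when the image is rotated and rescaled:

```python
import cv2
import numpy as np

def hu_signature(img):
    """Log-scaled Hu moments of a binary image."""
    hu = cv2.HuMoments(cv2.moments(img, binaryImage=True)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

canvas = np.zeros((200, 200), dtype=np.uint8)
cv2.ellipse(canvas, (100, 100), (60, 25), 0, 0, 360, 255, -1)  # a stand-in shape

matrix = cv2.getRotationMatrix2D((100, 100), 37, 0.8)  # rotate 37 degrees, scale 0.8x
warped = cv2.warpAffine(canvas, matrix, (200, 200))

print(hu_signature(canvas))
print(hu_signature(warped))  # nearly identical despite rotation and scaling
```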
The researchers say they will now focus on improving the algorithm’s performance under different lighting conditions and on increasing the number of recognizable gestures.