Hand gesture system enhances human-computer interaction
Human-computer interaction is demonstrated in devices such as touchscreens, computer mice, keyboards, and remote controls. Researchers have long sought to make such interaction with machines more natural and interpersonal. Voice-assisted technology is one approach, and now a team in China is delving into physical movement as another.
Researchers at Sun Yat-sen University (Guangzhou, China) have developed a hand-gesture recognition algorithm that could be integrated into “consumer-level devices” to provide human-computer interaction in a “more natural and intuitive noncontact manner.” The team notes that this finding addresses issues of complexity, accuracy, and applicability, all of which have limited existing interaction methods.
“Traditional simple algorithms tend to suffer from low recognition rates because they cannot cope with different hand types,” says lead researcher Zhiyi Yu, an associate professor at Sun Yat-sen. “By first classifying the input gesture by hand type and then using sample libraries that match this type, we can improve the overall recognition rate with almost negligible resource consumption.”
According to the study, published in the Journal of Electronic Imaging, “hand gestures are an important part of human language, and hence, the development of hand gesture recognition affects the nature and flexibility of human-computer interaction.”
The new hand-adaptive algorithm, which is “trained” using self-collected data, can be adapted to different hand types, unlike existing attempts that can identify only a small number of gestures. The algorithm works by classifying the user’s hand type (“normal,” “slim,” or “broad”) based on the length and width of the palm and fingers. It does not directly recognize the input gesture images, the study notes, but first classifies them by hand type “and then uses different sample libraries for recognition according to different hand types.” This improves the overall recognition rate with almost negligible resource consumption.
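The classify-then-match idea can be sketched as follows. This is a hypothetical illustration, not the paper’s implementation: the ratio thresholds, measurement inputs, and library names are invented for clarity; the study’s actual classifier uses its own features learned from self-collected data.

```python
# Hypothetical sketch: classify a hand as "slim", "normal", or "broad"
# from palm/finger measurements, then pick the matching sample library.
# Thresholds and library names are illustrative, not from the paper.

def classify_hand_type(palm_width, palm_length, finger_length):
    """Classify hand type from a simple width-to-length ratio."""
    ratio = palm_width / (palm_length + finger_length)
    if ratio < 0.30:
        return "slim"
    if ratio > 0.40:
        return "broad"
    return "normal"

# One gesture sample library per hand type (placeholder names).
SAMPLE_LIBRARIES = {
    "slim": "slim_gesture_samples",
    "normal": "normal_gesture_samples",
    "broad": "broad_gesture_samples",
}

def select_library(palm_width, palm_length, finger_length):
    """Route recognition to the library that matches the hand type."""
    hand_type = classify_hand_type(palm_width, palm_length, finger_length)
    return SAMPLE_LIBRARIES[hand_type]
```

The point of this structure is that the per-type libraries stay small, so matching against only one of them costs little while still covering varied hand shapes.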
According to the researchers, this method allows a pre-recognition (shortcut) step that calculates a ratio of the area of the hand to select the three most likely gestures of a possible nine (see figure).
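The shortcut step might look like the sketch below, which keeps the three candidates whose stored area ratio is closest to the measured one. The specific ratio definition (hand pixels over bounding-box pixels), the gesture names, and the reference values are all assumptions made for illustration; the paper defines its own ratio.

```python
# Hypothetical pre-recognition shortcut: compare the hand's area ratio
# (hand pixels / bounding-box pixels) against stored reference ratios
# and keep the three closest of nine candidate gestures.
# Gesture names and reference values are invented for illustration.

REFERENCE_RATIOS = {
    "fist": 0.85, "palm": 0.55, "point": 0.35,
    "peace": 0.40, "ok": 0.60, "thumbs_up": 0.50,
    "three": 0.45, "four": 0.48, "five": 0.52,
}

def area_ratio(hand_pixels, bbox_area):
    """Fraction of the bounding box covered by the hand."""
    return hand_pixels / bbox_area

def shortlist_gestures(ratio, references=REFERENCE_RATIOS, k=3):
    """Return the k gestures whose reference ratio is closest."""
    ranked = sorted(references, key=lambda g: abs(references[g] - ratio))
    return ranked[:k]
```

Only the shortlisted gestures then proceed to the expensive feature-extraction stage, which is where the claimed savings in calculations and hardware resources come from.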
“The gesture pre-recognition step not only reduces the number of calculations and hardware resources required, but also improves recognition speed without compromising accuracy,” Yu says, noting that after this narrowing, the algorithm determines the final gesture “using a much more complex and high-precision feature extraction based on Hu invariant moments.” Hu invariant moments are a set of seven numbers, calculated from an image’s central moments, that remain unchanged under image transformations such as translation, scaling, and rotation.
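For reference, the seven Hu moments can be computed directly from their textbook definitions (raw moments, then central moments for translation invariance, then normalized central moments for scale invariance, then Hu’s seven combinations). The minimal NumPy sketch below follows those standard formulas; a production system would more likely call an optimized routine such as OpenCV’s HuMoments, and nothing here is specific to the paper’s pipeline.

```python
import numpy as np

def hu_moments(img):
    """Seven Hu invariant moments of a 2-D intensity image.

    Minimal sketch from the standard definitions:
    raw moments -> central moments -> normalized central moments
    -> Hu's seven invariant combinations.
    """
    img = np.asarray(img, dtype=float)
    y, x = np.mgrid[:img.shape[0], :img.shape[1]]

    def m(p, q):   # raw moment
        return np.sum((x ** p) * (y ** q) * img)

    m00 = m(0, 0)
    cx, cy = m(1, 0) / m00, m(0, 1) / m00

    def mu(p, q):  # central moment (translation invariant)
        return np.sum(((x - cx) ** p) * ((y - cy) ** q) * img)

    def nu(p, q):  # normalized central moment (scale invariant)
        return mu(p, q) / (m00 ** (1 + (p + q) / 2))

    n20, n02, n11 = nu(2, 0), nu(0, 2), nu(1, 1)
    n30, n03, n21, n12 = nu(3, 0), nu(0, 3), nu(2, 1), nu(1, 2)

    h1 = n20 + n02
    h2 = (n20 - n02) ** 2 + 4 * n11 ** 2
    h3 = (n30 - 3 * n12) ** 2 + (3 * n21 - n03) ** 2
    h4 = (n30 + n12) ** 2 + (n21 + n03) ** 2
    h5 = ((n30 - 3 * n12) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          + (3 * n21 - n03) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    h6 = ((n20 - n02) * ((n30 + n12) ** 2 - (n21 + n03) ** 2)
          + 4 * n11 * (n30 + n12) * (n21 + n03))
    h7 = ((3 * n21 - n03) * (n30 + n12)
          * ((n30 + n12) ** 2 - 3 * (n21 + n03) ** 2)
          - (n30 - 3 * n12) * (n21 + n03)
          * (3 * (n30 + n12) ** 2 - (n21 + n03) ** 2))
    return np.array([h1, h2, h3, h4, h5, h6, h7])
```

Because the moments are computed about the image centroid and normalized by area, translating the same shape within the frame leaves all seven values unchanged, which is what makes them useful as robust gesture features.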
The researchers note in their study that “the results of [these] experiments demonstrate that the proposed algorithm could accurately recognize gestures in real time and exhibits good adaptability to different hand types.” The algorithm achieves a recognition rate of more than 94%, which remains above 93% “when hand gesture images are rotated, translated, or scaled.”
The researchers’ next steps will be “improving the performance of the algorithm under poor lighting conditions and increasing the number of possible gestures.”
Justine Murphy | Multimedia Director, Digital Infrastructure
Justine Murphy is the multimedia director for Endeavor Business Media's Digital Infrastructure Group. She is a multiple award-winning writer and editor with more than 20 years of experience in newspaper publishing as well as public relations, marketing, and communications. For nearly 10 years, she has covered all facets of the optics and photonics industry as an editor, writer, web news anchor, and podcast host for an internationally reaching magazine publishing company. Her work has earned accolades from the New England Press Association as well as the SIIA/Jesse H. Neal Awards. She received a B.A. from the Massachusetts College of Liberal Arts.