Summary: | Fields that currently employ gestures in the execution of procedures or actions include medicine (surgical procedures, recovery, training) and engineering (IT, mechanics, and robotics, e.g., controlling a robotic arm). Research has been conducted on surgeons to observe their main actions during a procedure. In this domain it is vital to introduce gestures into the execution of specific procedures in order to ease the surgeons' workload, and certain of these actions lend themselves to being replaced by gestures. One example of interaction between a surgeon and a computer is the selection of a region in an image. The most commonly used device for capturing and processing gestures is the Leap Motion sensor. With it, dynamic gestures can be captured and then used to control interfaces or to rehabilitate hand mobility. Among the gestures most commonly used in hand mobility rehabilitation are pronation and supination, hand and finger flexion and extension, and hand rotation. Capturing the gestures defined for this device accurately and precisely is a challenge. Gestures can be processed either through mathematical formulas, such as the distances between the fingers and the palm center, hand rotation angles, finger surfaces, and finger directions, or by classifying them with neural networks. Among the classifiers most commonly used for dynamic gestures are the K-Nearest Neighbors classifier, Linear Discriminant Analysis, and the Decision Tree classifier. This paper emphasizes the importance of using gestures in the medical field and of capturing them with high precision using the Leap Motion device.
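To make the summary concrete, the sketch below (not taken from the paper) shows how the fingertip-to-palm-center distances mentioned above could serve as a simple feature vector for the listed classifiers. The landmark arrays, labels, and parameters are hypothetical placeholders for real Leap Motion frames.

```python
# Minimal sketch, assuming pre-recorded hand landmarks; the data here are
# random placeholders, not Leap Motion output from the paper.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical data: 200 frames, 5 fingertip positions and one palm center each.
rng = np.random.default_rng(0)
fingertips = rng.normal(size=(200, 5, 3))    # (frames, fingers, xyz) in mm
palm_centers = rng.normal(size=(200, 1, 3))  # (frames, 1, xyz) in mm
labels = rng.integers(0, 3, size=200)        # e.g. flexion / extension / rotation

# Feature vector per frame: Euclidean distance from each fingertip to the palm center.
X = np.linalg.norm(fingertips - palm_centers, axis=2)
y = labels

# Compare the classifiers named in the summary on these distance features.
for clf in (KNeighborsClassifier(n_neighbors=5),
            LinearDiscriminantAnalysis(),
            DecisionTreeClassifier(max_depth=5)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, scores.mean())
```

In practice the random arrays would be replaced by fingertip and palm positions read from Leap Motion frames, and further features (rotation angles, finger directions) could be appended to the same feature vector.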