Upper-Limb Position-Robust Motion Recognition With Unsupervised Domain Adaptation

Bibliographic Details
Main Authors: Sejin Kim, Wan Kyun Chung
Format: Article
Language: English
Published: IEEE 2024-01-01
Series: IEEE Access
Online Access: https://ieeexplore.ieee.org/document/10458929/
Description
Summary: Upper-limb position is one of the most critical factors that degrade sEMG-based motion recognition accuracy. Therefore, we propose an upper-limb position-robust motion recognition method based on unsupervised domain adaptation. The proposed method learns a feature representation that reduces the difference between the distributions of labeled data acquired at a specific upper-limb position and unlabeled data acquired under other conditions. The proposed method improves classification accuracy by up to 13.50% compared with machine learning-based classifiers, a convolutional neural network (CNN), and a domain adversarial neural network (DANN). In particular, the lowest classification accuracy across all pairs of training and test upper-limb positions is improved by up to 19.96%. The effectiveness of the proposed method is also verified by visualizing the feature representations and comparing learning curves. To assess feasibility and performance in real-world applications, we designed a virtual interface in which a ball is controlled by real-time network estimation and conducted the following experiment: given the current ball position, the subject must reach the goal position as quickly as possible while minimizing the error between the desired and actual trajectories. The proposed method achieves significantly higher reaching speed and tracking accuracy.
ISSN:2169-3536
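
For readers interested in how the distribution-matching objective described in the summary can be set up in practice, the following is a minimal PyTorch sketch. The abstract only states that the method reduces the difference between the distributions of labeled and unlabeled sEMG features; it does not specify the discrepancy measure, network architecture, or hyperparameters, so the maximum mean discrepancy (MMD) penalty, the gaussian_mmd, feature_net, classifier, and training_step names, and all dimensions and weights below are illustrative assumptions rather than the authors' implementation.

import torch
import torch.nn as nn

def gaussian_mmd(x, y, sigma=1.0):
    # Squared maximum mean discrepancy between two feature batches (Gaussian kernel).
    def kernel(a, b):
        d2 = torch.cdist(a, b).pow(2)            # pairwise squared distances
        return torch.exp(-d2 / (2 * sigma ** 2))
    return kernel(x, x).mean() + kernel(y, y).mean() - 2 * kernel(x, y).mean()

# Hypothetical feature extractor and motion classifier for windowed sEMG
# (8 channels x 200 samples per window and 6 motion classes are assumptions).
feature_net = nn.Sequential(nn.Flatten(), nn.Linear(8 * 200, 128), nn.ReLU())
classifier = nn.Linear(128, 6)
optimizer = torch.optim.Adam(
    list(feature_net.parameters()) + list(classifier.parameters()), lr=1e-3)
ce_loss = nn.CrossEntropyLoss()
lambda_da = 0.5  # assumed trade-off between classification and alignment

def training_step(labeled_x, labels, unlabeled_x):
    # Supervised loss on the labeled upper-limb position plus an MMD term that
    # pulls features from the unlabeled positions toward the same distribution.
    f_src = feature_net(labeled_x)
    f_tgt = feature_net(unlabeled_x)
    loss = ce_loss(classifier(f_src), labels) + lambda_da * gaussian_mmd(f_src, f_tgt)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

A DANN-style adversarial domain discriminator, mentioned in the summary only as a baseline, could be substituted for the MMD term; the overall structure of a supervised classification loss plus a distribution-alignment penalty would remain the same.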