Multivariate CNN Model for Human Locomotion Activity Recognition with a Wearable Exoskeleton Robot

This study introduces a novel convolutional neural network (CNN) architecture, encompassing both single- and multi-head designs, developed to identify a user’s locomotion activity while using a wearable lower limb robot. Our research involved 500 healthy adult participants in an activities of daily living (ADL) space, conducted from 1 September to 30 November 2022. We collected prospective data to identify five locomotion activities (level-ground walking, stair ascent/descent, and ramp ascent/descent) across three terrains: flat ground, staircase, and ramp. To evaluate the predictive capabilities of the proposed CNN architectures, we compared their performance with three other models: one CNN and two hybrid models (CNN-LSTM and LSTM-CNN). Experiments were conducted using multivariate signals of various types obtained from electromyograms (EMGs) and the wearable robot. Our results reveal that the deeper CNN architecture significantly surpasses the performance of the three competing models. The proposed model, leveraging encoder data such as hip angles and velocities, along with postural signals such as roll, pitch, and yaw from the wearable lower limb robot, achieved superior performance with an inference speed of 1.14 s. Specifically, the F-measure of the proposed model reached 96.17%, compared to 90.68% for DDLMI, 94.41% for DeepConvLSTM, and 95.57% for LSTM-CNN.
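The abstract describes a multi-head CNN that classifies locomotion activities from multivariate signals (EMG channels, hip-encoder angles/velocities, and roll/pitch/yaw posture). As an illustration only — not the authors' implementation — the multi-head idea can be sketched in plain NumPy: each sensor group feeds its own 1-D convolutional "head" (kernel bank, ReLU, global max-pool), and the per-head features are concatenated before a downstream 5-class classifier. All channel counts, kernel sizes, and names below are hypothetical.

```python
import numpy as np

def conv1d_head(x, kernels):
    """One CNN 'head': valid 1-D cross-correlation over all input
    channels, ReLU, then global max-pooling per kernel.
    x: (channels, timesteps); kernels: (n_kernels, channels, width).
    Returns a feature vector of length n_kernels."""
    n_k, n_c, w = kernels.shape
    T = x.shape[1] - w + 1
    feats = np.empty(n_k)
    for k in range(n_k):
        out = np.zeros(T)
        for c in range(n_c):                      # sum over input channels
            for t in range(T):
                out[t] += np.dot(x[c, t:t + w], kernels[k, c])
        feats[k] = np.maximum(out, 0.0).max()     # ReLU + global max-pool
    return feats

rng = np.random.default_rng(0)

# Hypothetical 2 s window at 100 Hz: 4 EMG channels, 2 hip-encoder
# channels (angle, velocity), 3 posture channels (roll, pitch, yaw).
emg     = rng.standard_normal((4, 200))
encoder = rng.standard_normal((2, 200))
posture = rng.standard_normal((3, 200))

# One independent kernel bank (head) per signal group: 8 kernels of
# width 5, matched to that group's channel count.
heads = [
    (emg,     rng.standard_normal((8, 4, 5))),
    (encoder, rng.standard_normal((8, 2, 5))),
    (posture, rng.standard_normal((8, 3, 5))),
]

# Concatenate the per-head features into one vector that a 5-way
# classifier (the five locomotion activities) would consume.
features = np.concatenate([conv1d_head(x, k) for x, k in heads])
print(features.shape)  # (24,)
```

A single-head variant would instead stack all nine channels into one input and share one kernel bank; the multi-head split lets each sensor modality learn its own filters.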

Bibliographic Details
Main Authors: Chang-Sik Son, Won-Seok Kang
Format: Article
Language: English
Published: MDPI AG, 2023-09-01
Series: Bioengineering
Subjects: human activity recognition; wearable robot; single-head CNN; multi-head CNN; hyperparameter optimization; time series classification
Online Access: https://www.mdpi.com/2306-5354/10/9/1082
ISSN: 2306-5354
DOI: 10.3390/bioengineering10091082
Record ID: doaj.art-2e052d6c04e1472fa2e40f05f7e04421 (Directory of Open Access Journals)
Citation: Bioengineering, vol. 10, no. 9, art. 1082 (2023)
Author Affiliations: Division of Intelligent Robot, Daegu Gyeongbuk Institute of Science & Technology (DGIST), Daegu 42988, Republic of Korea (both authors)