A deep learning method to predict ankle joint moment during walking at different speeds with ultrasound imaging: A framework for assistive devices control
Robotic assistive or rehabilitative devices are promising aids for people with neurological disorders as they help regain normative functions for both upper and lower limbs. However, it remains challenging to accurately estimate human intent or residual efforts non-invasively when using these robotic devices.
Main Authors: | Qiang Zhang, Natalie Fragnito, Xuefeng Bao, Nitin Sharma |
Format: | Article |
Language: | English |
Published: | Cambridge University Press, 2022-01-01 |
Series: | Wearable Technologies |
Subjects: | convolutional neural network; deep learning; human intent; human walking; plantarflexion; ultrasound imaging |
Online Access: | https://www.cambridge.org/core/product/identifier/S2631717622000184/type/journal_article |
author | Qiang Zhang; Natalie Fragnito; Xuefeng Bao; Nitin Sharma
collection | DOAJ |
description | Robotic assistive or rehabilitative devices are promising aids for people with neurological disorders, as they help regain normative functions for both upper and lower limbs. However, it remains challenging to accurately estimate human intent or residual effort non-invasively when using these robotic devices. In this article, we propose a deep learning approach that uses brightness-mode (B-mode) ultrasound (US) imaging of skeletal muscles to predict the net ankle joint plantarflexion moment during walking. The designed structure of the customized deep convolutional neural networks (CNNs) guarantees the convergence and robustness of the deep learning approach. We investigated the influence of the US imaging region of interest (ROI) on the net plantarflexion moment prediction performance. We also compared the CNN-based moment prediction performance using B-mode US imaging and surface electromyography (sEMG) spectrum imaging with the same ROI size. Experimental results from eight young participants walking on a treadmill at multiple speeds verified the improved accuracy of the proposed US imaging + deep learning approach for net joint moment prediction. With the same CNN structure, compared to prediction using sEMG spectrum imaging, US imaging significantly reduced the normalized prediction root mean square error by 37.55% (p < .001) and increased the prediction coefficient of determination by 20.13% (p < .001). The findings show that the US imaging + deep learning approach personalizes the assessment of human joint voluntary effort, which can be incorporated into assistive or rehabilitative devices to improve clinical performance under an assist-as-needed control strategy.
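The two accuracy metrics reported in the abstract, normalized root mean square error (NRMSE) and the coefficient of determination, can be sketched in plain Python. This is a minimal illustration of how such metrics are typically computed; the sample moment values below are illustrative placeholders, not data from the study, and range-based normalization is one common convention (the paper may use a different normalizer).

```python
import math

def nrmse(y_true, y_pred):
    # Root mean square error, normalized here by the range of the measured moment
    rmse = math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))
    return rmse / (max(y_true) - min(y_true))

def r_squared(y_true, y_pred):
    # Coefficient of determination: 1 - SS_res / SS_tot
    mean_t = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_t) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

# Hypothetical plantarflexion moments (N·m), illustrative only
y_true = [10.0, 20.0, 30.0, 40.0]
y_pred = [11.0, 19.0, 31.0, 39.0]
print(round(nrmse(y_true, y_pred), 4))      # 0.0333
print(round(r_squared(y_true, y_pred), 4))  # 0.992
```

A lower NRMSE and a coefficient of determination closer to 1 both indicate a closer match between predicted and measured joint moments, which is the sense in which the abstract's 37.55% NRMSE reduction and 20.13% increase in the coefficient of determination represent an improvement.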
id | doaj.art-30e994e31da94f7c884d223a1033f366 |
institution | Directory Open Access Journal |
issn | 2631-7176 |
doi | 10.1017/wtc.2022.18
volume | 3
author_ids | Qiang Zhang (ORCID: 0000-0002-8806-9672); Natalie Fragnito; Xuefeng Bao (ORCID: 0000-0003-2453-1474); Nitin Sharma
affiliations | Qiang Zhang, Natalie Fragnito, and Nitin Sharma: Joint Department of Biomedical Engineering, North Carolina State University, Raleigh, NC, USA, and The University of North Carolina at Chapel Hill, Chapel Hill, NC, USA. Xuefeng Bao: Biomedical Engineering Department, University of Wisconsin-Milwaukee, Milwaukee, WI, USA.
topic | convolutional neural network deep learning human intent human walking plantarflexion ultrasound imaging |