Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function
Dementia is an increasing global health challenge. Motoric Cognitive Risk Syndrome (MCR) is a predementia stage that can be used to predict the future occurrence of dementia. Traditionally, gait speed and subjective memory complaints are used to identify older adults with MCR. Our previous studies indicated that dual-task upper-extremity motor performance (DTUEMP) quantified by a single wrist-worn sensor was correlated with both motor and cognitive function.
Main Authors: | Zilong Liu, Changhong Wang, Guanzheng Liu, Bijan Najafi |
---|---|
Format: | Article |
Language: | English |
Published: | IEEE, 2023-01-01 |
Series: | IEEE Transactions on Neural Systems and Rehabilitation Engineering |
Subjects: | Dementia; motoric cognitive risk syndrome; telehealth; tele-medicine; deep residual neural network; mobile health |
Online Access: | https://ieeexplore.ieee.org/document/9978749/ |
_version_ | 1797805121167425536 |
author | Zilong Liu; Changhong Wang; Guanzheng Liu; Bijan Najafi
author_facet | Zilong Liu; Changhong Wang; Guanzheng Liu; Bijan Najafi
author_sort | Zilong Liu |
collection | DOAJ |
description | Dementia is an increasing global health challenge. Motoric Cognitive Risk Syndrome (MCR) is a predementia stage that can be used to predict the future occurrence of dementia. Traditionally, gait speed and subjective memory complaints are used to identify older adults with MCR. Our previous studies indicated that dual-task upper-extremity motor performance (DTUEMP) quantified by a single wrist-worn sensor was correlated with both motor and cognitive function. The DTUEMP therefore has potential for use in the diagnosis of MCR. Instead of using inertial sensors to capture kinematic data of upper-extremity movements, here we propose a deep neural network-based video processing model to obtain DTUEMP metrics from a 20-second repetitive elbow flexion-extension test under a dual-task condition. In detail, we used a deep residual neural network to obtain the joint coordinates of the elbow and wrist in each frame, and then used an optical flow method to correct the joint coordinates generated by the neural network. The coordinate sets of all frames in a video recording were used to generate an angle sequence representing the rotation angle of the line between the wrist and elbow. The DTUEMP metrics (the mean and SD of the flexion and extension phase durations) were then derived from the angle sequences. Multi-task learning (MTL) was used to assess cognitive and motor function, represented by MMSE and TUG scores, based on the DTUEMP metrics, with a single-task learning (STL) linear model as a benchmark. The results showed good agreement (r ≥ 0.80 and ICC ≥ 0.58) between the DTUEMP metrics derived from our proposed model and those from a clinically validated sensor processing model. We also found statistically significant correlations (p < 0.05) between some video-derived DTUEMP metrics (i.e., the mean of flexion time and extension time) and a clinical cognitive scale (Mini-Mental State Examination, MMSE). Additionally, some video-derived DTUEMP metrics (i.e., the mean and standard deviation of flexion time and extension time) were associated with scores on the timed-up-and-go (TUG) test, a gold standard for measuring functional mobility. The mean absolute percentage error (MAPE) of MTL was lower than that of STL (MMSE: 18.63% vs. 23.18%; TUG: 17.88% vs. 22.53%). Experiments with different lighting conditions and shot angles verified the robustness of our proposed video processing model for extracting DTUEMP metrics in potentially varied home environments (r ≥ 0.58 and ICC ≥ 0.71). This study shows the possibility of replacing the sensor processing model with a video processing model for analyzing the DTUEMP, and a promising path toward adjuvant diagnosis of MCR via a mobile platform. |
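The pipeline described in the abstract (per-frame elbow/wrist coordinates, an angle sequence for the wrist-elbow line, then mean/SD of flexion and extension phase durations, evaluated with MAPE) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes per-frame coordinates from some pose estimator are already available, and the function names, the turning-point segmentation rule, and the 30 fps default are our own assumptions.

```python
import numpy as np

def angle_sequence(wrist_xy, elbow_xy):
    """Rotation angle (degrees) of the elbow-to-wrist line in each frame.

    wrist_xy, elbow_xy: (n_frames, 2) arrays of image coordinates from any
    pose estimator (the paper uses a deep residual network refined by
    optical flow; here we only assume the coordinates exist).
    """
    d = np.asarray(wrist_xy, float) - np.asarray(elbow_xy, float)
    return np.degrees(np.arctan2(d[:, 1], d[:, 0]))

def phase_durations(angles, fps=30.0):
    """Segment the angle sequence at its turning points (where the
    frame-to-frame angle difference changes sign) and return the
    durations of the alternating phases in seconds."""
    sign = np.sign(np.diff(angles))
    # indices where the direction of rotation reverses
    turns = np.flatnonzero(sign[1:] * sign[:-1] < 0) + 1
    bounds = np.concatenate(([0], turns, [len(angles) - 1]))
    durations = np.diff(bounds) / fps
    # alternate phases: even entries one direction, odd entries the other
    return durations[0::2], durations[1::2]

def dtuemp_metrics(flex_t, ext_t):
    """Mean and SD of flexion and extension phase durations, i.e. the
    DTUEMP metrics named in the abstract."""
    return {
        "flexion_mean": float(np.mean(flex_t)),
        "flexion_sd": float(np.std(flex_t, ddof=1)),
        "extension_mean": float(np.mean(ext_t)),
        "extension_sd": float(np.std(ext_t, ddof=1)),
    }

def mape(y_true, y_pred):
    """Mean absolute percentage error, the criterion the abstract uses to
    compare the MTL and STL models."""
    y_true = np.asarray(y_true, float)
    return 100.0 * np.mean(np.abs((y_true - np.asarray(y_pred, float)) / y_true))
```

For example, a 20-second recording at 30 fps of a roughly sinusoidal elbow movement with a 2-second cycle yields flexion and extension phase means near 1 second each; feeding the resulting metrics into any regressor and scoring it with `mape` reproduces the kind of MMSE/TUG error comparison reported above.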
first_indexed | 2024-03-13T05:46:15Z |
format | Article |
id | doaj.art-e30c607ee3d44b4a8d45e40dd54fa89a |
institution | Directory Open Access Journal |
issn | 1558-0210 |
language | English |
last_indexed | 2024-03-13T05:46:15Z |
publishDate | 2023-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Transactions on Neural Systems and Rehabilitation Engineering |
spelling | doaj.art-e30c607ee3d44b4a8d45e40dd54fa89a | indexed 2023-06-13T20:10:11Z | eng | IEEE | IEEE Transactions on Neural Systems and Rehabilitation Engineering | ISSN 1558-0210 | 2023-01-01 | vol. 31, pp. 574-580 | DOI 10.1109/TNSRE.2022.3228073 | IEEE article no. 9978749 | Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function | Zilong Liu (ORCID 0000-0002-4827-3484), Changhong Wang (ORCID 0000-0003-2821-9357), Guanzheng Liu (ORCID 0000-0002-1208-7479), all School of Biomedical Engineering, Sun Yat-Sen University, Shenzhen Campus, Shenzhen, China; Bijan Najafi (ORCID 0000-0002-0320-8101), Michael E. DeBakey Department of Surgery, Baylor College of Medicine, Houston, TX, USA | abstract as given in the description field above | https://ieeexplore.ieee.org/document/9978749/ | Dementia; motoric cognitive risk syndrome; telehealth; tele-medicine; deep residual neural network; mobile health |
spellingShingle | Zilong Liu; Changhong Wang; Guanzheng Liu; Bijan Najafi; Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function; IEEE Transactions on Neural Systems and Rehabilitation Engineering; Dementia; motoric cognitive risk syndrome; telehealth; tele-medicine; deep residual neural network; mobile health |
title | Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function |
title_full | Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function |
title_fullStr | Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function |
title_full_unstemmed | Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function |
title_short | Deep Neural Network-Based Video Processing to Obtain Dual-Task Upper-Extremity Motor Performance Toward Assessment of Cognitive and Motor Function |
title_sort | deep neural network based video processing to obtain dual task upper extremity motor performance toward assessment of cognitive and motor function |
topic | Dementia; motoric cognitive risk syndrome; telehealth; tele-medicine; deep residual neural network; mobile health |
url | https://ieeexplore.ieee.org/document/9978749/ |
work_keys_str_mv | AT zilongliu deepneuralnetworkbasedvideoprocessingtoobtaindualtaskupperextremitymotorperformancetowardassessmentofcognitiveandmotorfunction AT changhongwang deepneuralnetworkbasedvideoprocessingtoobtaindualtaskupperextremitymotorperformancetowardassessmentofcognitiveandmotorfunction AT guanzhengliu deepneuralnetworkbasedvideoprocessingtoobtaindualtaskupperextremitymotorperformancetowardassessmentofcognitiveandmotorfunction AT bijannajafi deepneuralnetworkbasedvideoprocessingtoobtaindualtaskupperextremitymotorperformancetowardassessmentofcognitiveandmotorfunction |