Extreme Learning Machine/Finite Impulse Response Filter and Vision Data-Assisted Inertial Navigation System-Based Human Motion Capture

To obtain accurate position information, this work proposes a method that fuses extreme learning machine (ELM)/finite impulse response (FIR) filters with vision data for inertial navigation system (INS)-based human motion capture. In the proposed method, when vision data are available, the vision-based human position is used as the input to an FIR filter, which outputs an accurate human position; in parallel, a second FIR filter estimates the human position from INS data, and an ELM is trained to map the output of the INS-based FIR filter to its corresponding error. When vision data are unavailable, the INS-based FIR filter provides the human position and the ELM trained in the preceding stage compensates for its estimation error. For the right-arm elbow, the proposed method improves the cumulative distribution function (CDF) of the position errors by about 12.71%, which demonstrates its effectiveness.
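
The following Python sketch illustrates the two-stage idea summarized above under simplifying assumptions; it is not the authors' implementation. A plain moving-average filter stands in for the paper's FIR position filters, a basic sigmoid ELM stands in for the paper's ELM, and the trajectory, noise model, and the helper names `fir_filter` and `ELM` are hypothetical, chosen only to make the example self-contained.

```python
# Minimal sketch of the described vision/INS fusion scheme (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

def fir_filter(measurements, taps=5):
    """Causal FIR stand-in (uniform taps): mean of the last `taps` samples per axis."""
    out = np.empty_like(measurements, dtype=float)
    for k in range(len(measurements)):
        lo = max(0, k - taps + 1)
        out[k] = measurements[lo:k + 1].mean(axis=0)
    return out

class ELM:
    """Single-hidden-layer extreme learning machine: random input weights,
    output weights solved by least squares."""
    def __init__(self, n_in, n_hidden=60):
        self.W = rng.standard_normal((n_in, n_hidden))
        self.b = rng.standard_normal(n_hidden)
        self.beta = None
    def _hidden(self, X):
        return 1.0 / (1.0 + np.exp(-(X @ self.W + self.b)))  # sigmoid activation
    def fit(self, X, Y):
        H = self._hidden(X)
        self.beta = np.linalg.pinv(H) @ Y
    def predict(self, X):
        return self._hidden(X) @ self.beta

# Synthetic example: true 2-D elbow trajectory, biased/noisy INS fixes,
# and more accurate vision fixes.
t = np.linspace(0, 10, 500)
true_pos = np.stack([np.sin(t), np.cos(0.5 * t)], axis=1)
ins_pos = true_pos + np.array([0.15, -0.10]) + rng.normal(0, 0.05, true_pos.shape)
vision_pos = true_pos + rng.normal(0, 0.02, true_pos.shape)

# Stage 1 (vision available): run both FIR filters and train the ELM to map
# the INS-based FIR output to its error relative to the vision-based FIR output.
fir_vision = fir_filter(vision_pos)
fir_ins = fir_filter(ins_pos)
train = slice(0, 400)
elm = ELM(n_in=2)
elm.fit(fir_ins[train], fir_vision[train] - fir_ins[train])

# Stage 2 (vision outage): correct the INS-based FIR output with the
# ELM-predicted error.
test = slice(400, 500)
corrected = fir_ins[test] + elm.predict(fir_ins[test])

raw_err = np.linalg.norm(fir_ins[test] - true_pos[test], axis=1).mean()
cor_err = np.linalg.norm(corrected - true_pos[test], axis=1).mean()
print(f"mean position error, FIR(INS) only : {raw_err:.4f}")
print(f"mean position error, FIR(INS) + ELM: {cor_err:.4f}")
```

On this toy data the ELM learns the systematic INS bias during the vision-available stage and removes most of it during the simulated outage, which mirrors the role the paper assigns to the ELM error model.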

Bibliographic Details
Main Authors: Yuan Xu, Rui Gao, Ahong Yang, Kun Liang, Zhongwei Shi, Mingxu Sun, Tao Shen
Format: Article
Language: English
Published: MDPI AG 2023-11-01
Series: Micromachines
Subjects: INS; vision; ELM; FIR; human position
Online Access: https://www.mdpi.com/2072-666X/14/11/2088
author Yuan Xu
Rui Gao
Ahong Yang
Kun Liang
Zhongwei Shi
Mingxu Sun
Tao Shen
collection DOAJ
description To obtain accurate position information, this work proposes a method that fuses extreme learning machine (ELM)/finite impulse response (FIR) filters with vision data for inertial navigation system (INS)-based human motion capture. In the proposed method, when vision data are available, the vision-based human position is used as the input to an FIR filter, which outputs an accurate human position; in parallel, a second FIR filter estimates the human position from INS data, and an ELM is trained to map the output of the INS-based FIR filter to its corresponding error. When vision data are unavailable, the INS-based FIR filter provides the human position and the ELM trained in the preceding stage compensates for its estimation error. For the right-arm elbow, the proposed method improves the cumulative distribution function (CDF) of the position errors by about 12.71%, which demonstrates its effectiveness.
format Article
id doaj.art-ba5be2d2935b4add9c5916dfd4d54194
institution Directory Open Access Journal
issn 2072-666X
language English
publishDate 2023-11-01
publisher MDPI AG
series Micromachines
spelling doaj.art-ba5be2d2935b4add9c5916dfd4d54194 (2023-11-24T14:56:31Z)
	Micromachines, Vol. 14, Iss. 11, Art. 2088, published 2023-11-01 by MDPI AG; ISSN 2072-666X; doi:10.3390/mi14112088
	Affiliations: Yuan Xu, Rui Gao, Kun Liang, Zhongwei Shi, Mingxu Sun, Tao Shen (School of Electrical Engineering, University of Jinan, Jinan 250022, China); Ahong Yang (School of Music, University of Jinan, Jinan 250022, China)
	Online access: https://www.mdpi.com/2072-666X/14/11/2088
title Extreme Learning Machine/Finite Impulse Response Filter and Vision Data-Assisted Inertial Navigation System-Based Human Motion Capture
topic INS
vision
ELM
FIR
human position
url https://www.mdpi.com/2072-666X/14/11/2088