IMU-Based Gait Recognition Using Convolutional Neural Networks and Multi-Sensor Fusion


Bibliographic Details
Main Authors: Omid Dehzangi, Mojtaba Taherisadr, Raghvendar ChangalVala
Format: Article
Language:English
Published: MDPI AG 2017-11-01
Series:Sensors
Subjects:
Online Access:https://www.mdpi.com/1424-8220/17/12/2735
_version_ 1798040937407971328
description The widespread use of wearable sensors, such as those in smartwatches, has provided continuous access to valuable user-generated data such as human motion, which can be used to identify an individual based on their motion patterns, such as gait. Several methods have been suggested to extract heuristic and high-level features from gait motion data in order to identify discriminative gait signatures and distinguish the target individual from others. However, manual, hand-crafted feature extraction is error-prone and subjective. Furthermore, the motion data collected from inertial sensors have a complex structure, and the detachment between the manual feature-extraction module and the predictive learning model might limit generalization capabilities. In this paper, we propose a novel approach to human gait identification using a time-frequency (TF) expansion of human gait cycles in order to capture the joint two-dimensional (2D) spectral and temporal patterns of gait cycles. We then design a deep convolutional neural network (DCNN) to extract discriminative features from the 2D expanded gait cycles and to jointly optimize the identification model and the spectro-temporal features in a discriminative fashion. We collect raw motion data synchronously from five inertial sensors placed at the chest, lower back, right wrist, right knee, and right ankle of each human subject in order to investigate the impact of sensor location on gait identification performance. We then present two methods for early (input-level) and late (decision-score-level) multi-sensor fusion to improve the generalization performance of gait identification. Specifically, we propose the minimum error score fusion (MESF) method, which discriminatively learns the linear fusion weights of individual DCNN scores at the decision level by iteratively minimizing the error rate on the training data.
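The TF expansion described above turns each segmented 1D gait cycle into a 2D spectro-temporal image suitable as DCNN input. A minimal sketch of this step is shown below; the sampling rate, cycle length, and spectrogram parameters are illustrative assumptions, not values taken from the paper, and a random signal stands in for a real accelerometer-axis gait cycle.

```python
# Sketch: 2D time-frequency (TF) expansion of one segmented gait cycle.
# All parameters here are assumed for illustration only.
import numpy as np
from scipy import signal

fs = 100.0                                  # assumed IMU sampling rate (Hz)
rng = np.random.default_rng(0)
gait_cycle = rng.standard_normal(128)       # stand-in for one gait cycle (one sensor axis)

# Short-time Fourier analysis expands the 1D cycle into a 2D
# (frequency x time) power map that a DCNN can consume like an image.
freqs, times, Sxx = signal.spectrogram(gait_cycle, fs=fs,
                                       nperseg=32, noverlap=24)
tf_image = 10 * np.log10(Sxx + 1e-12)       # log power, shape (freq bins, time frames)
print(tf_image.shape)
```

Each of the five IMUs would yield such an image per gait cycle; early (input-level) fusion can then stack these images as input channels to a single DCNN.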
Ten subjects participated in this study; hence, the problem is a 10-class identification task. In our experiments, 91% subject identification accuracy was achieved using the best individual IMU with the 2DTF-DCNN. We then investigated the proposed early and late sensor fusion approaches, which improved the gait identification accuracy of the system to 93.36% and 97.06%, respectively.
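The decision-level fusion idea can be sketched as follows: combine per-sensor classifier score matrices with linear weights and adjust the weights to reduce the training error rate. This is only an illustrative greedy coordinate search in the spirit of MESF; the function names (`mesf_fit`, `error_rate`), the update rule, and the synthetic data are assumptions and do not reproduce the paper's exact algorithm.

```python
# Sketch of decision-score-level fusion in the spirit of MESF:
# iteratively perturb one fusion weight at a time, keeping a change
# only when it lowers the training error rate. Illustrative only.
import numpy as np

def error_rate(weights, scores, labels):
    # scores: (n_sensors, n_samples, n_classes); fused prediction = argmax
    fused = np.tensordot(weights, scores, axes=1)      # (n_samples, n_classes)
    return np.mean(fused.argmax(axis=1) != labels)

def mesf_fit(scores, labels, steps=(0.5, 0.1), n_iter=20):
    n_sensors = scores.shape[0]
    w = np.ones(n_sensors) / n_sensors                 # start from equal weights
    best = error_rate(w, scores, labels)
    for _ in range(n_iter):
        improved = False
        for i in range(n_sensors):
            for step in steps:
                for delta in (step, -step):
                    cand = w.copy()
                    cand[i] = max(0.0, cand[i] + delta)
                    err = error_rate(cand, scores, labels)
                    if err < best:                     # keep only improvements
                        w, best = cand, err
                        improved = True
        if not improved:
            break
    return w, best

# Tiny synthetic demo: sensor 0 carries the class signal, sensor 1 is noise.
rng = np.random.default_rng(1)
labels = rng.integers(0, 3, size=60)
good = np.eye(3)[labels] + 0.3 * rng.standard_normal((60, 3))
noise = rng.standard_normal((60, 3))
scores = np.stack([good, noise])
w, err = mesf_fit(scores, labels)
```

Because updates are accepted only when they lower the training error, the fused error can never exceed that of the equal-weight starting point, which matches the monotone, iterative character the abstract attributes to MESF.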
id doaj.art-482c2835336c4999af06936bed640395
institution Directory Open Access Journal
issn 1424-8220
doi 10.3390/s17122735
affiliation Computer and Information Science Department, University of Michigan-Dearborn, Dearborn, MI 48128, USA (all three authors)
topic gait identification
inertial motion analysis
spectro-temporal representation
deep convolutional neural network
multi-sensor fusion
error minimization