Multi-sensor fusion based on multiple classifier systems for human activity identification


Bibliographic Details
Main Authors: Nweke, Henry Friday, Teh, Ying Wah, Mujtaba, Ghulam, Alo, Uzoma Rita, Al-garadi, Mohammed Ali
Format: Article
Published: SpringerOpen 2019
Subjects:
_version_ 1825722129705861120
author Nweke, Henry Friday
Teh, Ying Wah
Mujtaba, Ghulam
Alo, Uzoma Rita
Al-garadi, Mohammed Ali
author_facet Nweke, Henry Friday
Teh, Ying Wah
Mujtaba, Ghulam
Alo, Uzoma Rita
Al-garadi, Mohammed Ali
author_sort Nweke, Henry Friday
collection UM
description Multimodal sensors in healthcare applications have been increasingly researched because they facilitate automatic and comprehensive monitoring of human behaviors, high-intensity sports management, energy expenditure estimation, and postural detection. Recent studies have shown the importance of multi-sensor fusion to achieve robustness and high-performance generalization, provide diversity, and tackle challenging issues that may be difficult to resolve with single-sensor values. The aim of this study is to propose an innovative multi-sensor fusion framework to improve human activity detection performance and reduce the misrecognition rate. The study proposes a multi-view ensemble algorithm to integrate the predicted values of different motion sensors. To this end, computationally efficient classification algorithms such as decision tree, logistic regression and k-Nearest Neighbors were used to implement diverse, flexible and dynamic human activity detection systems. To provide a compact feature vector representation, we studied a hybrid of a bio-inspired evolutionary search algorithm and a correlation-based feature selection method, and evaluated their impact on the feature vectors extracted from each individual sensor modality. Furthermore, we utilized the Synthetic Minority Over-sampling Technique (SMOTE) to reduce the impact of class imbalance and improve performance results. With the above methods, this paper provides a unified framework to resolve major challenges in human activity identification. The performance results obtained using two publicly available datasets showed significant improvement over baseline methods in the detection of specific activity details and a reduced error rate.
The performance results of our evaluation showed a 3% to 24% improvement in accuracy, recall, precision, F-measure and detection ability (AUC) compared to single sensors and feature-level fusion. The benefit of the proposed multi-sensor fusion is the ability to exploit the distinct feature characteristics of individual sensors and multiple classifier systems to improve recognition accuracy. In addition, the study suggests the promising potential of a hybrid feature selection approach and diversity-based multiple classifier systems to improve mobile and wearable sensor-based human activity detection and health monitoring systems. © 2019, The Author(s).
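The multi-view ensemble described in the abstract combines the predicted labels of per-sensor classifiers at the decision level. As an illustrative sketch only (the paper's exact combination rule is not given in this record), a minimal majority-vote fusion of sensor-specific predictions might look like this; the function name and example labels are hypothetical:

```python
# Hypothetical sketch: decision-level fusion of per-sensor classifier outputs
# by majority vote, a common baseline for multi-view ensembles.
from collections import Counter

def fuse_votes(per_sensor_predictions):
    """Fuse label sequences from several sensor views into one sequence.

    per_sensor_predictions: list of equal-length label lists, one list per
    sensor view (e.g. accelerometer, gyroscope, magnetometer classifiers).
    """
    fused = []
    for votes in zip(*per_sensor_predictions):
        counts = Counter(votes)
        top = max(counts.values())
        # Tie-break deterministically: keep the earliest view's winning vote.
        fused.append(next(v for v in votes if counts[v] == top))
    return fused

# Example: three sensor-specific classifiers disagree on the third sample.
acc  = ["walk", "run", "sit"]
gyro = ["walk", "run", "run"]
mag  = ["walk", "walk", "sit"]
print(fuse_votes([acc, gyro, mag]))  # -> ['walk', 'run', 'sit']
```

In practice each view's predictions would come from one of the classifiers the study names (decision tree, logistic regression, k-NN) trained on that sensor's selected features; weighted or probability-level fusion are natural variants of the same idea.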
first_indexed 2024-03-06T06:00:57Z
format Article
id um.eprints-23807
institution Universiti Malaya
last_indexed 2024-03-06T06:00:57Z
publishDate 2019
publisher SpringerOpen
record_format dspace
spelling um.eprints-238072020-02-17T00:50:05Z http://eprints.um.edu.my/23807/ Multi-sensor fusion based on multiple classifier systems for human activity identification Nweke, Henry Friday Teh, Ying Wah Mujtaba, Ghulam Alo, Uzoma Rita Al-garadi, Mohammed Ali QA75 Electronic computers. Computer science SpringerOpen 2019 Article PeerReviewed Nweke, Henry Friday and Teh, Ying Wah and Mujtaba, Ghulam and Alo, Uzoma Rita and Al-garadi, Mohammed Ali (2019) Multi-sensor fusion based on multiple classifier systems for human activity identification. Human-centric Computing and Information Sciences, 9 (1). p. 34. ISSN 2192-1962, DOI https://doi.org/10.1186/s13673-019-0194-5. doi:10.1186/s13673-019-0194-5
spellingShingle QA75 Electronic computers. Computer science
Nweke, Henry Friday
Teh, Ying Wah
Mujtaba, Ghulam
Alo, Uzoma Rita
Al-garadi, Mohammed Ali
Multi-sensor fusion based on multiple classifier systems for human activity identification
title Multi-sensor fusion based on multiple classifier systems for human activity identification
title_full Multi-sensor fusion based on multiple classifier systems for human activity identification
title_fullStr Multi-sensor fusion based on multiple classifier systems for human activity identification
title_full_unstemmed Multi-sensor fusion based on multiple classifier systems for human activity identification
title_short Multi-sensor fusion based on multiple classifier systems for human activity identification
title_sort multi sensor fusion based on multiple classifier systems for human activity identification
topic QA75 Electronic computers. Computer science
work_keys_str_mv AT nwekehenryfriday multisensorfusionbasedonmultipleclassifiersystemsforhumanactivityidentification
AT tehyingwah multisensorfusionbasedonmultipleclassifiersystemsforhumanactivityidentification
AT mujtabaghulam multisensorfusionbasedonmultipleclassifiersystemsforhumanactivityidentification
AT alouzomarita multisensorfusionbasedonmultipleclassifiersystemsforhumanactivityidentification
AT algaradimohammedali multisensorfusionbasedonmultipleclassifiersystemsforhumanactivityidentification