WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi


Bibliographic Details
Main Authors: Mingze Yang, Hai Zhu, Runzhe Zhu, Fei Wu, Ling Yin, Yuncheng Yang
Format: Article
Language: English
Published: MDPI AG, 2023-02-01
Series: Sensors
Subjects: body-coordinate velocity profile, channel state information, human activity recognition, transformer, WiFi signals
Online Access: https://www.mdpi.com/1424-8220/23/5/2612
author Mingze Yang
Hai Zhu
Runzhe Zhu
Fei Wu
Ling Yin
Yuncheng Yang
collection DOAJ
description The past decade has demonstrated the potential of human activity recognition (HAR) with WiFi signals owing to non-invasiveness and ubiquity. Previous research has largely concentrated on enhancing precision through sophisticated models. However, the complexity of recognition tasks has been largely neglected. Thus, the performance of the HAR system is markedly diminished when tasked with increasing complexities, such as a larger classification number, the confusion of similar actions, and signal distortion. To address this issue, we eliminated conventional convolutional and recurrent backbones and proposed WiTransformer, a novel tactic based on pure Transformers. Nevertheless, Transformer-like models are typically suited to large-scale datasets as pretraining models, according to the experience of the Vision Transformer. Therefore, we adopted the Body-coordinate Velocity Profile, a cross-domain WiFi signal feature derived from the channel state information, to reduce the data-scale threshold of the Transformers. Based on this, we propose two modified transformer architectures, the united spatiotemporal Transformer (UST) and the separated spatiotemporal Transformer (SST), to realize WiFi-based human gesture recognition models with task robustness. SST extracts spatial and temporal data features intuitively, using two separate encoders. By contrast, UST can extract the same three-dimensional features with only a one-dimensional encoder, owing to its well-designed structure. We evaluated SST and UST on four designed task datasets (TDSs) with varying task complexities. The experimental results demonstrate that UST achieved a recognition accuracy of 86.16% on the most complex task dataset, TDSs-22, outperforming the other popular backbones. Moreover, its accuracy decreases by at most 3.18% when the task complexity increases from TDSs-6 to TDSs-22, which is 0.14–0.2 times the decrease of the other models. However, as predicted and analyzed, SST fails because of an excessive lack of inductive bias and the limited scale of the training data.
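The abstract's central architectural contrast is that SST uses two encoders (a spatial encoder within each BVP frame, then a temporal encoder across frames), while UST extracts the same spatiotemporal features with a single encoder. The following is a minimal, hypothetical sketch of that idea in PyTorch, not the authors' implementation; the input shape (25 frames of 20x20 BVP maps), the 4x4 patching, the embedding width, mean-pooling, and the omission of positional encodings are all assumptions made for brevity.

```python
# Minimal, hypothetical sketch of the SST/UST contrast described above -- not
# the paper's code. Input shape (T frames of HxW BVP maps), patch size, embed
# dim, mean-pooling, and the missing positional encodings are assumptions.
import torch
import torch.nn as nn

T, H, W, P, D, C = 25, 20, 20, 4, 64, 22      # frames, map size, patch, dim, classes
N = (H // P) * (W // P)                       # spatial patches per frame

def encoder(depth=2, heads=4):
    layer = nn.TransformerEncoderLayer(d_model=D, nhead=heads, batch_first=True)
    return nn.TransformerEncoder(layer, num_layers=depth)

def patchify(x, patch):
    # (B, T, H, W) -> (B, T, N, patch*patch) non-overlapping patches per frame
    B, t, h, w = x.shape
    return (x.unfold(2, patch, patch).unfold(3, patch, patch)
             .reshape(B, t, -1, patch * patch))

class SST(nn.Module):
    """Separated spatiotemporal Transformer: spatial encoder within each
    frame, then a temporal encoder across the per-frame summaries."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(P * P, D)
        self.spatial, self.temporal = encoder(), encoder()
        self.head = nn.Linear(D, C)

    def forward(self, x):                                   # x: (B, T, H, W)
        B = x.size(0)
        tok = self.embed(patchify(x, P))                    # (B, T, N, D)
        frame = self.spatial(tok.reshape(B * T, N, D))      # spatial attention
        frame = frame.mean(dim=1).view(B, T, D)             # per-frame summary
        return self.head(self.temporal(frame).mean(dim=1))  # temporal attention

class UST(nn.Module):
    """United spatiotemporal Transformer: one encoder attends jointly over
    all T*N spatiotemporal tokens."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Linear(P * P, D)
        self.joint = encoder()
        self.head = nn.Linear(D, C)

    def forward(self, x):                                   # x: (B, T, H, W)
        B = x.size(0)
        tok = self.embed(patchify(x, P)).reshape(B, T * N, D)
        return self.head(self.joint(tok).mean(dim=1))

logits = UST()(torch.randn(2, T, H, W))                     # -> (2, 22)
```

In this toy form, SST attends over the 25 patches of each frame and then over the 25 frame summaries, whereas UST attends once over all 625 spatiotemporal tokens, which mirrors the abstract's description of a single encoder extracting the joint features.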
first_indexed 2024-03-11T07:10:17Z
format Article
id doaj.art-4a525bcd2a6644b2890164584b7e1d4d
institution Directory Open Access Journal
issn 1424-8220
language English
last_indexed 2024-03-11T07:10:17Z
publishDate 2023-02-01
publisher MDPI AG
record_format Article
series Sensors
spelling doaj.art-4a525bcd2a6644b2890164584b7e1d4d (2023-11-17T08:37:11Z): Yang, M.; Zhu, H.; Zhu, R.; Wu, F.; Yin, L.; Yang, Y. (all with the School of Electronic and Electrical Engineering, Shanghai University of Engineering Science, Shanghai 201602, China). WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi. Sensors 23(5), 2612, MDPI AG, 2023-02-01, ISSN 1424-8220, doi:10.3390/s23052612.
title WiTransformer: A Novel Robust Gesture Recognition Sensing Model with WiFi
topic body-coordinate velocity profile
channel state information
human activity recognition
transformer
WiFi signals
url https://www.mdpi.com/1424-8220/23/5/2612