Radar gesture recognition using deep learning: a multi-feature fusion approach

Gesture recognition is an important topic in the field of human-machine interaction. This research begins by reviewing three primary methods for gesture recognition: wearable sensors, vision-based approaches, and radar-based systems. FMCW millimeter-wave radar, with its ability to provide direct feature information on distance, velocity, and angle, along with privacy preservation and robustness to lighting conditions, offers notable technical advantages. Handling multi-feature information from radar is a critical challenge. This study applies signal preprocessing techniques, including constructing Range-Doppler Maps (RDM) and Range-Angle Maps (RAM) using fast Fourier transforms (FFT), enhanced with windowing and clutter suppression to improve data quality. Two neural network architectures are designed: a single-feature CNN+LSTM model and a dual-feature fusion model, aimed at classifying gestures based on RDM, RAM, or their combination. Test results demonstrate that the feature fusion model significantly outperforms the single-feature models, achieving a test accuracy of 97%, compared with 92% for the RAM-only model and 83% for the RDM-only model. Furthermore, the model exhibits real-time performance with an average inference time of 0.035 milliseconds per frame, making it suitable for practical applications. This work highlights the potential of integrating radar and deep learning for accurate, privacy-preserving gesture recognition in complex environments.
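The preprocessing step the abstract describes (range and Doppler FFTs with windowing and clutter suppression) might be sketched as follows. This is a minimal illustration, not the thesis implementation: the frame shape, the Hann window choice, and the mean-subtraction clutter filter are assumptions.

```python
import numpy as np

def range_doppler_map(frame: np.ndarray) -> np.ndarray:
    """Illustrative RDM construction from one FMCW radar frame.

    frame: complex IF samples, shape (n_chirps, n_samples)
    returns: magnitude RDM, shape (n_chirps, n_samples)
    """
    # Window along fast time to reduce range sidelobes
    win = np.hanning(frame.shape[1])
    range_fft = np.fft.fft(frame * win, axis=1)   # range FFT (fast time)

    # Simple clutter suppression: remove static returns by subtracting
    # the per-range-bin mean over chirps (zeroes the zero-Doppler bin)
    range_fft -= range_fft.mean(axis=0, keepdims=True)

    # Doppler FFT (slow time), shifted so zero Doppler sits in the middle
    doppler_fft = np.fft.fftshift(np.fft.fft(range_fft, axis=0), axes=0)
    return np.abs(doppler_fft)

# Random data standing in for real radar samples
frame = np.random.randn(64, 128) + 1j * np.random.randn(64, 128)
rdm = range_doppler_map(frame)
```

A RAM would be built analogously, with the second FFT taken across receive antennas instead of across chirps.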

Bibliographic Details
Main Author: Wu, Huan
Other Authors: Wen, Bihan
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2025
Subjects: Computer and Information Science; Engineering; Millimeter-wave radar; Gesture recognition; Feature fusion; CNN; LSTM
Online Access: https://hdl.handle.net/10356/182666
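The dual-feature fusion model summarized in the abstract (per-frame CNN encoders for RDM and RAM, concatenated and fed to an LSTM) could be sketched as below. Layer sizes, the number of gesture classes, and concatenation as the fusion mechanism are illustrative assumptions, not details taken from the thesis.

```python
import torch
import torch.nn as nn

class FusionGestureNet(nn.Module):
    """Illustrative dual-feature CNN+LSTM: per-frame CNN features from the
    RDM and RAM sequences are concatenated, then an LSTM models the gesture
    dynamics over time. All sizes are arbitrary choices for the sketch."""

    def __init__(self, n_classes: int = 6):
        super().__init__()
        def branch():  # small per-frame CNN encoder, output dim 16*4*4 = 256
            return nn.Sequential(
                nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.rdm_cnn = branch()
        self.ram_cnn = branch()
        self.lstm = nn.LSTM(input_size=512, hidden_size=64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, rdm_seq, ram_seq):
        # rdm_seq, ram_seq: (batch, time, 1, H, W)
        b, t = rdm_seq.shape[:2]
        f1 = self.rdm_cnn(rdm_seq.flatten(0, 1)).view(b, t, -1)
        f2 = self.ram_cnn(ram_seq.flatten(0, 1)).view(b, t, -1)
        fused = torch.cat([f1, f2], dim=-1)   # feature-level fusion
        out, _ = self.lstm(fused)             # temporal modelling
        return self.head(out[:, -1])          # classify from the last step

model = FusionGestureNet()
logits = model(torch.randn(2, 10, 1, 32, 32), torch.randn(2, 10, 1, 32, 32))
```

The single-feature baselines from the abstract correspond to using one branch alone, with the LSTM input size halved to 256.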
School: School of Electrical and Electronic Engineering
Degree: Master's degree
Citation: Wu, H. (2025). Radar gesture recognition using deep learning: a multi-feature fusion approach. Master's thesis, Nanyang Technological University, Singapore. https://hdl.handle.net/10356/182666