Deep multi-modal learning for radar-vision human sensing

Description

The emergence of the Internet of Things (IoT) has driven the proliferation of smart devices in daily life. These devices are set apart from traditional ones by their ability to perceive their physical surroundings through wireless sensors such as RGBD cameras, WiFi, LiDAR, and millimeter-wave (mmWave) radars. Prevalent vision-based sensing is unsuitable for indoor environments that demand privacy protection, are visually complex, or require low energy consumption. In this project, we propose 60–64 GHz mmWave radar as a low-cost, low-power, privacy-preserving solution with modest environmental requirements for 2D human pose estimation, one of the most fundamental human sensing tasks. In our method, supervision for mmWave-based human sensing is generated from synchronized RGB frames, and human pose landmarks are extracted from 5D mmWave point clouds by a point-transformer-based deep learning network. We gather a multi-modal dataset, perform feasibility studies across various application scenarios, and develop multiple experimental protocols that simulate obstacles likely to arise in real-world deployment. The results show that 60–64 GHz mmWave radar is viable for 2D human pose estimation and yields results comparable to vision-based solutions.
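The description outlines a cross-modal training setup: a vision pose estimator run on synchronized RGB frames supplies 2D keypoint labels, and a point-transformer-style network regresses those keypoints from radar point clouds. The PyTorch sketch below is a minimal, hypothetical illustration of that idea, not the project's actual code; the 17-joint COCO-style skeleton, the 5D point layout (x, y, z, Doppler velocity, intensity), and the random stand-in for the RGB teacher are all assumptions made for this sketch.

import torch
import torch.nn as nn

NUM_JOINTS = 17   # assumption: COCO-style 2D skeleton
POINT_DIMS = 5    # assumption: 5D points as (x, y, z, Doppler, intensity)

class RadarPoseNet(nn.Module):
    """Point-transformer-style regressor: embed each radar point, apply
    self-attention across the cloud, pool, and regress 2D joint coordinates."""
    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.embed = nn.Sequential(
            nn.Linear(POINT_DIMS, d_model), nn.ReLU(), nn.Linear(d_model, d_model)
        )
        layer = nn.TransformerEncoderLayer(
            d_model, n_heads, dim_feedforward=128, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(d_model, NUM_JOINTS * 2)  # (u, v) per joint

    def forward(self, points: torch.Tensor) -> torch.Tensor:
        # points: (batch, num_points, POINT_DIMS)
        tokens = self.encoder(self.embed(points))
        pooled = tokens.mean(dim=1)  # permutation-invariant pooling
        return self.head(pooled).view(-1, NUM_JOINTS, 2)

def rgb_teacher_keypoints(batch_size: int) -> torch.Tensor:
    """Stand-in for the vision branch: in the project, synchronized RGB frames
    go through a 2D pose estimator to produce labels; here we return random
    keypoints so the sketch runs end to end."""
    return torch.rand(batch_size, NUM_JOINTS, 2)

model = RadarPoseNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy training step: 128 radar points per frame, labels from the "teacher".
radar_points = torch.randn(4, 128, POINT_DIMS)
labels = rgb_teacher_keypoints(4)
optimizer.zero_grad()
loss = nn.functional.mse_loss(model(radar_points), labels)
loss.backward()
optimizer.step()
print(f"toy step loss: {loss.item():.4f}")

The mean pooling makes the prediction invariant to point ordering, which matters because radar point clouds have no canonical order; the real network would likely use local neighborhood attention rather than global attention over all points.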

Bibliographic Details
Main Author: Chen, Xinyan
Other Authors: Xie, Lihua
School: School of Electrical and Electronic Engineering
Degree: Bachelor of Engineering (Electrical and Electronic Engineering)
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2023
Subjects: Engineering::Electrical and electronic engineering
Citation: Chen, X. (2023). Deep multi-modal learning for radar-vision human sensing. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/167765
Online Access: https://hdl.handle.net/10356/167765