Learning multi-modal scale-aware attentions for efficient and robust road segmentation

Multi-modal fusion has proven to be beneficial to road segmentation in autonomous driving, where depth is commonly used as complementary data for RGB images to provide robust 3D geometry information. Existing methods adopt an encoder-decoder structure to fuse two modalities for segmentation through...
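As a rough illustration of the generic encoder-decoder RGB-depth fusion setup the abstract refers to (not the thesis's scale-aware attention model, whose details are truncated above), a minimal PyTorch sketch with assumed layer sizes might look like this:

```python
# Minimal sketch of encoder-decoder RGB-depth fusion for road segmentation.
# All layer sizes and the fusion scheme are illustrative assumptions, not the
# thesis's actual architecture.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Downsampling convolutional block (stride-2 conv + BN + ReLU)
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, stride=2, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class RGBDFusionSeg(nn.Module):
    def __init__(self, num_classes=2):
        super().__init__()
        # Separate encoders for RGB (3 channels) and depth (1 channel)
        self.rgb_enc = nn.Sequential(conv_block(3, 32), conv_block(32, 64))
        self.depth_enc = nn.Sequential(conv_block(1, 32), conv_block(32, 64))
        # Simple fusion: concatenate the two feature maps and mix with a 1x1 conv
        self.fuse = nn.Conv2d(128, 64, kernel_size=1)
        # Decoder upsamples fused features back to input resolution
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.ConvTranspose2d(32, num_classes, 4, stride=2, padding=1),
        )

    def forward(self, rgb, depth):
        fused = torch.cat([self.rgb_enc(rgb), self.depth_enc(depth)], dim=1)
        return self.decoder(self.fuse(fused))

# Example: per-pixel road / not-road logits for a 256x256 RGB-D input
model = RGBDFusionSeg(num_classes=2)
logits = model(torch.randn(1, 3, 256, 256), torch.randn(1, 1, 256, 256))
print(logits.shape)  # torch.Size([1, 2, 256, 256])
```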


Bibliographic Details
Main Author: Zhou, Yunjiao
Other Authors: Xie, Lihua
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Subjects:
Online Access: https://hdl.handle.net/10356/159277