Learning multi-modal scale-aware attentions for efficient and robust road segmentation
Multi-modal fusion has proven beneficial for road segmentation in autonomous driving, where depth is commonly used as complementary data to RGB images, providing robust 3D geometric information. Existing methods adopt an encoder-decoder structure to fuse the two modalities for segmentation through...
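The abstract describes fusing RGB and depth features inside an encoder-decoder. As a minimal illustrative sketch only (not the thesis's actual method), the snippet below shows one common way such fusion can work: a per-channel gate, computed from both modalities, mixes RGB and depth feature maps. The function name `fuse_features` and the sigmoid gating scheme are assumptions for illustration.

```python
import numpy as np

def fuse_features(rgb_feat, depth_feat):
    """Gate-based RGB-depth feature fusion (hypothetical sketch).

    Both inputs are feature maps of shape (C, H, W). A per-channel
    weight in [0, 1] decides how much each modality contributes.
    """
    # Channel descriptor: global average pooling over both modalities
    desc = rgb_feat.mean(axis=(1, 2)) + depth_feat.mean(axis=(1, 2))
    # Sigmoid gate, one scalar weight per channel
    gate = 1.0 / (1.0 + np.exp(-desc))
    gate = gate[:, None, None]  # broadcast over spatial dims
    # Convex combination of the two modalities per channel
    return gate * rgb_feat + (1.0 - gate) * depth_feat
```

In a real network the gate would be produced by learned layers rather than a fixed pooling-plus-sigmoid; this sketch only conveys the fusion structure.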
Main Author: Zhou, Yunjiao
Other Authors: Xie, Lihua
Format: Thesis-Master by Coursework
Language: English
Published: Nanyang Technological University, 2022
Online Access: https://hdl.handle.net/10356/159277
Similar Items
- Automatic knee segmentation from multi-contrast MR images
  by: Zhang, Kunlei
  Published: (2013)
- Hardware implementation of a power efficient CGRA with single-cycle multi-hop datapaths
  by: Su, Lingzhi
  Published: (2022)
- WiFi-vision enabled identification via multi-modal gait recognition
  by: Deng, Lang
  Published: (2022)
- Visual-inertial-GPS fusion for robust UAV navigation
  by: Tan, Zhi Heng
  Published: (2023)
- Robust deep learning on graphs using neural PDEs
  by: Gui, Pengzhe
  Published: (2023)