DeepInteraction++: multi-modality interaction for autonomous driving
Existing top-performing autonomous driving systems typically rely on a multi-modal fusion strategy for reliable scene understanding. However, this design is fundamentally restricted: it overlooks modality-specific strengths and ultimately hampers model performance. To address this limitation, we introduce a novel modality interaction strategy that allows individual per-modality representations to be learned and maintained throughout, so that their unique characteristics can be exploited across the whole perception pipeline. To demonstrate the effectiveness of the proposed strategy, we design DeepInteraction++, a multi-modal interaction framework characterized by a multi-modal representational interaction encoder and a multi-modal predictive interaction decoder. Specifically, the encoder is implemented as a dual-stream Transformer with specialized attention operations for information exchange and integration between separate modality-specific representations. Our multi-modal representational learning incorporates both object-centric, precise sampling-based feature alignment and global dense information spreading, which is essential for the more challenging planning task. The decoder iteratively refines the predictions by alternately aggregating information from the separate representations in a unified, modality-agnostic manner, realizing multi-modal predictive interaction. Extensive experiments demonstrate the superior performance of the proposed framework on both 3D object detection and end-to-end autonomous driving tasks. Our code is available at https://github.com/fudan-zvg/DeepInteraction.
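As a rough, non-authoritative sketch of the dual-stream interaction idea summarised above, the snippet below shows two modality-specific token streams (camera features and LiDAR BEV features) exchanging information through standard cross-attention while each per-modality representation is kept and updated separately. This is only an illustration under assumed shapes and names (`DualStreamInteractionLayer`, `cam_tokens`, `lidar_tokens` are hypothetical), not the DeepInteraction++ implementation; see the linked repository for the actual code.

```python
# Illustrative sketch only: two modality streams interact via cross-attention
# but are never collapsed into a single fused representation.
import torch
import torch.nn as nn


class DualStreamInteractionLayer(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        # Cross-attention in both directions: each stream queries the other.
        self.cam_from_lidar = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.lidar_from_cam = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.cam_norm = nn.LayerNorm(dim)
        self.lidar_norm = nn.LayerNorm(dim)

    def forward(self, cam_tokens: torch.Tensor, lidar_tokens: torch.Tensor):
        # cam_tokens:   (B, N_cam, dim) flattened camera features
        # lidar_tokens: (B, N_bev, dim) flattened LiDAR BEV features
        cam_upd, _ = self.cam_from_lidar(cam_tokens, lidar_tokens, lidar_tokens)
        lidar_upd, _ = self.lidar_from_cam(lidar_tokens, cam_tokens, cam_tokens)
        # Residual updates keep each per-modality representation intact.
        cam_tokens = self.cam_norm(cam_tokens + cam_upd)
        lidar_tokens = self.lidar_norm(lidar_tokens + lidar_upd)
        return cam_tokens, lidar_tokens


if __name__ == "__main__":
    layer = DualStreamInteractionLayer()
    cam = torch.randn(2, 1024, 256)
    bev = torch.randn(2, 900, 256)
    cam, bev = layer(cam, bev)
    print(cam.shape, bev.shape)  # (2, 1024, 256) and (2, 900, 256)
```

The residual, per-stream updates in this sketch mirror the abstract's central point: both modality-specific representations survive every interaction step, rather than being merged once and discarded.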
Main Authors: | Yang, Z; Song, N; Li, W; Zhu, X; Zhang, L; Torr, PHS |
Format: | Internet publication |
Language: | English |
Published: | 2024 |
collection | OXFORD |
id | oxford-uuid:fe232b7a-568e-44cf-84ac-56d5d37ae1c6 |
institution | University of Oxford |
record_format | dspace |