A Survey on Visual Mamba

State space models (SSMs) with selection mechanisms and hardware-aware architectures, namely Mamba, have recently shown significant potential in long-sequence modeling. Because the complexity of the transformer’s self-attention mechanism grows quadratically with image size, and with it the computational demands, researchers are currently exploring how to adapt Mamba for computer vision tasks. This paper is the first comprehensive survey that aims to provide an in-depth analysis of Mamba models within the domain of computer vision. It begins by exploring the foundational concepts contributing to Mamba’s success, including the SSM framework, selection mechanisms, and hardware-aware design. We then review vision Mamba models, categorizing them into foundational models and those enhanced with techniques such as convolution, recurrence, and attention. Furthermore, we investigate the widespread applications of Mamba in vision tasks, including its use as a backbone at various levels of vision processing. This encompasses general visual tasks, medical visual tasks (e.g., 2D/3D segmentation, classification, and image registration), and remote sensing visual tasks. In particular, we introduce general visual tasks at two levels: high/mid-level vision (e.g., object detection, segmentation, and video classification) and low-level vision (e.g., image super-resolution, image restoration, and visual generation). We hope this endeavor will spark additional interest within the community to address current challenges and further apply Mamba models in computer vision.
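For orientation, the SSM framework referenced above can be summarized in its standard discretized form (a sketch of the usual formulation, not reproduced from this record): an input sequence x_t is mapped to an output y_t through a hidden state h_t via

    h_t = \bar{A} h_{t-1} + \bar{B} x_t, \qquad y_t = C h_t,
    \bar{A} = \exp(\Delta A), \qquad \bar{B} = (\Delta A)^{-1} (\exp(\Delta A) - I)\, \Delta B.

Roughly speaking, Mamba’s selection mechanism makes \Delta, B, and C functions of the current input x_t rather than fixed parameters, and its hardware-aware design evaluates the resulting recurrence with a fused parallel scan that keeps the expanded state in fast on-chip memory.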

Bibliographic Details
Main Authors: Zhang, H, Zhu, Y, Wang, D, Zhang, L, Chen, T, Wang, Z, Ye, Z
Format: Journal article
Language: English
Published: MDPI, 2024
Institution: University of Oxford