Distilling base-and-meta network with contrastive learning for few-shot semantic segmentation

Bibliographic Details
Main Authors: Xinyue Chen, Yueyi Wang, Yingyue Xu, Miaojing Shi
Format: Article
Language: English
Published: Springer, 2023-11-01
Series: Autonomous Intelligent Systems
Online Access: https://doi.org/10.1007/s43684-023-00058-2
Description
Summary: Current studies in few-shot semantic segmentation mostly use meta-learning frameworks to obtain models that generalize to new categories. However, because these models are trained on base classes with abundant annotated samples, they are biased towards those base classes, which causes semantic confusion and ambiguity between base classes and new classes. One strategy is to use an additional base learner to recognize objects of the base classes and then refine the predictions output by the meta learner. In this setting, both the interaction between the two learners and the way their results are combined matter. This paper proposes a new model, the Distilling Base and Meta (DBAM) network, which uses a self-attention mechanism and contrastive learning to enhance few-shot segmentation performance. First, a self-attention-based ensemble module (SEM) is proposed to produce a more accurate adjustment factor for fusing the two learners' predictions. Second, a prototype feature optimization module (PFOM) is proposed to provide interaction between the two learners; it introduces a contrastive learning loss that strengthens the ability to distinguish the base classes from the target class. Extensive experiments demonstrate that the method improves performance on PASCAL-5^i under both 1-shot and 5-shot settings.
ISSN:2730-616X