Mine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labels
Recent studies on contrastive learning have achieved remarkable performance by leveraging only a few labels in the context of medical image segmentation. Existing methods mainly focus on instance discrimination and invariant mapping (i.e., pulling positive samples closer and pushing negative samples apart in the feature space)…
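The record's description below mentions an instance-discrimination objective in which positives are mined as nearest neighbours in feature space. As a rough illustration only, and not the paper's actual MONA code, the following is a minimal PyTorch sketch of an InfoNCE-style contrastive loss with nearest-neighbour positives; the function name `nn_contrastive_loss`, the `feature_bank`, and all hyperparameters are hypothetical.

```python
# Minimal sketch (assumptions noted above): an InfoNCE-style contrastive loss
# where the k most similar entries of a feature bank act as pseudo-positives
# for each anchor embedding. Anchors are assumed NOT to be stored in the bank,
# otherwise each anchor's top-1 neighbour would trivially be itself.
import torch
import torch.nn.functional as F


def nn_contrastive_loss(anchors, feature_bank, k=1, temperature=0.1):
    """anchors: (B, D) batch embeddings; feature_bank: (N, D) stored embeddings."""
    anchors = F.normalize(anchors, dim=1)
    bank = F.normalize(feature_bank, dim=1)

    # Cosine similarity between each anchor and every bank entry: (B, N).
    sim = anchors @ bank.t() / temperature

    # Indices of the k most similar bank entries serve as pseudo-positives.
    pos_idx = sim.topk(k, dim=1).indices            # (B, k)

    # Softmax over the bank; positives should receive high probability mass.
    log_prob = F.log_softmax(sim, dim=1)            # (B, N)
    pos_log_prob = log_prob.gather(1, pos_idx)      # (B, k)

    # Average negative log-likelihood of the nearest-neighbour positives.
    return -pos_log_prob.mean()


if __name__ == "__main__":
    # Usage sketch with random embeddings standing in for encoder outputs.
    anchors = torch.randn(8, 128)
    feature_bank = torch.randn(256, 128)
    print(float(nn_contrastive_loss(anchors, feature_bank, k=3)))
```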
Main Authors: | You, C, Dai, W, Liu, F, Min, Y, Dvornek, NC, Li, X, Clifton, DA, Staib, L, Duncan, JS |
---|---|
Format: | Journal article |
Language: | English |
Published: | IEEE 2024 |
_version_ | 1824459305221357568 |
---|---|
author | You, C Dai, W Liu, F Min, Y Dvornek, NC Li, X Clifton, DA Staib, L Duncan, JS |
author_facet | You, C Dai, W Liu, F Min, Y Dvornek, NC Li, X Clifton, DA Staib, L Duncan, JS |
author_sort | You, C |
collection | OXFORD |
description | Recent studies on contrastive learning have achieved remarkable performance by leveraging only a few labels in the context of medical image segmentation. Existing methods mainly focus on instance discrimination and invariant mapping (i.e., pulling positive samples closer and pushing negative samples apart in the feature space). However, they face three common pitfalls: (1) tailness: medical image data usually follows an implicit long-tail class distribution; blindly leveraging all pixels in training can therefore lead to data imbalance issues and cause deteriorated performance; (2) consistency: it remains unclear whether a segmentation model has learned meaningful yet consistent anatomical features, due to the intra-class variations between different anatomical features; and (3) diversity: the intra-slice correlations within the entire dataset have received significantly less attention. This motivates us to seek a principled approach for strategically making use of the dataset itself to discover similar yet distinct samples from different anatomical views. In this paper, we introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA), and make three contributions. First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features, mainly due to the lack of a supervision signal. We show two simple solutions towards learning invariances: the use of stronger data augmentations and nearest neighbors. Second, we construct a set of objectives that encourage the model to decompose medical images into a collection of anatomical features in an unsupervised manner. Lastly, we demonstrate, both empirically and theoretically, the efficacy of MONA on three benchmark datasets, achieving new state-of-the-art results under different labeled semi-supervised settings. MONA makes minimal assumptions about domain expertise, and hence constitutes a practical and versatile solution for medical image analysis. We provide PyTorch-like pseudo-code in the supplementary material. |
first_indexed | 2025-02-19T04:39:40Z |
format | Journal article |
id | oxford-uuid:7bcf39a9-d426-435d-aeb7-0c2ac8fb18c5 |
institution | University of Oxford |
language | English |
last_indexed | 2025-02-19T04:39:40Z |
publishDate | 2024 |
publisher | IEEE |
record_format | dspace |
spelling | oxford-uuid:7bcf39a9-d426-435d-aeb7-0c2ac8fb18c52025-02-18T13:05:11ZMine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labelsJournal articlehttp://purl.org/coar/resource_type/c_dcae04bcuuid:7bcf39a9-d426-435d-aeb7-0c2ac8fb18c5EnglishSymplectic ElementsIEEE2024You, CDai, WLiu, FMin, YDvornek, NCLi, XClifton, DAStaib, LDuncan, JSRecent studies on contrastive learning have achieved remarkable performance by leveraging only a few labels in the context of medical image segmentation. Existing methods mainly focus on instance discrimination and invariant mapping (i.e., pulling positive samples closer and pushing negative samples apart in the feature space). However, they face three common pitfalls: (1) tailness: medical image data usually follows an implicit long-tail class distribution; blindly leveraging all pixels in training can therefore lead to data imbalance issues and cause deteriorated performance; (2) consistency: it remains unclear whether a segmentation model has learned meaningful yet consistent anatomical features, due to the intra-class variations between different anatomical features; and (3) diversity: the intra-slice correlations within the entire dataset have received significantly less attention. This motivates us to seek a principled approach for strategically making use of the dataset itself to discover similar yet distinct samples from different anatomical views. In this paper, we introduce a novel semi-supervised 2D medical image segmentation framework termed Mine yOur owN Anatomy (MONA), and make three contributions. First, prior work argues that every pixel matters equally to model training; we observe empirically that this alone is unlikely to define meaningful anatomical features, mainly due to the lack of a supervision signal. We show two simple solutions towards learning invariances: the use of stronger data augmentations and nearest neighbors. Second, we construct a set of objectives that encourage the model to decompose medical images into a collection of anatomical features in an unsupervised manner. Lastly, we demonstrate, both empirically and theoretically, the efficacy of MONA on three benchmark datasets, achieving new state-of-the-art results under different labeled semi-supervised settings. MONA makes minimal assumptions about domain expertise, and hence constitutes a practical and versatile solution for medical image analysis. We provide PyTorch-like pseudo-code in the supplementary material. |
spellingShingle | You, C Dai, W Liu, F Min, Y Dvornek, NC Li, X Clifton, DA Staib, L Duncan, JS Mine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labels |
title | Mine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labels |
title_full | Mine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labels |
title_fullStr | Mine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labels |
title_full_unstemmed | Mine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labels |
title_short | Mine yOur owN Anatomy: revisiting medical image segmentation with extremely limited labels |
title_sort | mine your own anatomy revisiting medical image segmentation with extremely limited labels |
work_keys_str_mv | AT youc mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT daiw mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT liuf mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT miny mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT dvorneknc mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT lix mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT cliftonda mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT staibl mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels AT duncanjs mineyourownanatomyrevisitingmedicalimagesegmentationwithextremelylimitedlabels |