Self-supervised multi-task representation learning for sequential medical images
Self-supervised representation learning has achieved promising results for downstream visual tasks in natural images. However, its use in the medical domain, where there is an underlying anatomical structural similarity, remains underexplored. To address this shortcoming, we propose a self-supervised multi-task representation learning framework for sequential 2D medical images, which explicitly aims to exploit the underlying structures via multiple pretext tasks. Unlike the current state-of-the-art methods, which are designed to only pre-train the encoder for instance discrimination tasks, the proposed framework can pre-train the encoder and the decoder at the same time for dense prediction tasks. We evaluate the representations extracted by the proposed framework on two public whole heart segmentation datasets from different domains. The experimental results show that our proposed framework outperforms MoCo V2, a strong representation learning baseline. Given only a small amount of labeled data, the segmentation networks pre-trained by the proposed framework on unlabeled data can achieve better results than their counterparts trained by standard supervised approaches.
Main Authors: | Dong, N; Kampffmeyer, M; Voiculescu, I |
---|---|
Format: | Conference item |
Language: | English |
Published: | Springer, 2021 |
author | Dong, N; Kampffmeyer, M; Voiculescu, I |
collection | OXFORD |
description | Self-supervised representation learning has achieved promising results for downstream visual tasks in natural images. However, its use in the medical domain, where there is an underlying anatomical structural similarity, remains underexplored. To address this shortcoming, we propose a self-supervised multi-task representation learning framework for sequential 2D medical images, which explicitly aims to exploit the underlying structures via multiple pretext tasks. Unlike the current state-of-the-art methods, which are designed to only pre-train the encoder for instance discrimination tasks, the proposed framework can pre-train the encoder and the decoder at the same time for dense prediction tasks. We evaluate the representations extracted by the proposed framework on two public whole heart segmentation datasets from different domains. The experimental results show that our proposed framework outperforms MoCo V2, a strong representation learning baseline. Given only a small amount of labeled data, the segmentation networks pre-trained by the proposed framework on unlabeled data can achieve better results than their counterparts trained by standard supervised approaches. |
format | Conference item |
id | oxford-uuid:533dbb94-b407-4d1a-b6f4-b51ac43f64cb |
institution | University of Oxford |
language | English |
publishDate | 2021 |
publisher | Springer |
record_format | dspace |
title | Self-supervised multi-task representation learning for sequential medical images |
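The abstract above describes jointly pre-training an encoder and a decoder with multiple pretext tasks on sequential 2D slices, so that the decoder is also useful for later dense prediction (segmentation) fine-tuning. The following is a minimal, purely illustrative PyTorch sketch of that general idea; the pretext tasks shown here (image reconstruction and slice-order prediction), the architectures, and all hyperparameters are assumptions for illustration and are not taken from the paper.

```python
# Illustrative sketch only: generic multi-task self-supervised pre-training of an
# encoder AND a decoder with two hypothetical pretext heads. Not the paper's method.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat * 2, 3, stride=2, padding=1), nn.ReLU(),
        )
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, feat=32, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(feat * 2, feat, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(feat, out_ch, 4, stride=2, padding=1),
        )
    def forward(self, z):
        return self.net(z)

encoder, decoder = Encoder(), Decoder()
order_head = nn.Linear(64 * 2, 2)  # hypothetical slice-order classifier

opt = torch.optim.Adam(
    list(encoder.parameters()) + list(decoder.parameters()) + list(order_head.parameters()),
    lr=1e-3,
)
recon_loss, order_loss = nn.MSELoss(), nn.CrossEntropyLoss()

# Dummy batch of sequential 2D slices: (batch, 2 adjacent slices, 1 channel, H, W).
slices = torch.randn(4, 2, 1, 64, 64)
order_label = torch.randint(0, 2, (4,))  # 1 = slices given in correct order

z0 = encoder(slices[:, 0])  # features of the first slice, shape (4, 64, 16, 16)
z1 = encoder(slices[:, 1])  # features of the second slice

# Pretext task 1: reconstruct the first slice through the decoder.
loss_recon = recon_loss(decoder(z0), slices[:, 0])
# Pretext task 2: predict slice order from globally pooled features of both slices.
pooled = torch.cat([z0.mean(dim=(2, 3)), z1.mean(dim=(2, 3))], dim=1)
loss_order = order_loss(order_head(pooled), order_label)

opt.zero_grad()
(loss_recon + loss_order).backward()  # both encoder and decoder receive gradients
opt.step()
```

The only point of the sketch is that both the encoder and the decoder receive gradients from the pretext losses, which is the property the abstract highlights relative to encoder-only instance-discrimination pre-training such as MoCo V2.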