Self-supervised contrastive video-speech representation learning for ultrasound
In medical imaging, manual annotations can be expensive to acquire and sometimes infeasible to access, making conventional deep learning-based models difficult to scale. As a result, it would be beneficial if useful representations could be derived from raw data without the need for manual annotations.
Main Authors: | Jiao, J; Cai, Y; Alsharid, M; Drukker, L; Papageorghiou, AT; Noble, JA |
---|---|
Format: | Conference item |
Language: | English |
Published: | Springer, 2020 |
---|---|
author | Jiao, J; Cai, Y; Alsharid, M; Drukker, L; Papageorghiou, AT; Noble, JA |
collection | OXFORD |
description | In medical imaging, manual annotations can be expensive to acquire and sometimes infeasible to access, making conventional deep learning-based models difficult to scale. As a result, it would be beneficial if useful representations could be derived from raw data without the need for manual annotations. In this paper, we propose to address the problem of self-supervised representation learning with multi-modal ultrasound video-speech raw data. For this case, we assume that there is a high correlation between the ultrasound video and the corresponding narrative speech audio of the sonographer. In order to learn meaningful representations, the model needs to identify such correlation and at the same time understand the underlying anatomical features. We designed a framework to model the correspondence between video and audio without any kind of human annotations. Within this framework, we introduce cross-modal contrastive learning and an affinity-aware self-paced learning scheme to enhance correlation modelling. Experimental evaluations on multi-modal fetal ultrasound video and audio show that the proposed approach is able to learn strong representations and transfers well to downstream tasks of standard plane detection and eye-gaze prediction. |
format | Conference item |
id | oxford-uuid:8e9d618f-49fb-4441-a980-1c77cc1a82cf |
institution | University of Oxford |
language | English |
publishDate | 2020 |
publisher | Springer |
record_format | dspace |
title | Self-supervised contrastive video-speech representation learning for ultrasound |
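The description mentions two method components: cross-modal contrastive learning between ultrasound video and the sonographer's narrative speech, and an affinity-aware self-paced learning scheme. As a rough illustration of the first component only, the sketch below shows a symmetric InfoNCE-style contrastive loss over paired video/speech embeddings. This is a minimal, hypothetical sketch, not the authors' implementation: the function name, the temperature value, and the in-batch-negatives setup are assumptions for illustration.

```python
# Minimal sketch (PyTorch) of a symmetric cross-modal contrastive (InfoNCE-style) loss
# between video and speech embeddings. Names and hyperparameters are illustrative only.
import torch
import torch.nn.functional as F

def cross_modal_nce(video_emb: torch.Tensor,
                    audio_emb: torch.Tensor,
                    temperature: float = 0.07) -> torch.Tensor:
    """video_emb, audio_emb: (batch, dim) embeddings of temporally aligned clips.
    Aligned pairs (same batch index) are positives; all other pairings are negatives."""
    v = F.normalize(video_emb, dim=-1)
    a = F.normalize(audio_emb, dim=-1)
    logits = v @ a.t() / temperature                    # (batch, batch) similarity matrix
    targets = torch.arange(v.size(0), device=v.device)  # matching indices are positives
    # Symmetric objective: video-to-audio and audio-to-video retrieval directions
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))

# Usage with random stand-in features (e.g. pooled video-CNN and speech-encoder outputs):
video_emb = torch.randn(8, 256)
audio_emb = torch.randn(8, 256)
loss = cross_modal_nce(video_emb, audio_emb)
```

The affinity-aware self-paced scheme mentioned in the description, which the paper uses to enhance correlation modelling, is not reproduced in this sketch.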