Towards capturing sonographic experience: cognition-inspired ultrasound video saliency prediction

Full description

For visual tasks like ultrasound (US) scanning, experts direct their gaze towards regions of task-relevant information. Therefore, learning to predict the gaze of sonographers on US videos captures the spatio-temporal patterns that are important for US scanning. The spatial distribution of gaze points on video frames can be represented through heat maps termed saliency maps. Here, we propose a temporally bidirectional model for video saliency prediction (BDS-Net), drawing inspiration from modern theories of human cognition. The model consists of a convolutional neural network (CNN) encoder followed by a bidirectional gated-recurrent-unit recurrent convolutional network (GRU-RCN) decoder. The temporal bidirectionality mimics human cognition, which simultaneously reacts to past and predicts future sensory inputs. We train the BDS-Net alongside spatial and temporally one-directional comparative models on the task of predicting saliency in videos of US abdominal circumference plane detection. The BDS-Net outperforms the comparative models on four out of five saliency metrics. We present a qualitative analysis on representative examples to explain the model’s superior performance.
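The description characterises the BDS-Net architecture only at a high level: a CNN encoder followed by a bidirectional GRU-RCN decoder that produces one saliency map per frame. The sketch below is an illustrative PyTorch reconstruction of that idea, not the authors' implementation; the ConvGRUCell, the layer sizes, and the BidirectionalSaliencyNet name are assumptions made for this example.

```python
# Illustrative sketch (not the published code): CNN encoder followed by a
# bidirectional convolutional-GRU decoder producing per-frame saliency maps.
# All layer sizes and module names are assumptions for the example.
import torch
import torch.nn as nn


class ConvGRUCell(nn.Module):
    """A single convolutional GRU cell operating on 2D feature maps."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        pad = kernel_size // 2
        # Update and reset gates computed jointly from input and hidden state.
        self.gates = nn.Conv2d(2 * channels, 2 * channels, kernel_size, padding=pad)
        # Candidate hidden state.
        self.cand = nn.Conv2d(2 * channels, channels, kernel_size, padding=pad)

    def forward(self, x, h):
        z, r = torch.sigmoid(self.gates(torch.cat([x, h], dim=1))).chunk(2, dim=1)
        h_tilde = torch.tanh(self.cand(torch.cat([x, r * h], dim=1)))
        return (1 - z) * h + z * h_tilde


class BidirectionalSaliencyNet(nn.Module):
    """CNN encoder + bidirectional ConvGRU decoder -> one saliency map per frame."""

    def __init__(self, feat_channels=32):
        super().__init__()
        # Small frame-wise encoder (stand-in for the paper's CNN encoder).
        self.encoder = nn.Sequential(
            nn.Conv2d(1, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(feat_channels, feat_channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.fwd = ConvGRUCell(feat_channels)
        self.bwd = ConvGRUCell(feat_channels)
        # Fuse forward and backward states into a single-channel saliency logit map.
        self.head = nn.Conv2d(2 * feat_channels, 1, 1)

    def forward(self, frames):            # frames: (B, T, 1, H, W)
        B, T, _, _, _ = frames.shape
        feats = [self.encoder(frames[:, t]) for t in range(T)]
        h_f = torch.zeros_like(feats[0])
        h_b = torch.zeros_like(feats[0])
        fwd_states, bwd_states = [], [None] * T
        for t in range(T):                # recurrence forwards in time
            h_f = self.fwd(feats[t], h_f)
            fwd_states.append(h_f)
        for t in reversed(range(T)):      # recurrence backwards in time
            h_b = self.bwd(feats[t], h_b)
            bwd_states[t] = h_b
        # Per-frame saliency logits, normalised to a probability map over pixels.
        maps = []
        for t in range(T):
            logit = self.head(torch.cat([fwd_states[t], bwd_states[t]], dim=1))
            maps.append(torch.softmax(logit.flatten(1), dim=1).view(B, 1, *logit.shape[2:]))
        return torch.stack(maps, dim=1)   # (B, T, 1, H/4, W/4)


# Usage example: an 8-frame clip of 64x64 single-channel US frames.
model = BidirectionalSaliencyNet()
saliency = model(torch.randn(2, 8, 1, 64, 64))
print(saliency.shape)  # torch.Size([2, 8, 1, 16, 16])
```

The backward recurrence is what distinguishes this from a purely causal model: each frame's saliency map is conditioned on both earlier and later frames, mirroring the paper's stated motivation that cognition both reacts to past and anticipates future sensory input.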

Bibliographic Details
Main Authors: Droste, R; Cai, Y; Sharma, H; Chatelain, P; Papageorghiou, A; Noble, J
Format: Conference item
Language: English
Published: Springer Verlag, 2020
author Droste, R
Cai, Y
Sharma, H
Chatelain, P
Papageorghiou, A
Noble, J
collection OXFORD
format Conference item
id oxford-uuid:a14df633-3dc5-4918-ba90-09dda3f51363
institution University of Oxford
language English
publishDate 2020
publisher Springer Verlag
record_format dspace
title Towards capturing sonographic experience: cognition-inspired ultrasound video saliency prediction