Multi-modal learning from video, eye tracking, and pupillometry for operator skill characterization in clinical fetal ultrasound
This paper presents a novel multi-modal learning approach for automated skill characterization of obstetric ultrasound operators using heterogeneous spatio-temporal sensory cues, namely, scan video, eye-tracking data, and pupillometric data, acquired in the clinical environment. We address pertinent...
| Main authors: | Sharma, H; Drukker, L; Papageorghiou, AT; Noble, JA |
|---|---|
| Format: | Conference item |
| Language: | English |
| Published: | IEEE, 2021 |
Similar documents
- Skill, or style? Classification of fetal sonography eye-tracking data
  by: Teng, C, et al.
  Published: (2022)
- Multimodal continual learning with sonographer eye-tracking in fetal ultrasound
  by: Patra, A, et al.
  Published: (2021)
- Differentiating operator skill during routine fetal ultrasound scanning using probe motion tracking
  by: Wang, Y, et al.
  Published: (2020)
- Gaze-assisted automatic captioning of fetal ultrasound videos using three-way multi-modal deep neural networks
  by: Alsharid, M, et al.
  Published: (2022)
- Transforming obstetric ultrasound into data science using eye tracking, voice recording, transducer motion and ultrasound video
  by: Drukker, L, et al.
  Published: (2021)