Automated description and workflow analysis of fetal echocardiography in first-trimester ultrasound video scans

Full Description

Bibliographic Details
Main Authors: Yasrab, R, Alsharid, M, Sarker, MD, Zhao, H, Papageorghiou, A, Noble, J
Format: Conference item
Language: English
Published: IEEE 2023
Other Bibliographic Details
Summary: This paper presents a novel, fully-automatic framework for fetal echocardiography analysis of full-length routine first-trimester fetal ultrasound scan video. In this study, a new deep learning architecture, which considers spatio-temporal information and spatial attention, is designed to temporally partition ultrasound video into semantically meaningful segments. The resulting automated semantic annotation is used to analyse cardiac examination workflow. The proposed 2D+t convolutional neural network architecture achieves an A1 accuracy of 96.37%, an F1 of 95.61%, and a precision of 96.18% with 21.49% fewer parameters than the smallest ResNet-based architecture. Automated deep-learning-based semantic annotation of unlabelled video scans (n=250) shows a high correlation with expert cardiac annotations (ρ = 0.96, p = 0.0004), thereby demonstrating the applicability of the proposed annotation model for echocardiography workflow analysis.
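Illustrative note: the record above does not include implementation details, so the layer choices, channel sizes, and attention mechanism in the following minimal PyTorch sketch are assumptions rather than the authors' published code. It only shows, in outline, what a factorised "2D+t" convolutional block (per-frame 2D convolution followed by a 1D temporal convolution) combined with a simple spatial-attention gate and per-frame classification could look like.

# Minimal sketch only; all names, shapes, and hyperparameters are hypothetical.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Simple spatial attention: a 1x1 convolution produces a per-pixel gate."""
    def __init__(self, channels: int):
        super().__init__()
        self.gate = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, H, W); the attention map is broadcast over channels
        return x * torch.sigmoid(self.gate(x))


class Conv2DPlusT(nn.Module):
    """A '2D+t' block: spatial (1x3x3) convolution followed by a temporal
    (kx1x1) convolution, a factorised alternative to full 3D convolution."""
    def __init__(self, in_ch: int, out_ch: int, t_kernel: int = 3):
        super().__init__()
        self.spatial = nn.Conv3d(in_ch, out_ch, kernel_size=(1, 3, 3), padding=(0, 1, 1))
        self.temporal = nn.Conv3d(out_ch, out_ch, kernel_size=(t_kernel, 1, 1),
                                  padding=(t_kernel // 2, 0, 0))
        self.norm = nn.BatchNorm3d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, channels, time, H, W)
        x = self.act(self.spatial(x))
        return self.act(self.norm(self.temporal(x)))


class FramePartitioner(nn.Module):
    """Toy model assigning each frame of a clip to one of n semantic segments."""
    def __init__(self, n_classes: int = 4):
        super().__init__()
        self.backbone = nn.Sequential(Conv2DPlusT(1, 16), Conv2DPlusT(16, 32))
        self.attention = SpatialAttention(32)
        self.head = nn.Linear(32, n_classes)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, 1, time, H, W) greyscale ultrasound frames
        feats = self.backbone(clip)                        # (B, 32, T, H, W)
        b, c, t, h, w = feats.shape
        frames = feats.permute(0, 2, 1, 3, 4).reshape(b * t, c, h, w)
        frames = self.attention(frames)
        pooled = frames.mean(dim=(2, 3))                   # global average pool per frame
        return self.head(pooled).view(b, t, -1)            # per-frame class scores


if __name__ == "__main__":
    model = FramePartitioner(n_classes=4)
    dummy = torch.randn(2, 1, 8, 64, 64)   # 2 clips, 8 frames of 64x64 pixels
    print(model(dummy).shape)               # torch.Size([2, 8, 4])

The per-frame class scores would then be decoded into contiguous temporal segments for workflow analysis; agreement between such automated annotations and expert annotations, as reported above (ρ = 0.96), is the kind of statistic that can be computed with a rank correlation such as scipy.stats.spearmanr.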