Conditional segmentation in lieu of image registration
Classical pairwise image registration methods search for a spatial transformation that optimises a numerical measure that indicates how well a pair of moving and fixed images are aligned. Current learning-based registration methods have adopted the same paradigm and typically predict, for any new input image pair, dense correspondences in the form of a dense displacement field or parameters of a spatial transformation model.
Main Authors: | Hu, Y; Gibson, E; Barratt, DC; Emberton, M; Noble, JA; Vercauteren, T
---|---
Format: | Conference item
Language: | English
Published: | Springer, 2019
author | Hu, Y; Gibson, E; Barratt, DC; Emberton, M; Noble, JA; Vercauteren, T |
collection | OXFORD |
description | Classical pairwise image registration methods search for a spatial transformation that optimises a numerical measure that indicates how well a pair of moving and fixed images are aligned. Current learning-based registration methods have adopted the same paradigm and typically predict, for any new input image pair, dense correspondences in the form of a dense displacement field or parameters of a spatial transformation model. However, in many applications of registration, the spatial transformation itself is only required to propagate points or regions of interest (ROIs). In such cases, detailed pixel- or voxel-level correspondence within or outside of these ROIs often has little clinical value. In this paper, we propose an alternative paradigm in which the location of corresponding image-specific ROIs, defined in one image, within another image is learnt. This results in replacing image registration with a conditional segmentation algorithm, which can build on typical image segmentation networks and their widely-adopted training strategies. Using the registration of 3D MRI and ultrasound images of the prostate as an example to demonstrate this new approach, we report a median target registration error (TRE) of 2.1 mm between the ground-truth ROIs defined on intraoperative ultrasound images and those propagated from the preoperative MR images. Significantly lower (>34%) TREs were obtained using the proposed conditional segmentation compared with those obtained from a previously-proposed spatial-transformation-predicting registration network trained with the same multiple ROI labels for individual image pairs. We conclude this work by using a quantitative bias-variance analysis to provide one explanation of the observed improvement in registration accuracy. |
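The paradigm the abstract describes can be sketched in a few lines: instead of predicting a displacement field, a segmentation-style network takes the moving image, the fixed image, and an ROI mask defined on the moving image as input channels, and directly outputs that ROI in fixed-image space, supervised with an overlap loss. The sketch below is a hypothetical toy interface, not the authors' actual 3D network; `conditional_segmentation`, `identity_net`, and the Dice function are illustrative names, and a trivial pass-through predictor stands in for a trained model.

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Soft Dice overlap between two binary or probabilistic masks."""
    intersection = np.sum(pred * target)
    return (2.0 * intersection + eps) / (np.sum(pred) + np.sum(target) + eps)

def conditional_segmentation(moving_img, fixed_img, moving_roi, predict_fn):
    """Predict the location of `moving_roi` within `fixed_img`.

    Unlike a registration network, no displacement field or transformation
    parameters are produced: the three inputs are stacked as channels and
    `predict_fn` (a stand-in for a trained segmentation network) directly
    outputs the ROI mask in fixed-image space.
    """
    x = np.stack([moving_img, fixed_img, moving_roi], axis=0)
    return predict_fn(x)

def identity_net(x):
    # Toy predictor that passes the ROI channel through unchanged,
    # standing in for a trained 3D segmentation network.
    return x[2]

mr = np.random.rand(8, 8, 8)       # preoperative MR volume (moving)
us = np.random.rand(8, 8, 8)       # intraoperative ultrasound volume (fixed)
roi = (mr > 0.5).astype(float)     # an ROI defined on the MR image
pred = conditional_segmentation(mr, us, roi, identity_net)
print(dice_score(pred, roi))       # identical masks give a Dice of 1.0
```

In training, `1 - dice_score` between the predicted and ground-truth fixed-image ROIs would serve as the loss, exactly as in ordinary segmentation, which is what lets this formulation reuse standard segmentation architectures and training strategies.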
id | oxford-uuid:2ec1b884-a5e5-4b19-833e-3e47dfd603aa |
institution | University of Oxford |
title | Conditional segmentation in lieu of image registration |