Learning to Segment Unseen Tasks In-Context

Bibliographic Details
Main Author: Butoi, Victor Ion
Other Authors: Guttag, John V.
Format: Thesis
Published: Massachusetts Institute of Technology, 2024
Online Access: https://hdl.handle.net/1721.1/156117
Description
Summary: While deep learning models have become the predominant method for medical image segmentation, they are typically incapable of generalizing to new segmentation tasks, such as those involving new anatomies, image modalities, or labels. For a new segmentation task, researchers often have to prepare new task-specific models. This process is time-consuming and poses a substantial barrier for clinical researchers, who often lack the resources and expertise to train neural networks. We present UniverSeg, an in-context learning method for solving unseen medical segmentation tasks. Given a new image to segment, and a set of image-label pairs that define the task, UniverSeg can produce accurate segmentation predictions with no additional training. We demonstrate that UniverSeg substantially outperforms existing methods in solving unseen segmentation tasks, and we thoroughly analyze important aspects of our proposed data, training, and inference paradigms.
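The in-context inference interface described above can be sketched as follows. This is a minimal illustration of the calling convention only, not UniverSeg itself: the trained network is replaced by a toy stand-in that weights each support label by its image's intensity similarity to the query, and all function names, shapes, and parameters here are assumptions made for the example.

```python
import numpy as np

def in_context_segment(query, support_images, support_labels, threshold=0.5):
    """Illustrative stand-in for an in-context segmenter.

    query:          (H, W) image to segment
    support_images: (S, H, W) example images defining the task
    support_labels: (S, H, W) binary masks for the support images

    A real model such as UniverSeg would run a trained network over the
    query and support set; here we merely average the support labels,
    weighted by each support image's similarity to the query.
    """
    # Similarity weight per support example (closer intensity -> larger weight).
    diffs = np.abs(support_images - query[None]).mean(axis=(1, 2))   # (S,)
    weights = np.exp(-diffs)
    weights /= weights.sum()
    # Weighted average of support masks gives a per-pixel "probability".
    prob = np.tensordot(weights, support_labels.astype(float), axes=1)  # (H, W)
    return (prob > threshold).astype(np.uint8)

# Tiny synthetic task: segment a bright square, defined purely by examples.
H = W = 8
support_images = np.zeros((3, H, W))
support_labels = np.zeros((3, H, W), dtype=np.uint8)
for i, (r, c) in enumerate([(1, 1), (2, 3), (4, 4)]):
    support_images[i, r:r + 3, c:c + 3] = 1.0
    support_labels[i, r:r + 3, c:c + 3] = 1
query = np.zeros((H, W))
query[3:6, 2:5] = 1.0

pred = in_context_segment(query, support_images, support_labels)
```

The point of the sketch is the signature: no gradient step or retraining occurs at inference time; the task is specified entirely by the support set passed alongside the query.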