Recognizing Brain Regions in 2D Images from Brain Tissue


Bibliographic Details
Main Author: Lohawala, Sabeen Imtiyaz
Other Authors: Ghosh, Satrajit
Format: Thesis
Published: Massachusetts Institute of Technology 2024
Online Access: https://hdl.handle.net/1721.1/156817
Description
Summary: Often, the first step in neuroimaging research is understanding which anatomical structures are present in an image. Structural MRI (sMRI) provides a clear, high-resolution visualization of the anatomy of the brain, capturing physical characteristics like the size and shape of different regions or the presence of abnormalities such as tumors. Whereas sMRI scans are typically acquired in vivo, the neuropathology of many neurodegenerative disorders, like Alzheimer’s disease, requires post-mortem analysis of the brain through techniques like brain dissection, necessitating the use of other imaging modalities. Various tools and deep learning models have been developed to automatically identify anatomical structures in 3D MRI volumes. However, the only existing method for segmenting anatomical structures in 2D brain slices, whether 2D slices extracted from an MRI or photographs of slices from a physically dissected brain, is manual labeling by a trained neuroanatomist, which is costly, resource-intensive, and time-consuming. In this project, we develop a new deep learning method to automatically segment 50 different regions in 2D photographs of the brain. Because no supervised dataset pairing photographs with segmentation maps exists, we train the state-of-the-art SegFormer model on a supervised dataset of 2D MRI slices. We employ multiple data augmentation techniques to increase the variability of the training data so that it more closely resembles the variability seen in brain photographs, making the model robust enough to segment the anatomical regions in brain photographs. The SegFormer model achieved test Dice scores between 0.6 and 0.75 on the segmentation of 50 different anatomical regions in 2D MRI slices, depending on which augmentations were incorporated during training.
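The Dice scores reported above follow the standard overlap definition, Dice = 2|A ∩ B| / (|A| + |B|), computed per anatomical label. A minimal sketch of a per-label Dice computation for integer label maps (function name and example arrays are illustrative, not from the thesis):

```python
import numpy as np

def dice_score(pred, target, label):
    """Dice coefficient for a single label in two integer label maps.

    Dice = 2 * |pred ∩ target| / (|pred| + |target|), restricted to `label`.
    Returns 1.0 when the label is absent from both maps.
    """
    p = (pred == label)
    t = (target == label)
    denom = p.sum() + t.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(p, t).sum() / denom

# Toy 2x2 label maps: label 1 appears twice in pred, once in target,
# with one overlapping pixel -> Dice = 2*1 / (2+1) = 2/3
pred = np.array([[0, 1], [1, 2]])
target = np.array([[0, 1], [2, 2]])
print(dice_score(pred, target, 1))  # -> 0.666...
```

Averaging this score over all 50 labels and all test slices gives a single summary number comparable to the 0.6 to 0.75 range quoted above.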
Additionally, the project demonstrated that incorporating complex augmentations, both those that forced the model to learn the segmentation task with reduced contextual information and those that decoupled the tissue and background by manipulating them independently, helped improve the robustness of the model, allowing it to better segment 2D photographs of the brain. Although there is much room for improvement, this project provides a set of techniques that can be extended to further improve the model’s robustness so that it can be applied to other imaging modalities in the future.
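To illustrate the idea of decoupling tissue from background, one such augmentation might transform the two regions independently given a tissue mask. This is a hedged sketch under assumed names and transforms (intensity jitter on tissue, noise-filled background), not the thesis's exact pipeline:

```python
import numpy as np

def decouple_tissue_background(image, tissue_mask, rng):
    """Augment tissue and background pixels independently (illustrative).

    `image` is a float array in [0, 1]; `tissue_mask` is a boolean array
    marking tissue pixels. The specific transforms are assumptions.
    """
    out = image.astype(np.float32).copy()
    # Jitter tissue intensities with a random gain and bias so the model
    # cannot rely on absolute intensity values
    gain = rng.uniform(0.8, 1.2)
    bias = rng.uniform(-0.05, 0.05)
    out[tissue_mask] = out[tissue_mask] * gain + bias
    # Replace the background with random noise so a uniform background
    # carries no contextual signal
    out[~tissue_mask] = rng.uniform(0.0, 1.0, size=int((~tissue_mask).sum()))
    return np.clip(out, 0.0, 1.0)

# Usage sketch on a toy 4x4 image whose top half is "tissue"
rng = np.random.default_rng(0)
img = np.full((4, 4), 0.5, dtype=np.float32)
mask = np.zeros((4, 4), dtype=bool)
mask[:2] = True
aug = decouple_tissue_background(img, mask, rng)
```

Because the two regions are perturbed independently, the model cannot use the appearance of one to predict the other, which is the property the abstract credits with improving transfer to brain photographs.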