Fast MRI reconstruction using StrainNet with dual-domain loss on spatial and frequency spaces

Bibliographic Details
Main Authors: Worapan Kusakunniran, Sarattha Karnjanapreechakorn, Thanongchai Siriapisith, Pairash Saiviroonporn
Format: Article
Language: English
Published: Elsevier 2023-05-01
Series: Intelligent Systems with Applications
Online Access: http://www.sciencedirect.com/science/article/pii/S2667305323000285
Description
Summary: One of the main challenges in achieving high throughput in MRI is the slow signal acquisition. Acquisition can be accelerated with parallel imaging, in which less raw data is acquired simultaneously with multiple radio frequency (RF) coils and then combined to reconstruct the final MR image. Nowadays, all multi-coil MRI machines provide parallel imaging for image reconstruction; however, parallel imaging alone cannot accelerate acquisition enough to substantially reduce the overall scan time. Instead, this paper proposes a solution relying on a deep convolutional neural network (CNN) to generate high-quality reconstructed MR images at higher acceleration factors. The proposed method, called StrainNet, performs the reconstruction by encoding the under-sampled data (i.e., acquired for the speed-up) into high-level features. Then the key component of the network, called the Strainer, discards irrelevant information and decodes the remaining features to reconstruct MR images. The network can be trained end-to-end with a newly presented loss function, the Dual-Domain Loss (DDL), which combines spatial and frequency losses. Experimental results on the fastMRI dataset show that StrainNet outperforms competing methods at both 4- and 8-fold accelerations.
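As a rough illustration of the dual-domain idea, the sketch below combines an image-space (spatial) term with a k-space (frequency) term into a single training loss. The use of L1 in both domains, the weighting factor alpha, and the function name dual_domain_loss are assumptions for illustration only; the abstract does not specify the exact formulation of DDL.

import torch
import torch.nn.functional as F

def dual_domain_loss(pred_image, target_image, alpha=0.5):
    # Spatial-domain term: pixel-wise L1 between the reconstruction and the reference image.
    spatial = F.l1_loss(pred_image, target_image)
    # Frequency-domain term: L1 distance between the 2D Fourier transforms (k-space) of both images.
    pred_k = torch.fft.fft2(pred_image)
    target_k = torch.fft.fft2(target_image)
    frequency = torch.mean(torch.abs(pred_k - target_k))
    # Weighted sum of the two terms (alpha is an assumed hyperparameter).
    return alpha * spatial + (1.0 - alpha) * frequency

Penalising errors in k-space as well as in image space discourages the network from blurring high-frequency detail that a purely spatial loss tends to under-weight.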
ISSN:2667-3053