Pixel-level multimodal fusion deep networks for predicting subcellular organelle localization from label-free live-cell imaging

Complex intracellular organization is commonly represented by dividing the metabolic processes of a cell among its different organelles. Identifying sub-cellular organelle architecture is therefore important for understanding intracellular structural properties, specific functions, and biological processes in cells. However, how these structures are discriminated in their natural organizational environment, and what their functional consequences are, remains unclear. In this article, we propose a new pixel-level multimodal fusion (PLMF) deep network that predicts the location of cellular organelles from label-free cell optical microscopy images, followed by deep-learning-based automated image denoising. The approach helps improve the specificity of label-free cell optical microscopy by using a Transformer–Unet network to predict the ground-truth images that correspond to different sub-cellular organelle architectures. The prediction method combines the transformer's global modeling with the CNN's local detail analysis of background features in label-free cell optical microscopy images, improving prediction accuracy. Our experimental results show that the PLMF network achieves a Pearson's correlation coefficient (PCC) above 0.91 between estimated and true fractions on lung cancer cell imaging datasets. In addition, we applied the PLMF network to predict several different subcellular components simultaneously from label-free cell images, rather than using several fluorescent labels. These results open up a new way for the time-resolved study of subcellular components in different cells, especially cancer cells.
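As a rough illustration of the pixel-level fusion idea the abstract describes (a transformer branch for global context combined with a CNN branch for local detail, fused per pixel to map a label-free image to a fluorescence-like prediction), a minimal PyTorch sketch might look like the following. All module names, feature sizes, and the patch-embedding scheme are assumptions made for illustration; this is not the authors' published PLMF implementation.

```python
# Minimal sketch of pixel-level fusion of a CNN branch (local detail) with a
# transformer branch (global context) for label-free-to-fluorescence prediction.
# Illustrative only; sizes, names, and structure are assumptions, not the PLMF code.
import torch
import torch.nn as nn


class CNNBranch(nn.Module):
    """Small convolutional encoder that keeps full spatial resolution."""
    def __init__(self, in_ch=1, feat=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, feat, 3, padding=1), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)


class TransformerBranch(nn.Module):
    """Patch-based transformer encoder providing global context (positional
    encodings omitted for brevity)."""
    def __init__(self, in_ch=1, feat=32, patch=8, depth=2, heads=4):
        super().__init__()
        self.patch = patch
        self.embed = nn.Conv2d(in_ch, feat, patch, stride=patch)  # patch embedding
        layer = nn.TransformerEncoderLayer(d_model=feat, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens = self.embed(x)                      # (B, feat, H/p, W/p)
        b, c, h, w = tokens.shape
        seq = tokens.flatten(2).transpose(1, 2)     # (B, h*w, feat)
        seq = self.encoder(seq)
        tokens = seq.transpose(1, 2).reshape(b, c, h, w)
        # Upsample coarse global features back to pixel resolution for fusion.
        return nn.functional.interpolate(
            tokens, scale_factor=self.patch, mode="bilinear", align_corners=False
        )


class PixelFusionNet(nn.Module):
    """Concatenate per-pixel CNN and transformer features, then predict one channel."""
    def __init__(self, in_ch=1, feat=32, out_ch=1):
        super().__init__()
        self.cnn = CNNBranch(in_ch, feat)
        self.vit = TransformerBranch(in_ch, feat)
        self.head = nn.Conv2d(2 * feat, out_ch, 1)  # 1x1 conv fuses the two modalities

    def forward(self, x):
        fused = torch.cat([self.cnn(x), self.vit(x)], dim=1)
        return self.head(fused)


if __name__ == "__main__":
    model = PixelFusionNet()
    x = torch.randn(2, 1, 128, 128)   # batch of label-free images
    print(model(x).shape)             # torch.Size([2, 1, 128, 128])
```

Concatenating the two feature maps channel-wise and applying a 1x1 convolution is one simple way to fuse modalities at the pixel level; the actual PLMF network may fuse features differently or at multiple scales.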
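The abstract's headline number is a Pearson's correlation coefficient (PCC) above 0.91 between estimated and true fractions. A hedged sketch of how a pixel-wise PCC can be computed between a predicted image and its fluorescence ground truth follows; the exact evaluation protocol is described in the paper.

```python
# Pearson's correlation coefficient between a predicted image and its ground
# truth, computed over all pixels. Illustrative sketch, not the paper's code.
import numpy as np


def pearson_cc(pred: np.ndarray, target: np.ndarray) -> float:
    p = pred.ravel().astype(np.float64)
    t = target.ravel().astype(np.float64)
    p -= p.mean()
    t -= t.mean()
    denom = np.linalg.norm(p) * np.linalg.norm(t) + 1e-12  # guard against flat images
    return float(p @ t / denom)


# Example: a perfectly linear relationship gives PCC ~ 1.0.
rng = np.random.default_rng(0)
truth = rng.random((256, 256))
print(pearson_cc(2.0 * truth + 3.0, truth))
```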

Bibliographic Details
Main Authors: Zhihao Wei, Xi Liu, Ruiqing Yan, Guocheng Sun, Weiyong Yu, Qiang Liu, Qianjin Guo
Format: Article
Language: English
Published: Frontiers Media S.A., 2022-10-01
Series: Frontiers in Genetics
Subjects: label-free live cell imaging; protein subcellular localization; non-linear optical microscopy; Transformer–Unet network; deep learning
Online Access: https://www.frontiersin.org/articles/10.3389/fgene.2022.1002327/full
ISSN: 1664-8021
Author affiliations: Academy of Artificial Intelligence, Beijing Institute of Petrochemical Technology, Beijing, China (Zhihao Wei, Xi Liu, Ruiqing Yan, Guocheng Sun, Weiyong Yu, Qiang Liu, Qianjin Guo); School of Mechanical Engineering & Hydrogen Energy Research Centre, Beijing Institute of Petrochemical Technology, Beijing, China (Guocheng Sun, Qianjin Guo)
DOI: 10.3389/fgene.2022.1002327