Unpaired-Paired Learning for Shading Correction in Cone-Beam Computed Tomography
Cone-beam computed tomography (CBCT) is widely used in dental and maxillofacial imaging applications. However, CBCT suffers from shading artifacts owing to several factors, including photon scattering and data truncation. This paper presents a deep-learning-based method for eliminating the shading artifacts...
Main Authors: | Hyoung Suk Park, Kiwan Jeon, Sang-Hwy Lee, Jin Keun Seo
---|---
Format: | Article
Language: | English
Published: | IEEE, 2022-01-01
Series: | IEEE Access
Subjects: | Computed tomography; shading correction; unpaired learning; generative adversarial network
Online Access: | https://ieeexplore.ieee.org/document/9722839/
author | Hyoung Suk Park; Kiwan Jeon; Sang-Hwy Lee; Jin Keun Seo
collection | DOAJ |
description | Cone-beam computed tomography (CBCT) is widely used in dental and maxillofacial imaging applications. However, CBCT suffers from shading artifacts owing to several factors, including photon scattering and data truncation. This paper presents a deep-learning-based method for eliminating the shading artifacts that interfere with diagnosis and treatment planning. The proposed method is a two-stage generative adversarial network (GAN)-based image-to-image translation that operates on unpaired CBCT and multidetector computed tomography (MDCT) images. The first stage uses a generic GAN together with a fidelity term measuring the difference between the original CBCT image and the MDCT-like image generated by the network. Although this approach is generally effective for denoising, it occasionally introduces additional artifacts that appear as bone-like structures in the output images, because the weak input fidelity between the two imaging modalities makes it difficult to preserve morphological structures in the presence of complex shading artifacts. The second stage of the proposed model addresses this problem: paired training data are collected from the first-stage results, inappropriate pairs are excluded, and the fidelity-embedded GAN is retrained on the selected paired samples. The results of this study show that the proposed approach substantially reduces both the shading artifacts and the secondary artifacts arising from incorrect data fidelity, while preserving the morphological structures of the original CBCT image. In addition, the corrected image obtained using the proposed method enables more accurate bone segmentation than either the original CBCT image or the image corrected using the unpaired method.
format | Article |
id | doaj.art-1401c0406f734e39af1c7cb28f25a00e |
institution | Directory Open Access Journal |
issn | 2169-3536 |
language | English |
publishDate | 2022-01-01 |
publisher | IEEE |
record_format | Article |
series | IEEE Access |
doi | 10.1109/ACCESS.2022.3155203
volume | 10
pages | 26140-26148
author_details | Hyoung Suk Park (https://orcid.org/0000-0003-0032-4630), National Institute for Mathematical Sciences, Daejeon, South Korea; Kiwan Jeon (https://orcid.org/0000-0002-2460-7478), National Institute for Mathematical Sciences, Daejeon, South Korea; Sang-Hwy Lee, Department of Oral and Maxillofacial Surgery, College of Dentistry, Oral Science Research Center, Yonsei University, Seoul, South Korea; Jin Keun Seo (https://orcid.org/0000-0002-6275-4938), School of Mathematics and Computing (Computational Science and Engineering), Yonsei University, Seoul, South Korea
title | Unpaired-Paired Learning for Shading Correction in Cone-Beam Computed Tomography |
topic | Computed tomography; shading correction; unpaired learning; generative adversarial network
url | https://ieeexplore.ieee.org/document/9722839/ |
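The abstract above outlines a two-stage "unpaired-paired" training scheme: an unpaired, fidelity-embedded GAN first maps CBCT to MDCT-like images; suitable input/output pairs are then selected from the first-stage results; and the network is retrained on those pairs. The PyTorch sketch below illustrates that flow only. The paper's actual network architectures, loss weights, and pair-exclusion criterion are not given in this record, so `SmallGenerator`, `SmallDiscriminator`, `fid_w`, the least-squares GAN loss, and the correlation threshold in `select_pairs` are all illustrative assumptions.

```python
# Minimal sketch of the two-stage unpaired-paired scheme described in the
# abstract. All architectures, weights, and the pair-selection rule are
# placeholder assumptions, not the authors' published implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallGenerator(nn.Module):
    """Toy residual CNN standing in for the paper's (unspecified) generator."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, x):
        return x + self.net(x)  # predict a shading-correction residual


class SmallDiscriminator(nn.Module):
    """Toy patch discriminator; the real architecture is not in this record."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv2d(64, 1, 4, padding=1),
        )

    def forward(self, x):
        return self.net(x)


def train(gen, disc, inputs, refs, paired, fid_w=10.0, steps=50, lr=2e-4):
    """One training stage. Stage 1 (paired=False): refs are unpaired MDCT
    slices and the fidelity term keeps the output close to the *input* CBCT.
    Stage 2 (paired=True): refs are selected stage-1 outputs and the fidelity
    term becomes a paired L1 loss against them."""
    g_opt = torch.optim.Adam(gen.parameters(), lr=lr)
    d_opt = torch.optim.Adam(disc.parameters(), lr=lr)
    for step in range(steps):
        x, ref = inputs[step % len(inputs)], refs[step % len(refs)]
        fake = gen(x)

        # Discriminator update (least-squares GAN loss): MDCT(-like) = real.
        d_real, d_fake = disc(ref), disc(fake.detach())
        d_loss = (F.mse_loss(d_real, torch.ones_like(d_real)) +
                  F.mse_loss(d_fake, torch.zeros_like(d_fake)))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator update: fool the discriminator + fidelity term.
        g_fake = disc(fake)
        adv = F.mse_loss(g_fake, torch.ones_like(g_fake))
        fid = F.l1_loss(fake, ref if paired else x)
        g_loss = adv + fid_w * fid
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()


def select_pairs(gen, inputs, min_corr=0.9):
    """Build pseudo-pairs (CBCT, stage-1 output), excluding "inappropriate"
    ones. The paper's actual exclusion rule is not stated in this record; a
    Pearson-correlation threshold is used here purely as a placeholder."""
    pairs = []
    gen.eval()
    with torch.no_grad():
        for x in inputs:
            y = gen(x)
            corr = torch.corrcoef(torch.stack([x.flatten(), y.flatten()]))[0, 1]
            if corr > min_corr:
                pairs.append((x, y))
    gen.train()
    return pairs


if __name__ == "__main__":
    torch.manual_seed(0)
    # Random tensors stand in for real CBCT / MDCT slices.
    cbct = [torch.rand(1, 1, 64, 64) for _ in range(8)]
    mdct = [torch.rand(1, 1, 64, 64) for _ in range(8)]

    gen1, disc1 = SmallGenerator(), SmallDiscriminator()
    train(gen1, disc1, cbct, mdct, paired=False)   # stage 1: unpaired
    pairs = select_pairs(gen1, cbct)               # drop inappropriate pairs
    if pairs:
        xs, ys = (list(t) for t in zip(*pairs))
        gen2, disc2 = SmallGenerator(), SmallDiscriminator()
        train(gen2, disc2, xs, ys, paired=True)    # stage 2: paired retrain
```

The design point the sketch tries to capture is that stage 2 reuses the filtered stage-1 outputs as paired targets, so the retrained network is penalized directly for the bone-like secondary artifacts that the purely unpaired fidelity term could not rule out.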