Cervical Transformation Zone Segmentation and Classification based on Improved Inception-ResNet-V2 Using Colposcopy Images

Cervical cancer is the second most frequent malignancy in women worldwide. In the transformation (transitional) zone, a region of the cervix, columnar cells continuously convert into squamous cells; this zone of transforming cells is the most typical site on the cervix for the development of aberrant ce...

Full description

Bibliographic Details
Main Authors: Srikanta Dash, Prabira Kumar Sethy, Santi Kumari Behera
Format: Article
Language: English
Published: SAGE Publishing 2023-03-01
Series: Cancer Informatics
Online Access: https://doi.org/10.1177/11769351231161477
author Srikanta Dash
Prabira Kumar Sethy
Santi Kumari Behera
collection DOAJ
description Cervical cancer is the second most frequent malignancy in women worldwide. In the transformation (transitional) zone, a region of the cervix, columnar cells continuously convert into squamous cells; this zone of transforming cells is the most typical site on the cervix for the development of aberrant cells. This article proposes a 2-phase method that segments and then classifies the transformation zone to identify the type of cervical cancer. In the first phase, the transformation zone is segmented from the colposcopy images. The segmented images are then augmented and classified with the improved Inception-ResNet-V2. Here, a multi-scale feature fusion framework that utilizes the 3 × 3 convolution kernels from Reduction-A and Reduction-B of Inception-ResNet-V2 is introduced. The features extracted from Reduction-A and Reduction-B are concatenated and fed to an SVM for classification. In this way, the model combines the benefits of residual networks and Inception convolution, increasing network width and resolving the deep network's training issue. The multi-scale feature fusion lets the network extract contextual information at several scales, which increases accuracy. The experimental results reveal 81.24% accuracy, 81.24% sensitivity, 90.62% specificity, 87.52% precision, 9.38% FPR, 81.68% F1 score, 75.27% MCC, and 57.79% Kappa coefficient.
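A minimal sketch of the fusion step described above, using Keras and scikit-learn, is given below. It is an illustration under stated assumptions, not the authors' code: the layer names 'mixed_6a' and 'mixed_7a' are assumed to correspond to the Reduction-A and Reduction-B outputs in the Keras InceptionResNetV2 implementation, each output is assumed to be global-average-pooled before concatenation, and a linear-kernel SVM stands in for the paper's classifier; x_train, y_train, and x_test are hypothetical placeholders for the segmented, augmented colposcopy images and their labels.

import numpy as np
from tensorflow.keras.applications import InceptionResNetV2
from tensorflow.keras.applications.inception_resnet_v2 import preprocess_input
from tensorflow.keras.layers import Concatenate, GlobalAveragePooling2D
from tensorflow.keras.models import Model
from sklearn.svm import SVC

# Backbone pre-trained on ImageNet; the segmented, augmented colposcopy
# patches are assumed to be resized to the network's 299 x 299 RGB input.
backbone = InceptionResNetV2(weights="imagenet", include_top=False,
                             input_shape=(299, 299, 3))

# Assumption: 'mixed_6a' and 'mixed_7a' are the Reduction-A and Reduction-B
# outputs. Pool each to a fixed-length vector and concatenate them.
feat_a = GlobalAveragePooling2D()(backbone.get_layer("mixed_6a").output)
feat_b = GlobalAveragePooling2D()(backbone.get_layer("mixed_7a").output)
fused = Concatenate()([feat_a, feat_b])
extractor = Model(inputs=backbone.input, outputs=fused)

def extract_features(images):
    # images: array of shape (N, 299, 299, 3); returns one fused vector per image.
    x = preprocess_input(np.asarray(images, dtype="float32"))
    return extractor.predict(x, verbose=0)

# Hypothetical placeholders: x_train/x_test are segmented transformation-zone
# images, y_train their class labels.
# svm = SVC(kernel="linear")
# svm.fit(extract_features(x_train), y_train)
# predictions = svm.predict(extract_features(x_test))

Because the two pooled vectors have different depths, the concatenation yields a single fixed-length descriptor per image that the SVM can consume directly.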
format Article
id doaj.art-eb70a68719914db0ac2a1d873936b19f
institution Directory Open Access Journal
issn 1176-9351
language English
publishDate 2023-03-01
publisher SAGE Publishing
series Cancer Informatics
author affiliations Srikanta Dash: Department of Electronics, Sambalpur University, Sambalpur, Odisha, India; Prabira Kumar Sethy: Department of Electronics, Sambalpur University, Sambalpur, Odisha, India; Santi Kumari Behera: Department of CSE, VSSUT Burla, Sambalpur, Odisha, India
title Cervical Transformation Zone Segmentation and Classification based on Improved Inception-ResNet-V2 Using Colposcopy Images
url https://doi.org/10.1177/11769351231161477