Automated Image Annotation With Novel Features Based on Deep ResNet50-SLT

Bibliographic Details
Main Authors: Myasar Mundher Adnan, Mohd Shafry Mohd Rahim, Amjad Rehman Khan, Ahmed Alkhayyat, Faten S. Alamri, Tanzila Saba, Saeed Ali Bahaj
Format: Article
Language: English
Published: IEEE, 2023-01-01
Series: IEEE Access
Subjects: Automatic image annotation; deep learning; features extraction; digital learning; Slantlet transform; technological development
Online Access: https://ieeexplore.ieee.org/document/10098776/
author Myasar Mundher Adnan
Mohd Shafry Mohd Rahim
Amjad Rehman Khan
Ahmed Alkhayyat
Faten S. Alamri
Tanzila Saba
Saeed Ali Bahaj
collection DOAJ
description The growing number of digital images in personal archives and on websites has become unmanageable due to their vast size, making it challenging to retrieve images accurately from these large databases. While such collections are popular because of their convenience, they are often not equipped with proper indexing information, making it difficult for users to find what they need. One of the most significant challenges in computer vision and multimedia is image annotation, which involves labeling images with descriptive keywords. However, computers do not possess the human capability to understand the essence of an image; they can only identify images by their visual attributes rather than their deeper semantic meaning. Image annotation therefore requires keywords that effectively communicate the contents of an image to a computer system. Because the raw pixels of an image do not by themselves carry enough information to generate semantic concepts, image annotation is a complex task. Unlike text annotation, where the dictionary linking words to semantics is well established, image annotation lacks a clear definition of the “words” or “sentences” that can be associated with the meaning of an image; this is known as the semantic gap. To address this challenge, this study aimed to characterize image content meaningfully so as to make information retrieval easier. An improved automatic image annotation (AIA) system was proposed to bridge the semantic gap between low-level computer features and the human interpretation of images by assigning one or more labels to each image. The proposed AIA system converts raw image pixels into semantic-level concepts, providing a clearer representation of the image content. The study combined ResNet50 and the slantlet transform (SLT) with word2vec, principal component analysis, and t-distributed stochastic neighbor embedding to balance precision and recall; this allowed the researchers to determine the optimal model for the proposed ResNet50-SLT AIA framework. A word2vec model was used with ResNet50-SLT, principal component analysis, and t-distributed stochastic neighbor embedding to improve annotation prediction accuracy. The distributed representation approach encoded and stored information about image features, and the proposed AIA system used a seq2seq model to generate sentences from the feature vectors. The system was implemented on the most popular datasets (Flickr8k, Corel-5k, ESP-Game). The results showed that the newly developed AIA scheme overcame the computational time complexity that affects most existing image annotation models during the training phase on large datasets. The performance evaluation of the AIA scheme showed excellent annotation flexibility, improved accuracy, and reduced computational cost, outperforming existing state-of-the-art methods. In conclusion, this AIA framework can provide immense benefits in accurately selecting and extracting image features and in easily retrieving images from large databases. The extracted features can effectively represent the image, accelerating the annotation process and minimizing computational complexity.
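The description above sketches a concrete pipeline: ResNet50 deep features combined with slantlet transform (SLT) coefficients, reduced with principal component analysis and t-SNE, alongside word2vec embeddings of the annotation keywords that a seq2seq decoder can consume. The Python sketch below is only an illustration of how such a pipeline could be wired together; it is not the authors' code. TensorFlow/Keras, scikit-learn, and gensim are assumed to be available, the helper names (resnet50_features, slantlet_features, reduce_features) are invented for illustration, and the slantlet step is a hypothetical placeholder because the transform is not provided by common Python libraries.

# Minimal sketch of the kind of feature pipeline described in the abstract.
# Not the authors' implementation; library choices and helper names are assumptions.
import numpy as np
from tensorflow.keras.applications.resnet50 import ResNet50, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from gensim.models import Word2Vec

# Pretrained ResNet50 without the classification head; global average pooling
# yields one 2048-dimensional deep feature vector per image.
cnn = ResNet50(weights="imagenet", include_top=False, pooling="avg")

def resnet50_features(img_path):
    # Deep (CNN) feature channel for a single image.
    img = image.load_img(img_path, target_size=(224, 224))
    x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))
    return cnn.predict(x, verbose=0)[0]          # shape: (2048,)

def slantlet_features(img_path, n_coeffs=256):
    # Hypothetical placeholder for the slantlet transform (SLT) channel:
    # returns a fixed-length vector of zeros where real SLT coefficients would go.
    return np.zeros(n_coeffs)

def combined_features(img_paths):
    # Concatenate the deep and SLT channels for every image.
    return np.stack([np.concatenate([resnet50_features(p), slantlet_features(p)])
                     for p in img_paths])

def reduce_features(features, pca_dims=50, tsne_dims=2):
    # PCA followed by t-SNE, as in the described ResNet50-SLT pipeline.
    compact = PCA(n_components=pca_dims).fit_transform(features)
    return TSNE(n_components=tsne_dims, init="pca", perplexity=5).fit_transform(compact)

# word2vec over the annotation keywords gives each label a distributed
# representation; toy labels are used here purely for illustration.
annotations = [["dog", "grass", "running"], ["beach", "sea", "sky"]]
w2v = Word2Vec(sentences=annotations, vector_size=100, window=5, min_count=1)
print(w2v.wv["dog"].shape)                        # (100,)

As described in the abstract, a full system would replace the placeholder with real SLT coefficients and feed the reduced feature vectors, together with the word2vec label embeddings, into a seq2seq model that generates the annotation sentence; that decoder is omitted from this sketch.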
first_indexed 2024-03-13T08:00:42Z
format Article
id doaj.art-fa2bb2214b4b4bc887943685473264e7
institution Directory Open Access Journal
issn 2169-3536
language English
last_indexed 2024-03-13T08:00:42Z
publishDate 2023-01-01
publisher IEEE
record_format Article
series IEEE Access
spelling doaj.art-fa2bb2214b4b4bc887943685473264e7 | 2023-06-01T23:00:23Z | eng | IEEE | IEEE Access | ISSN 2169-3536 | 2023-01-01 | vol. 11, pp. 40258-40277 | doi:10.1109/ACCESS.2023.3266296 | document 10098776 | Automated Image Annotation With Novel Features Based on Deep ResNet50-SLT
Myasar Mundher Adnan (https://orcid.org/0000-0003-3260-9171), Faculty of Engineering, School of Computing, Universiti Teknologi Malaysia, Skudai, Johor, Malaysia
Mohd Shafry Mohd Rahim (https://orcid.org/0000-0002-5074-2008), School of Computing, Universiti Teknologi Malaysia, Skudai, Johor, Malaysia
Amjad Rehman Khan (https://orcid.org/0000-0002-0101-0329), Artificial Intelligence and Data Analytics Laboratory, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
Ahmed Alkhayyat (https://orcid.org/0000-0002-0962-3453), School of Computing, Universiti Teknologi Malaysia, Skudai, Johor, Malaysia
Faten S. Alamri, Department of Mathematical Sciences, College of Science, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
Tanzila Saba (https://orcid.org/0000-0003-3138-3801), Artificial Intelligence and Data Analytics Laboratory, CCIS, Prince Sultan University, Riyadh, Saudi Arabia
Saeed Ali Bahaj (https://orcid.org/0000-0003-3406-4320), MIS Department, College of Business Administration, Prince Sattam Bin Abdulaziz University, Alkharj, Saudi Arabia
https://ieeexplore.ieee.org/document/10098776/
title Automated Image Annotation With Novel Features Based on Deep ResNet50-SLT
topic Automatic image annotation
deep learning
features extraction
digital learning
Slantlet transform
technological development
url https://ieeexplore.ieee.org/document/10098776/