Knowledge-Based Scene Graph Generation with Visual Contextual Dependency

Scene graph generation is the basis of various computer vision applications, including image retrieval, visual question answering, and image captioning. Previous studies have relied on visual features or incorporated auxiliary information to predict object relationships. However, the rich semantics of external knowledge have not yet been fully utilized, and the combination of visual and auxiliary information can lead to visual dependencies, which impact relationship prediction among objects. Therefore, we propose a novel knowledge-based model with adjustable visual contextual dependency. Our model has three key components. The first module extracts visual features and bounding boxes from the input image. The second module uses two encoders to fully integrate visual information and external knowledge. Finally, a visual context loss and a visual relationship loss are introduced to adjust the visual dependency of the model. The difference between the initial prediction results and the visual dependency results is calculated to generate the dependency-corrected results. The proposed model can obtain better global and contextual information for predicting object relationships, and the visual dependencies can be adjusted through the two loss functions. The results of extensive experiments show that our model outperforms most existing methods.
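The abstract describes the architecture only at a high level, so the following PyTorch-style sketch is a hypothetical reading of it rather than the authors' implementation. It assumes that the "initial" and "visual dependency" results are predicate logits produced by a fused visual-plus-knowledge encoder and a visual-only encoder, and that the dependency-corrected result is their difference; all names (DependencyCorrectedPredictor, fusion_encoder, alpha, beta) and the cross-entropy forms of the two losses are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DependencyCorrectedPredictor(nn.Module):
    """Hypothetical sketch of the dependency-correction idea in the abstract.

    Two branches score predicates for each object pair: one from fused
    visual + external-knowledge features, one from visual features alone.
    The corrected prediction is the difference between the two.
    """

    def __init__(self, visual_dim, knowledge_dim, hidden_dim, num_predicates):
        super().__init__()
        # Encoder over concatenated visual and knowledge-embedding features.
        self.fusion_encoder = nn.Sequential(
            nn.Linear(visual_dim + knowledge_dim, hidden_dim), nn.ReLU()
        )
        # Visual-only encoder; its output stands in for the visual dependency.
        self.visual_encoder = nn.Sequential(
            nn.Linear(visual_dim, hidden_dim), nn.ReLU()
        )
        self.fusion_head = nn.Linear(hidden_dim, num_predicates)
        self.visual_head = nn.Linear(hidden_dim, num_predicates)

    def forward(self, visual_feats, knowledge_embs):
        fused = torch.cat([visual_feats, knowledge_embs], dim=-1)
        initial_logits = self.fusion_head(self.fusion_encoder(fused))
        visual_logits = self.visual_head(self.visual_encoder(visual_feats))
        # Dependency-corrected result: initial prediction minus the
        # prediction attributable to visual information alone.
        corrected_logits = initial_logits - visual_logits
        return initial_logits, visual_logits, corrected_logits


def training_loss(initial_logits, visual_logits, corrected_logits,
                  labels, alpha=1.0, beta=0.5):
    # A "visual relationship loss" on the corrected prediction and a
    # "visual context loss" on the visual-only branch; the weights
    # alpha and beta (illustrative) adjust the visual dependency.
    rel_loss = F.cross_entropy(corrected_logits, labels)
    ctx_loss = F.cross_entropy(visual_logits, labels)
    return alpha * rel_loss + beta * ctx_loss
```

Subtracting the visual-only logits in this way mirrors counterfactual-debiasing schemes used elsewhere in scene graph generation; the paper's actual correction and loss definitions should be taken from the full text at the URL in the record below.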

Bibliographic Details
Main Authors: Lizong Zhang, Haojun Yin, Bei Hui, Sijuan Liu, Wei Zhang
Format: Article
Language: English
Published: MDPI AG, 2022-07-01
Series: Mathematics
Subjects: scene graph generation; external knowledge; context fusion; computer vision; visual dependency constraint
Online Access:https://www.mdpi.com/2227-7390/10/14/2525
Author Affiliations:
Lizong Zhang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Haojun Yin: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Bei Hui: School of Information and Software Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Sijuan Liu: Research Institute of Social Development, Southwestern University of Finance and Economics, Chengdu 611130, China
Wei Zhang: School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Citation: Mathematics 2022, 10(14), 2525
ISSN: 2227-7390
DOI: 10.3390/math10142525