Semantic gap in CBIR: automatic objects spatial relationships semantic extraction and representation

Bibliographic Details
Main Authors: Hui, Hui Wang, Mohamad, Dzulkifli, Ismail, N. A.
Format: Article
Language: English
Published: Computer Science Journals 2010
Subjects:
Online Access:http://eprints.utm.my/38414/2/IJIP-189.pdf
Description
Summary: The explosive growth of image data has created a need for research and development in image retrieval. Image retrieval research is moving from keywords to low-level features and on to semantic features. The drive towards semantic features arises because keywords can be highly subjective and time-consuming to assign, while low-level features cannot always describe the high-level concepts in the user's mind. This paper proposes a novel technique for extracting and representing the semantics of spatial relationships among the objects present in an image. All objects are identified using low-level feature extraction integrated with the proposed line detection technique. Each object is represented by a Minimum Bound Region (MBR) with a reference coordinate, and the reference coordinates are used to compute the spatial relations among objects. Eight spatial relationship concepts are determined: "Front", "Back", "Right", "Left", "Right-Front", "Left-Front", "Right-Back", and "Left-Back". A user query in text form is automatically translated into this semantic meaning and representation. In addition, an image similarity measure based on object spatial relationship semantics is proposed.
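
The abstract does not give the exact decision rule, but the MBR-plus-reference-coordinate scheme it describes can be illustrated with a short sketch. The sketch below assumes the reference coordinate is the MBR centre and that the eight concepts are assigned by quantising the direction between two reference coordinates into 45-degree sectors; the `MBR` class, the `spatial_relation` function, and that angle rule are illustrative assumptions, not the paper's actual formulation.

```python
from dataclasses import dataclass
import math


@dataclass
class MBR:
    """Minimum Bound Region of a detected object (illustrative)."""
    x_min: float
    y_min: float
    x_max: float
    y_max: float

    @property
    def reference(self) -> tuple[float, float]:
        # Assumption: the reference coordinate is taken as the MBR centre.
        return ((self.x_min + self.x_max) / 2.0,
                (self.y_min + self.y_max) / 2.0)


# The eight spatial relationship concepts named in the abstract,
# ordered counter-clockwise starting from "Right".
CONCEPTS = ["Right", "Right-Front", "Front", "Left-Front",
            "Left", "Left-Back", "Back", "Right-Back"]


def spatial_relation(a: MBR, b: MBR) -> str:
    """Return the concept describing where object b lies relative to object a.

    Illustrative rule only: the angle between the two reference coordinates
    is quantised into eight 45-degree sectors, with "Front" mapped to the
    upward direction in image coordinates.
    """
    ax, ay = a.reference
    bx, by = b.reference
    # Image y grows downwards, so negate dy to make "Front" point up.
    angle = math.degrees(math.atan2(-(by - ay), bx - ax)) % 360.0
    sector = int(((angle + 22.5) % 360.0) // 45.0)
    return CONCEPTS[sector]


if __name__ == "__main__":
    car = MBR(10, 60, 50, 90)
    tree = MBR(70, 10, 90, 40)
    print(spatial_relation(car, tree))  # prints "Right-Front"
```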