Bag of words KAZE (BoWK) with two‐step classification for high‐resolution remote sensing images


Bibliographic Details
Main Authors: Usman Muhammad, Weiqiang Wang, Abdenour Hadid, Shahbaz Pervez
Format: Article
Language: English
Published: Wiley 2019-06-01
Series: IET Computer Vision
Subjects:
Online Access: https://doi.org/10.1049/iet-cvi.2018.5069
Description
Summary: The bag‐of‐words (BoW) model is widely used for scene classification in recent state‐of‐the‐art methods. However, inter‐class similarity among scene categories and the very high spatial resolution of the imagery limit its performance in the remote‐sensing domain. This research therefore presents a new KAZE‐based image descriptor that builds on the BoW approach to substantially improve classification performance. First, a novel multi‐neighbourhood KAZE descriptor is proposed for small image patches. Second, spatial pyramid matching and the BoW representation are applied to the extracted features to form a novel BoW KAZE (BoWK) descriptor. Third, two bags of multi‐neighbourhood KAZE features are selected, with each bag treated as a separate feature descriptor. Finally, canonical correlation analysis is introduced as a feature fusion strategy to further refine the BoWK features, yielding a more effective and robust fusion than traditional feature fusion strategies. Experiments on three challenging remote‐sensing data sets show that the proposed BoWK descriptor not only surpasses the conventional KAZE descriptor but also achieves significantly higher classification performance than current state‐of‐the‐art methods. Moreover, the proposed BoWK approach produces rich, informative features that describe scene images at low computational cost and with a much lower feature dimension.
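The pipeline described in the abstract (quantise local descriptors into BoW histograms for two separate bags, then fuse the two views with canonical correlation analysis) can be sketched roughly as below. This is a minimal illustration, not the authors' implementation: the function names (`bow_histogram`, `cca_fuse`), the codebook sizes, and the random arrays standing in for KAZE descriptors are all assumptions for demonstration; a real system would extract actual multi‐neighbourhood KAZE features from image patches and learn codebooks with k‐means.

```python
import numpy as np

rng = np.random.default_rng(0)

def bow_histogram(descriptors, codebook):
    # Assign each local descriptor to its nearest visual word,
    # then build an L1-normalised word-count histogram.
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

def inv_sqrt(S):
    # Inverse matrix square root of a symmetric positive-definite matrix.
    w, V = np.linalg.eigh(S)
    return V @ np.diag(1.0 / np.sqrt(np.clip(w, 1e-10, None))) @ V.T

def cca_fuse(X, Y, d):
    # Textbook CCA: whiten each view, SVD the cross-covariance,
    # project both views onto the top-d correlated directions,
    # and fuse by summation of the projections.
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    n, reg = len(X), 1e-6  # small ridge term for numerical stability
    Sxx = Xc.T @ Xc / n + reg * np.eye(X.shape[1])
    Syy = Yc.T @ Yc / n + reg * np.eye(Y.shape[1])
    Sxy = Xc.T @ Yc / n
    M = inv_sqrt(Sxx) @ Sxy @ inv_sqrt(Syy)
    U, s, Vt = np.linalg.svd(M)
    Wx = inv_sqrt(Sxx) @ U[:, :d]
    Wy = inv_sqrt(Syy) @ Vt.T[:, :d]
    return Xc @ Wx + Yc @ Wy

# Toy example: 20 "images", each with 100 random 64-dim local descriptors
# per bag, quantised against two hypothetical 32-word codebooks.
k, dim, n_imgs = 32, 64, 20
cb1, cb2 = rng.normal(size=(k, dim)), rng.normal(size=(k, dim))
X = np.stack([bow_histogram(rng.normal(size=(100, dim)), cb1) for _ in range(n_imgs)])
Y = np.stack([bow_histogram(rng.normal(size=(100, dim)), cb2) for _ in range(n_imgs)])
fused = cca_fuse(X, Y, d=8)
print(fused.shape)  # (20, 8): one fused low-dimensional feature per image
```

The summation fusion in the last line of `cca_fuse` is one common way to combine CCA-projected views; concatenating the two projections is an equally valid alternative, at twice the dimension.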
ISSN: 1751-9632, 1751-9640