Video-based face recognition in color space by graph-based discriminant analysis

Bibliographic Details
Main Authors: S. Shafeipour Yourdeshahi, H. Seyedarabi, A. Aghagolzadeh
Format: Article
Language: English
Published: Shahrood University of Technology, 2016-07-01
Series: Journal of Artificial Intelligence and Data Mining
Online Access: http://jad.shahroodut.ac.ir/article_639_28f38cbd9aef5127130e677338735dd4.pdf
Description
Summary: Video-based face recognition has attracted significant attention over the past decade in applications such as media technology, network security, human-machine interfaces, and automatic access control systems. Face recognition is usually performed on the grayscale image produced by combining the three color component images. In this work, we consider the color space as well as the grayscale image in the recognition process. To extract key frames from a video sequence, the input video is partitioned into a number of clusters, each of which acts as a linear subspace, and the center of each cluster is taken as its representative. For comparing the key frames, the three popular color spaces RGB, YCbCr, and HSV are used for the mathematical representation, and graph-based discriminant analysis is applied for recognition. We also show that introducing intra-class and inter-class similarity graphs into the color space reduces the problem to determining a color component combination vector and a mapping matrix. We introduce an iterative algorithm that determines this optimal vector and matrix simultaneously. Finally, the results for the three color spaces and the grayscale image are compared with those obtained by other available methods. Our experimental results demonstrate the effectiveness of the proposed approach.
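To make the summarized pipeline concrete, two hypothetical Python/NumPy sketches follow. The first covers the key frame extraction step: the abstract says the video is partitioned into clusters, each treated as a linear subspace, with the cluster center as its representative. The sketch substitutes plain k-means as a stand-in for whatever clustering the paper actually uses; the function name key_frames and all parameters are illustrative assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def key_frames(frames, n_clusters=5, seed=0):
    """frames: (n, d) flattened video frames. Cluster them and return,
    for each cluster, the index of the frame closest to its center
    (a simple reading of 'the cluster center as representative')."""
    centers, assign = kmeans2(frames, n_clusters, seed=seed, minit='++')
    reps = []
    for k in range(n_clusters):
        members = np.flatnonzero(assign == k)
        if members.size:                       # skip empty clusters
            dists = np.linalg.norm(frames[members] - centers[k], axis=1)
            reps.append(members[dists.argmin()])
    return np.array(reps)
```

The second sketch reconstructs the alternating scheme the abstract describes: with the color combination vector fixed, the mapping matrix is obtained from a generalized eigenproblem built on the intra-class and inter-class graph Laplacians; with the mapping fixed, the combination vector follows from an analogous 3x3 eigenproblem. The Laplacian-based scatter construction and all names (graph_da_color, combine_colors) are assumptions for illustration, not details taken from the paper.

```python
from scipy.linalg import eigh

def laplacian(S):
    """Graph Laplacian L = D - S of a symmetric similarity matrix S."""
    return np.diag(S.sum(axis=1)) - S

def combine_colors(X, c):
    """X: (n, d, 3) stack of the three color-component images per sample;
    c: (3,) combination vector. Returns the (n, d) combined features."""
    return X @ c

def graph_da_color(X, S_intra, S_inter, n_dims=20, n_iters=10, eps=1e-6):
    """Alternately optimize the color combination vector c and the mapping
    matrix P so that inter-class graph scatter is maximized relative to
    intra-class graph scatter (hypothetical reconstruction)."""
    n, d, _ = X.shape
    L_w, L_b = laplacian(S_intra), laplacian(S_inter)
    c = np.ones(3) / 3.0                       # start from a plain grayscale mix
    for _ in range(n_iters):
        # Step 1: fix c, get P from the generalized eigenproblem
        #         max_P tr(P^T F L_b F^T P) / tr(P^T F L_w F^T P)
        F = combine_colors(X, c).T             # (d, n) combined features
        Sb = F @ L_b @ F.T
        Sw = F @ L_w @ F.T + eps * np.eye(d)   # regularize for stability
        _, vecs = eigh(Sb, Sw)                 # eigenvalues in ascending order
        P = vecs[:, -n_dims:]                  # keep the top n_dims directions
        # Step 2: fix P, get c from the analogous 3x3 eigenproblem.
        # G[k] holds the projected k-th color component of every sample.
        G = np.stack([P.T @ X[:, :, k].T for k in range(3)])  # (3, n_dims, n)
        Ab = np.einsum('imp,pq,jmq->ij', G, L_b, G)
        Aw = np.einsum('imp,pq,jmq->ij', G, L_w, G) + eps * np.eye(3)
        _, w_vecs = eigh(Ab, Aw)
        c = w_vecs[:, -1]                      # top generalized eigenvector
        c /= np.abs(c).sum()                   # fix the scale of the mix
    return c, P

# Toy usage, purely to illustrate shapes: 40 random "frames" of 16x16
# color images (d = 256 per channel), 4 classes of 10 samples each.
rng = np.random.default_rng(0)
X = rng.standard_normal((40, 256, 3))
labels = np.repeat(np.arange(4), 10)
same = (labels[:, None] == labels[None, :]).astype(float)
c, P = graph_da_color(X, S_intra=same, S_inter=1.0 - same, n_dims=5)
print(c, P.shape)                              # combination weights, (256, 5)
```

In practice the pixel dimension d would typically be reduced (e.g., by PCA) before the d-by-d eigendecomposition. The alternation follows the usual justification for schemes of this kind: each step maximizes the same graph-scatter criterion in closed form with the other variable held fixed.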
ISSN: 2322-5211, 2322-4444