Fusing the facial temporal information in videos for face recognition
Face recognition is a challenging and innovative research topic in the present sophisticated world of visual technology. In most of the existing approaches, face recognition from still images is affected by intra‐personal variations such as pose, illumination and expression, which degrade the...
Main Authors: | Ithayarani Panner Selvam, Muneeswaran Karruppiah |
---|---|
Format: | Article |
Language: | English |
Published: | Wiley, 2016-10-01 |
Series: | IET Computer Vision |
Subjects: | facial temporal information; video signal processing; face recognition; feature vector; normalised semilocal binary patterns; face region |
Online Access: | https://doi.org/10.1049/iet-cvi.2015.0394 |
_version_ | 1797685043742638080 |
---|---|
author | Ithayarani Panner Selvam; Muneeswaran Karruppiah |
author_facet | Ithayarani Panner Selvam; Muneeswaran Karruppiah |
author_sort | Ithayarani Panner Selvam |
collection | DOAJ |
description | Face recognition is a challenging and innovative research topic in the present sophisticated world of visual technology. In most of the existing approaches, face recognition from still images is affected by intra‐personal variations such as pose, illumination and expression, which degrade performance. This study proposes a novel approach to video‐based face recognition that exploits the large intra‐personal variations available in video. A feature vector based on normalised semi‐local binary patterns is obtained for the face region. Each frame is matched against the signatures of the faces in the database and a ranked list is formed. Each ranked list is clustered and its reliability is analysed for re‐ranking. To characterise an individual in a video, the multiple re‐ranked lists across the video frames are fused to form a video signature. This video signature embeds diverse intra‐personal and temporal variations, which facilitates matching two videos with large variations. To match two videos, their video signatures are compared using the Kendall‐tau distance. The developed methods are evaluated on the YouTube and ChokePoint videos and exhibit significant performance improvement when compared with existing techniques. |
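The abstract above outlines a multi-stage pipeline: per-frame texture features, ranking each frame against a gallery, fusing the per-frame ranked lists into a video signature, and comparing two signatures with a Kendall-tau-based distance. Purely as an illustration, the sketch below shows one way such a pipeline could be wired together in Python; it is not the authors' implementation. The normalised semi-local binary pattern descriptor is approximated here by an ordinary uniform LBP histogram (via scikit-image), the clustering/reliability-based re-ranking step is replaced by simple rank-sum (Borda-style) fusion, and all function names (`frame_feature`, `rank_gallery`, `video_signature`, `signature_distance`) are hypothetical.

```python
# Minimal sketch (NOT the paper's exact method) of the pipeline described in
# the abstract. Assumptions: the normalised semi-local binary pattern feature
# is approximated by a uniform LBP histogram, and re-ranking is simplified to
# rank-sum fusion across frames.

import numpy as np
from scipy.stats import kendalltau
from skimage.feature import local_binary_pattern


def frame_feature(gray_face, p=8, r=1):
    """L1-normalised uniform-LBP histogram for one cropped grayscale face image."""
    codes = local_binary_pattern(gray_face, p, r, method="uniform")
    hist, _ = np.histogram(codes, bins=p + 2, range=(0, p + 2))
    return hist / max(hist.sum(), 1)


def rank_gallery(feature, gallery):
    """Order gallery identities from best to worst match for one frame feature."""
    dists = {gid: np.linalg.norm(feature - g) for gid, g in gallery.items()}
    return sorted(dists, key=dists.get)            # ranked list of identities


def video_signature(frames, gallery):
    """Fuse per-frame ranked lists into one ranked list (the video signature)."""
    ids = list(gallery)
    rank_sum = np.zeros(len(ids))
    for f in frames:
        ranked = rank_gallery(frame_feature(f), gallery)
        # accumulate each identity's rank across frames (Borda-style fusion)
        rank_sum += [ranked.index(i) for i in ids]
    order = np.argsort(rank_sum)
    return [ids[i] for i in order]


def signature_distance(sig_a, sig_b):
    """Kendall-tau-based distance between two video signatures (0 = identical)."""
    pos_b = {gid: k for k, gid in enumerate(sig_b)}
    tau, _ = kendalltau(range(len(sig_a)), [pos_b[g] for g in sig_a])
    return (1.0 - tau) / 2.0
```

Here `gallery` is assumed to be a dict mapping each identity to a reference feature vector, and `frames` a list of cropped grayscale face images; with those inputs, `signature_distance(video_signature(frames_a, gallery), video_signature(frames_b, gallery))` yields a value in [0, 1], with 0 meaning the two videos induce identical rankings over the gallery.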
first_indexed | 2024-03-12T00:39:39Z |
format | Article |
id | doaj.art-dcd51fe233854a81b685868f6f0b1e47 |
institution | Directory Open Access Journal |
issn | 1751-9632; 1751-9640 |
language | English |
last_indexed | 2024-03-12T00:39:39Z |
publishDate | 2016-10-01 |
publisher | Wiley |
record_format | Article |
series | IET Computer Vision |
spelling | doaj.art-dcd51fe233854a81b685868f6f0b1e47 2023-09-15T09:05:18Z; eng; Wiley; IET Computer Vision; 1751-9632, 1751-9640; 2016-10-01; vol. 10, iss. 7, pp. 650-659; 10.1049/iet-cvi.2015.0394; Fusing the facial temporal information in videos for face recognition; Ithayarani Panner Selvam (Department of Computer Science and Engineering, Mepco Schlenk Engineering College, Mepco Nagar, Amathur (Post), Sivakasi 626 005, India); Muneeswaran Karruppiah (Department of Computer Science and Engineering, Mepco Schlenk Engineering College, Mepco Nagar, Amathur (Post), Sivakasi 626 005, India); https://doi.org/10.1049/iet-cvi.2015.0394; facial temporal information; video signal processing; face recognition; feature vector; normalised semilocal binary patterns; face region |
spellingShingle | Ithayarani Panner Selvam; Muneeswaran Karruppiah; Fusing the facial temporal information in videos for face recognition; IET Computer Vision; facial temporal information; video signal processing; face recognition; feature vector; normalised semilocal binary patterns; face region |
title | Fusing the facial temporal information in videos for face recognition |
title_full | Fusing the facial temporal information in videos for face recognition |
title_fullStr | Fusing the facial temporal information in videos for face recognition |
title_full_unstemmed | Fusing the facial temporal information in videos for face recognition |
title_short | Fusing the facial temporal information in videos for face recognition |
title_sort | fusing the facial temporal information in videos for face recognition |
topic | facial temporal information; video signal processing; face recognition; feature vector; normalised semilocal binary patterns; face region |
url | https://doi.org/10.1049/iet-cvi.2015.0394 |
work_keys_str_mv | AT ithayaranipannerselvam fusingthefacialtemporalinformationinvideosforfacerecognition AT muneeswarankarruppiah fusingthefacialtemporalinformationinvideosforfacerecognition |