A generative model for depth-based robust 3D facial pose tracking

We consider the problem of depth-based robust 3D facial pose tracking under unconstrained scenarios with heavy occlusions and arbitrary facial expression variations. Unlike previous depth-based discriminative or data-driven methods that require sophisticated training or manual intervention, we propose a generative framework that unifies pose tracking and face model adaptation on the fly. In particular, we propose a statistical 3D face model that has the flexibility to generate and predict the distribution and uncertainty underlying the face model. Moreover, unlike prior art that employs ICP-based facial pose estimation, we propose a ray visibility constraint that regularizes the pose based on the face model's visibility against the input point cloud, which improves robustness against occlusions. Experimental results on the Biwi and ICT-3DHP datasets show that the proposed framework is effective and outperforms state-of-the-art depth-based methods.
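As a rough illustration of the ray visibility idea in the abstract, the sketch below (plain NumPy, not the authors' code; the paper formulates the constraint probabilistically within its generative framework) checks, for each face-model vertex under a candidate rigid pose, whether the observed depth along the same camera ray is markedly closer to the camera than the model predicts; such vertices are treated as occluded and excluded from the pose update. The function name, camera intrinsics, and the margin tau are illustrative assumptions, not values from the paper.

# Illustrative sketch only: a binary visibility test against an observed depth
# map; the paper's actual constraint is probabilistic and part of a generative
# model, so treat this as a geometric caricature of the idea.
import numpy as np

def ray_visibility_mask(vertices, R, t, depth_map, fx, fy, cx, cy, tau=0.02):
    """Mask of face-model vertices consistent with the observed depth map.

    vertices        : (N, 3) face-model vertices in model coordinates (metres)
    R (3x3), t (3,) : rigid pose mapping model coordinates to camera coordinates
    depth_map       : (H, W) observed depth image in metres (0 = missing data)
    fx, fy, cx, cy  : pinhole intrinsics (assumed known)
    tau             : occlusion margin in metres (hypothetical value)
    """
    cam = vertices @ R.T + t              # vertices in the camera frame
    z = cam[:, 2]
    valid = z > 1e-6                      # keep points in front of the camera

    # Project each vertex onto the depth image (nearest-pixel lookup).
    u = np.full(z.shape, -1, dtype=int)
    v = np.full(z.shape, -1, dtype=int)
    u[valid] = np.round(fx * cam[valid, 0] / z[valid] + cx)
    v[valid] = np.round(fy * cam[valid, 1] / z[valid] + cy)

    h, w = depth_map.shape
    in_image = valid & (u >= 0) & (u < w) & (v >= 0) & (v < h)

    observed = np.zeros_like(z)
    observed[in_image] = depth_map[v[in_image], u[in_image]]

    # A vertex is occluded when something measurably closer to the camera lies
    # on the same ray; such points should not drive the pose estimate.
    occluded = in_image & (observed > 0) & (observed < z - tau)
    return in_image & (observed > 0) & ~occluded

In the paper's formulation the statistical face model supplies per-vertex uncertainty, so the visibility decision is presumably soft rather than the hard mask used in this sketch.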

Bibliographic Details
Main Authors: Sheng, Lu; Cai, Jianfei; Cham, Tat-Jen; Pavlovic, Vladimir; Ngan, King Ngi
Other Authors: School of Computer Science and Engineering; Institute for Media Innovation (IMI)
Institution: Nanyang Technological University
Format: Conference Paper
Language: English
Published in: 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 4598-4607
Published: 2017 (repository record deposited 2020)
DOI: 10.1109/CVPR.2017.489
ISBN: 978-1-5386-0458-8
Scopus ID: 2-s2.0-85035237020
Subjects: Engineering::Computer science and engineering; Computer Vision; Face Recognition
Funding: NRF (National Research Foundation, Singapore); MOE (Ministry of Education, Singapore)
Fulltext: Accepted version (PDF)
Rights: © 2017 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works. The published version is available at https://doi.org/10.1109/CVPR.2017.489.
Citation: Sheng, L., Cai, J., Cham, T.-J., Pavlovic, V., & Ngan, K. N. (2017). A generative model for depth-based robust 3D facial pose tracking. 2017 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 4598-4607. doi:10.1109/CVPR.2017.489
Online Access: https://hdl.handle.net/10356/138494