Measuring 3D face deformations from RGB images of expression rehabilitation exercises
Background: The accurate (quantitative) analysis of face deformations in 3D is a problem of increasing interest for the many applications it may have. In particular, defining a 3D model of the face that can deform to a 2D target image, while capturing local and asymmetric deformations, is still a challenge in the existing literature…
Main Authors: | Claudio Ferrari, Stefano Berretti, Pietro Pala, Alberto Del Bimbo |
---|---|
Format: | Article |
Language: | English |
Published: | KeAi Communications Co., Ltd., 2022-08-01 |
Series: | Virtual Reality & Intelligent Hardware |
Subjects: | 3D Morphable Face Model; Sparse and Locally Coherent 3DMM Components; Local and asymmetric face deformations; Face rehabilitation; Face deformation measure |
Online Access: | http://www.sciencedirect.com/science/article/pii/S2096579622000456 |
_version_ | 1811274749579362304 |
---|---|
author | Claudio Ferrari; Stefano Berretti; Pietro Pala; Alberto Del Bimbo |
author_facet | Claudio Ferrari; Stefano Berretti; Pietro Pala; Alberto Del Bimbo |
author_sort | Claudio Ferrari |
collection | DOAJ |
description | Background: The accurate (quantitative) analysis of face deformations in 3D is a problem of increasing interest for the many applications it may have. In particular, defining a 3D model of the face that can deform to a 2D target image, while capturing local and asymmetric deformations, is still a challenge in the existing literature. Computing a measure of such local deformations may represent a relevant index for monitoring rehabilitation exercises, such as those used in Parkinson's and Alzheimer's disease or in recovery from a stroke. Methods: In this study, we present a complete framework for constructing a 3D Morphable Shape Model (3DMM) of the face and fitting it to a target RGB image. The model has the specific characteristic of being based on localized components of deformation; the fitting transformation is performed from 3D to 2D and is guided by the correspondence between landmarks detected in the target image and landmarks manually annotated on the average 3DMM. The fitting also has the peculiarity of being performed in two steps, disentangling face deformations due to the identity of the target subject from those induced by facial actions. Results: In the experimental validation of the method, we used the MICC-3D dataset, which includes 11 subjects, each acquired in one neutral pose plus 18 facial actions that deform the face in localized and asymmetric ways. For each acquisition, we fit the 3DMM to an RGB frame at the apex of the facial action and to the neutral frame, and computed the extent of the deformation. Results indicated that the proposed approach can accurately capture face deformations, even localized and asymmetric ones. Conclusions: The proposed framework validated the idea of measuring the deformations of a reconstructed 3D face model to monitor the facial actions performed in response to a set of target ones. Notably, these results were obtained using only RGB targets, without the need for 3D scans captured with costly devices. This opens the way to the use of the proposed tool for remote medical monitoring of rehabilitation. |
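The pipeline the abstract describes (landmark-guided fitting of a linear deformation model from 3D to 2D, followed by a per-vertex deformation measure between the neutral and apex fits) can be sketched in a few lines of NumPy. This is a minimal illustrative sketch, not the authors' implementation: the alternating scaled-orthographic camera / ridge-regression coefficient scheme, the regularization weight, and all function and parameter names are assumptions.

```python
import numpy as np

def fit_3dmm_to_landmarks(mean_shape, components, target_2d, lm_idx,
                          n_iters=20, reg=1e-3):
    """Fit a linear deformation model (mean + sum_k alpha_k * C_k) to 2D
    landmarks, alternating a camera step and a coefficient step.

    mean_shape : (N, 3) average face vertices
    components : (K, N, 3) deformation components (localized in the paper)
    target_2d  : (L, 2) landmarks detected in the target image
    lm_idx     : (L,) vertex indices of the annotated model landmarks
    """
    K = components.shape[0]
    C = components[:, lm_idx, :]                         # (K, L, 3)
    alpha = np.zeros(K)
    for _ in range(n_iters):
        lm3d = mean_shape[lm_idx] + np.tensordot(alpha, C, axes=1)  # (L, 3)
        # Camera step: linear least squares for a 2x3 scaled-orthographic
        # projection P and a 2D translation t.
        X = np.hstack([lm3d, np.ones((lm3d.shape[0], 1))])
        M, *_ = np.linalg.lstsq(X, target_2d, rcond=None)           # (4, 2)
        P, t = M[:3].T, M[3]
        # Coefficient step: ridge-regularized fit of the deformation
        # coefficients to the 2D landmark residual.
        proj = np.einsum('ij,klj->kli', P, C)            # (K, L, 2)
        A = proj.reshape(K, -1).T                        # (2L, K)
        r = (target_2d - (mean_shape[lm_idx] @ P.T + t)).ravel()
        alpha = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ r)
    fitted = mean_shape + np.tensordot(alpha, components, axes=1)
    return fitted, alpha, (P, t)

def deformation_map(fitted_apex, fitted_neutral):
    # Per-vertex Euclidean displacement between two fitted shapes: a
    # local deformation measure in the spirit of the paper's index.
    return np.linalg.norm(fitted_apex - fitted_neutral, axis=1)
```

In use, one would fit the model once to the neutral frame and once to the apex frame of each facial action, then inspect `deformation_map` to quantify how localized and asymmetric the performed action is.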
first_indexed | 2024-04-12T23:25:42Z |
format | Article |
id | doaj.art-194a291e4eff43f8b44f9dc4079bb513 |
institution | Directory Open Access Journal |
issn | 2096-5796 |
language | English |
last_indexed | 2024-04-12T23:25:42Z |
publishDate | 2022-08-01 |
publisher | KeAi Communications Co., Ltd. |
record_format | Article |
series | Virtual Reality & Intelligent Hardware |
spelling | doaj.art-194a291e4eff43f8b44f9dc4079bb513 2022-12-22T03:12:25Z eng KeAi Communications Co., Ltd. Virtual Reality & Intelligent Hardware 2096-5796 2022-08-0144306323. Measuring 3D face deformations from RGB images of expression rehabilitation exercises. Author affiliations: Claudio Ferrari, Department of Engineering and Architecture, University of Parma, Parma, 43124, Italy (corresponding author); Stefano Berretti, Pietro Pala, and Alberto Del Bimbo, Department of Information Engineering, University of Florence, Florence, 50139, Italy. |
spellingShingle | Claudio Ferrari; Stefano Berretti; Pietro Pala; Alberto Del Bimbo; Measuring 3D face deformations from RGB images of expression rehabilitation exercises; Virtual Reality & Intelligent Hardware; 3D Morphable Face Model; Sparse and Locally Coherent 3DMM Components; Local and asymmetric face deformations; Face rehabilitation; Face deformation measure |
title | Measuring 3D face deformations from RGB images of expression rehabilitation exercises |
title_full | Measuring 3D face deformations from RGB images of expression rehabilitation exercises |
title_fullStr | Measuring 3D face deformations from RGB images of expression rehabilitation exercises |
title_full_unstemmed | Measuring 3D face deformations from RGB images of expression rehabilitation exercises |
title_short | Measuring 3D face deformations from RGB images of expression rehabilitation exercises |
title_sort | measuring 3d face deformations from rgb images of expression rehabilitation exercises |
topic | 3D Morphable Face Model; Sparse and Locally Coherent 3DMM Components; Local and asymmetric face deformations; Face rehabilitation; Face deformation measure |
url | http://www.sciencedirect.com/science/article/pii/S2096579622000456 |
work_keys_str_mv | AT claudioferrari measuring3dfacedeformationsfromrgbimagesofexpressionrehabilitationexercises AT stefanoberretti measuring3dfacedeformationsfromrgbimagesofexpressionrehabilitationexercises AT pietropala measuring3dfacedeformationsfromrgbimagesofexpressionrehabilitationexercises AT albertodelbimbo measuring3dfacedeformationsfromrgbimagesofexpressionrehabilitationexercises |