An integration framework for haptic feedback to improve facial expression on virtual human

Bibliographic Details
Main Authors: Basori, Ahmad Hoirul; Abdullah, Bade; Sunar, Mohd. Shahrizal; Nadzaari, Saari; Daman, Daut; Salam, Md. Sah
Format: Article
Published: 2012
Subjects:
Description
Summary: Most of the latest 3D humanoid models currently available can produce emotions only through facial expressions, gestures and voice. Only a few humanoid models are capable of conveying emotions haptically through tactile vibrations. This study proposes a system in which haptic feedback is integrated with visual and acoustic cues. The integrated framework is based on two major techniques: emotion-vibration mapping and facial expression synthesis. Emotion-vibration mapping is carried out by first scaling the joystick wavelength to the visible light spectrum. Then, a linear equation describing the magnitude force is derived from joystick wavelength and magnitude force data. Finally, the wavelength of the visible light spectrum is used as a parameter to compute the joystick wavelength through linear interpolation; emotions are then generated, and a complete classification table is stored for each emotion value. In facial expression synthesis, combinations of Action Units (AUs) from the Facial Action Coding System (FACS) are used to generate particular emotional expressions on the face of the 3D humanoid model. Each action unit is characterized by a specific face region, and each face region is given a distinct lighting colour to differentiate its appearance. The colour of the light is obtained from the emotion classification table generated during the emotion-vibration mapping process. The integration then proceeds by rendering the facial expression, generating an acoustic effect from an emotional sound, and adjusting the loudness level according to the emotion value. Finally, the magnitude force of the haptic device is integrated, being adjusted simultaneously once the visual and acoustic cues have been synchronized. In this study, a mind controller and a glove are used to capture user emotions in real time: the mind controller determines the type of emotion from the brain activity of the user, whereas the glove controls the intensity of the emotion. The experimental results show that 6r/% of the participants gave strongly positive responses to the system. In addition, 15 of 21 participants (71%) agreed with the classification of the magnitude force into the emotion representation. Most of the users remarked that a high magnitude force created a sensation similar to anger, whereas a low magnitude force created a more relaxing sensation.
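
The emotion-vibration mapping described in the summary reduces to two linear relations. The sketch below illustrates the idea under stated assumptions: the joystick wavelength range, the magnitude-force limits, and the emotion-to-wavelength and colour assignments are all illustrative placeholders, not values taken from the paper.

```python
# Illustrative sketch of the emotion-vibration mapping: a visible-light
# wavelength is linearly interpolated into the joystick wavelength range,
# and a second linear relation turns that into a magnitude force.
# The ranges, emotions and colours below are assumptions, not the paper's data.

VISIBLE_NM = (380.0, 750.0)        # visible light spectrum, in nanometres
JOYSTICK_WAVELENGTH = (0.0, 1.0)   # assumed normalised joystick wavelength range
FORCE_RANGE = (0.0, 10000.0)       # assumed magnitude-force range of the device

def lerp(x, src, dst):
    """Linearly map x from the src range into the dst range."""
    t = (x - src[0]) / (src[1] - src[0])
    return dst[0] + t * (dst[1] - dst[0])

def magnitude_force(light_nm):
    """Visible-light wavelength -> joystick wavelength -> magnitude force."""
    joystick_wavelength = lerp(light_nm, VISIBLE_NM, JOYSTICK_WAVELENGTH)
    return lerp(joystick_wavelength, JOYSTICK_WAVELENGTH, FORCE_RANGE)

# Assumed emotion classification table: emotion -> wavelength and light colour.
EMOTION_TABLE = {
    "anger":   {"nm": 700.0, "colour": "red"},
    "joy":     {"nm": 580.0, "colour": "yellow"},
    "relaxed": {"nm": 470.0, "colour": "blue"},
}
for entry in EMOTION_TABLE.values():
    entry["force"] = magnitude_force(entry["nm"])
```

With these assumed values, "anger" maps near the upper end of the force range and "relaxed" near the lower end, which is consistent with the participants' remarks about high and low magnitude forces.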
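A second sketch, reusing EMOTION_TABLE from above, shows how the three cues could be sequenced in the order the summary describes: the facial expression first, then the emotional sound scaled by the glove intensity, and finally the haptic magnitude force. The renderer, audio engine and joystick driver are print stubs; the AU sets for anger and joy are standard FACS combinations, while the "relaxed" set is a placeholder.

```python
# Hypothetical integration loop, reusing EMOTION_TABLE from the sketch above.
# Only the ordering of the cues (visual, then acoustic, then haptic) follows
# the summary; the back ends and AU sets are illustrative assumptions.

ACTION_UNITS = {
    "anger":   [4, 5, 7, 23],   # brow lowerer, upper-lid raiser, lid tightener, lip tightener
    "joy":     [6, 12],         # cheek raiser, lip-corner puller
    "relaxed": [43],            # eyes closed (placeholder choice)
}

def integrate_emotion(emotion, intensity):
    """emotion: from the mind controller; intensity in [0, 1]: from the glove."""
    entry = EMOTION_TABLE[emotion]

    # 1. Visual cue: synthesise the expression from the AU combination and
    #    light each AU's face region with the colour from the mapping table.
    print(f"render AUs {ACTION_UNITS[emotion]} with {entry['colour']} lighting")

    # 2. Acoustic cue: play the emotional sound, loudness scaled by intensity.
    print(f"play '{emotion}' sound at loudness {intensity:.2f}")

    # 3. Haptic cue: once visual and acoustic cues are synchronised, drive the
    #    joystick with the mapped magnitude force, scaled by the glove intensity.
    print(f"set joystick magnitude force to {entry['force'] * intensity:.0f}")

# Example: the mind controller reports 'anger', the glove a high intensity.
integrate_emotion("anger", 0.9)
```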