Computational model-based emotional state recognition to assist autistic children.


Bibliographic Details
Main Author: Teoh, Teik Toe.
Other Authors: Cho Siu-Yeung David
Format: Thesis
Language: English
Published: 2012
Online Access: https://hdl.handle.net/10356/48659
Description
Summary: Children diagnosed with autism, which affects one in every 165 children, are thought to lack, or to be impaired in, certain representational abilities. As a result, they have difficulty operating in our highly complex social environment and are, for the most part, unable to understand other people's emotions. People express their emotion states all the time, even when interacting with machines. These emotion states shape the decisions that we make, govern how we communicate with others, and affect our performance. The ability to attribute emotion states to others from their behaviour, and to use that knowledge to guide one's own actions and predict those of others, is known as emotion recognition. To allow children with autism to read and respond to the emotions of people around them, we propose a computer-based device, Cognitive Assistive Computational-based Emotional State Recognition, that helps autistic children understand, interpret and react to the emotions of the people they interact with. The system runs in real time, so computation time is vital: the system must support interactive training in which it learns and analyses human emotions on behalf of autistic children.

The principal contribution of this thesis is the real-time inference of a wide range of emotion states from head and facial displays in a video stream, both pre-recorded (the Mind Reading DVD) and from a live camera. In particular, the focus is on the inference of complex emotion states (agreeing, disagreeing, encouraging, discouraging and unsure), i.e., affective and cognitive states of mind that are not part of the set of basic emotions (in our case: neutral, joy, sadness and surprise). The automated emotion-state inference system is inspired by, and draws on, the fundamental role of emotion recognition in communication and decision-making. The thesis describes the design, implementation and validation of a computational model of emotion recognition. The design is based on the results of a number of experiments that we undertook to analyse the facial signals and dynamics of complex emotion states.

In this research, a device is developed in which a camera is connected to a computer or mobile PC. The software running on the computer or mobile PC receives images from the camera, recognizes the emotional states, and pronounces them, together with advice, to the child with autism through an earpiece. The whole recognition process runs in real time and the training is interactive, so that the knowledge of the system is updated continuously. This research is believed to be the first attempt to combine pattern recognition and machine learning with a neuroscience understanding of the interplay between cognitive and visual signals to solve the problems described above. The work begins with background on Autism Spectrum Disorders (ASD). A prototype of this assistive device for children with autism is then proposed. The system identifies a facial event in real time, extracts dynamic features from the facial expressions, and infers the underlying emotion state conveyed by the video segment.
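To make the described pipeline concrete, the following is a minimal sketch of a capture-detect-classify-announce loop, assuming OpenCV for camera capture and face detection. The functions `classify_emotion` and `announce` are hypothetical placeholders for the thesis's trained classifier and earpiece speech output, which are not part of this record.

```python
# Minimal sketch of the recognition loop described above, assuming OpenCV.
# `classify_emotion` and `announce` are hypothetical stand-ins for the
# thesis's trained classifier and the earpiece speech output.
import cv2

EMOTION_STATES = ["neutral", "joy", "sad", "surprise",        # basic
                  "agreeing", "disagreeing", "encouraging",   # complex
                  "discouraging", "unsure"]

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def classify_emotion(face_roi):
    # Stand-in: the real system extracts dynamic facial features across
    # frames and runs a trained classifier over EMOTION_STATES.
    return "neutral"

def announce(label):
    # Stand-in for text-to-speech delivered through the child's earpiece.
    print(f"Emotion state: {label}")

cap = cv2.VideoCapture(0)         # 0 = live camera; a file path also works
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in detector.detectMultiScale(gray, 1.3, 5):
        announce(classify_emotion(gray[y:y + h, x:x + w]))
cap.release()
```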
Further work worth pursuing includes a reaction advisor that uses a GUI to display the current emotion-state inference and a recommended action, both textually and graphically. This will include implementing the emotional indexer with partially observable Markov decision processes, so that the utility of actions is also learnt from data rather than hard-coded as in the current rule-based implementation, and that information is then used to suggest an appropriate reaction. We aim to develop the system to classify, and generalize to, new examples of other emotion-state classes with an accuracy and speed comparable to those of human recognition. With this system, an autistic child can use the aiding device, which tells them which facial emotion is being displayed and draws their attention to it during social communication. The experiments carried out show that we have achieved the key requirements of a mobile application: speed and efficiency. The system successfully classifies and generalizes to new examples of these classes with reasonable accuracy (about 75%) and speed (3 frames/sec), comparable to those of human recognition. The research presented here significantly advances the nascent ability of machines to infer cognitive-affective emotion states in real time from people's nonverbal expressions. By developing a real-time system for the inference of a wide range of emotion states beyond the basic emotions, we have widened the scope of human-computer interaction scenarios into which this technology can be integrated. This is an important step towards building socially and emotionally intelligent machines.
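The POMDP-based emotional indexer is only proposed here, not specified, so the sketch below shows the general idea under assumed, purely illustrative transition, observation and utility matrices: maintain a belief over hidden emotion states, update it with each noisy observation, and recommend the action with the highest expected utility (a greedy one-step approximation of a full POMDP policy).

```python
# Illustrative sketch of a POMDP-style emotional indexer. All numbers are
# assumptions for demonstration, not values from the thesis; in the
# proposed system the utilities would be learnt from data.
import numpy as np

states = ["agreeing", "disagreeing", "unsure"]     # hidden emotion states
observations = ["nod", "head-shake", "still"]      # observed head displays
actions = ["continue", "rephrase", "pause"]        # reactions to suggest

T = np.array([[0.8, 0.1, 0.1],    # T[s, s'] = P(next state s' | state s)
              [0.1, 0.8, 0.1],
              [0.2, 0.2, 0.6]])
O = np.array([[0.7, 0.1, 0.2],    # O[s, o] = P(observation o | state s)
              [0.1, 0.7, 0.2],
              [0.3, 0.3, 0.4]])
U = np.array([[ 1.0, -1.0,  0.0], # U[a, s] = utility of action a in state s
              [-0.5,  1.0,  0.5],
              [ 0.0,  0.5,  1.0]])

def update_belief(belief, obs_idx):
    """Bayes filter step: predict with T, then weight by the observation."""
    predicted = T.T @ belief
    unnormalized = O[:, obs_idx] * predicted
    return unnormalized / unnormalized.sum()

def recommend(belief):
    """Greedy one-step policy: maximize expected utility under the belief."""
    return actions[int(np.argmax(U @ belief))]

belief = np.full(len(states), 1.0 / len(states))   # start uniform
for obs in ["nod", "nod", "still"]:                # a toy observation stream
    belief = update_belief(belief, observations.index(obs))
    print(obs, "->", recommend(belief), belief.round(2))
```

A design point worth noting: because the belief is a distribution rather than a hard label, such an advisor can express uncertainty, for example recommending a pause when no emotion state clearly dominates.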