Visual Experience in Temporal Situational Context: Method of Matching and Modeling in Design

Bibliographic Details
Main Author: Peng, Wenzhe
Other Authors: Nagakura, Takehiko
Format: Thesis
Published: Massachusetts Institute of Technology 2023
Online Access: https://hdl.handle.net/1721.1/151655
Description
Summary: Adhering closely to the phenomenological approach, a computational design system needs to incorporate visual experience to efficiently craft compelling human-centric visual designs. However, the computation of visual experience, which includes perception, cognition, emotion, and action, is challenging due to its subjective, non-deterministic, and unconscious nature. Recognizing that the temporal situational context, or an individual's perceived environment over time, can provide insights into their cognitive state and yield a more consistent visual experience than static contexts, I argue that by incorporating temporal situational context we can better match and model visual experiences, leading to effective and empirically grounded computational phenomenological design systems. Applications include experience-sensitive spatial design systems, human-centric human-computer interaction designs, and improved film pre-production quality and efficiency. To incorporate visual experience in design, this thesis proposes a versatile computational representation of temporal situational context, the Temporal Framed Scene Graph (TFSG), and examines it in two projects. The first project models human behavior in an augmented reality exhibition using a recurrent graph network, with behavior represented in TFSG format. The second project treats video as an effective medium for conveying the visual experience of a scene and uses TFSG-facilitated visual experience matching for shot planning and set design; its effectiveness is assessed with quantitative and qualitative tests in real-world filming scenarios. The project results further support the thesis argument, showing that TFSG effectively captures visual experience and provides a valuable foundation for exploring the matching and modeling of visual experience in design, leading to more efficient and human-centered design pipelines.
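The summary does not specify the TFSG data structure itself, but the idea of a scene graph per observed frame, chained over time, can be illustrated with a minimal sketch. All class and field names below are hypothetical, chosen only to convey the concept of a temporal sequence of scene graphs:

```python
from dataclasses import dataclass, field
from typing import Dict, List, Tuple


@dataclass
class SceneGraph:
    """One frame's scene graph: entities and their spatial relations."""
    # entity id -> label, e.g. {0: "door", 1: "painting"}
    nodes: Dict[int, str] = field(default_factory=dict)
    # (subject id, relation, object id), e.g. (1, "right-of", 0)
    edges: List[Tuple[int, str, int]] = field(default_factory=list)


@dataclass
class TemporalFramedSceneGraph:
    """A temporal situational context: scene graphs ordered by frame."""
    frames: List[SceneGraph] = field(default_factory=list)

    def add_frame(self, graph: SceneGraph) -> None:
        # Append the next observed frame's scene graph.
        self.frames.append(graph)

    def entities_over_time(self, entity_id: int) -> List[int]:
        # Frame indices in which the given entity appears.
        return [t for t, g in enumerate(self.frames)
                if entity_id in g.nodes]


# Example: a viewer first sees a door, then a painting to its right.
tfsg = TemporalFramedSceneGraph()
tfsg.add_frame(SceneGraph(nodes={0: "door"}))
tfsg.add_frame(SceneGraph(nodes={0: "door", 1: "painting"},
                          edges=[(1, "right-of", 0)]))
print(tfsg.entities_over_time(1))  # [1]
```

A sequence like this could then be fed frame by frame into a recurrent model, which is the role the recurrent graph network plays in the first project.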
This thesis contributes to fields including design studies, praxeology, cinematography, and AI by presenting (1) a versatile representation of temporal situational context that computationally describes the visual experience in a scene; (2) a method that supports film pre-production and human-centered spatial design through visually guided optimization; and (3) a context-driven multi-model system for modeling human behavior in an AR exhibition.