Autonomous soundscape augmentation with multimodal fusion of visual and participant-linked inputs
Autonomous soundscape augmentation systems typically use trained models to pick optimal maskers to effect a desired perceptual change. While acoustic information is paramount to such systems, contextual information, including participant demographics and the visual environment, also influences acous...
| Main Authors | Ooi, Kenneth; Watcharasupat, Karn; Lam, Bhan; Ong, Zhen-Ting; Gan, Woon-Seng |
| --- | --- |
| Other Authors | School of Electrical and Electronic Engineering |
| Format | Conference Paper |
| Language | English |
| Published | 2023 |
| Online Access | https://hdl.handle.net/10356/165017 |
Similar Items

- ARAUSv2: an expanded dataset and multimodal models of affective responses to augmented urban soundscapes
  by: Ooi, Kenneth, et al. Published: (2023)
- Effect of masker selection schemes on the perceived affective quality of soundscapes: a pilot study
  by: Ong, Zhen-Ting, et al. Published: (2023)
- Probably pleasant? A neural-probabilistic approach to automatic masker selection for urban soundscape augmentation
  by: Ooi, Kenneth, et al. Published: (2022)
- A benchmark comparison of perceptual models for soundscapes on a large-scale augmented soundscape dataset
  by: Ooi, Kenneth, et al. Published: (2023)
- Effects of adding natural sounds to urban noises on the perceived loudness of noise and soundscape quality
  by: Hong, Joo Young, et al. Published: (2021)