ARAUSv2: an expanded dataset and multimodal models of affective responses to augmented urban soundscapes
The ARAUS (Affective Responses to Augmented Urban Soundscapes) dataset consists of a five-fold cross-validation set and independent test set of subjective perceptual responses to augmented soundscapes presented as audio-visual stimuli. However, key limitations in its original release included a disproportionate number of participants being young university students and a relatively small test set. We aim to address this by publishing ARAUSv2, which adds responses to the cross-validation set from 60 participants drawn from an older, non-student population, as well as responses from additional participants in a substantially larger test set consisting of new urban soundscapes recorded in a variety of settings in Singapore. The additional responses were collected in a similar fashion to the initial release, with participants rating augmented soundscapes (made by digitally adding maskers to urban soundscape recordings) on how pleasant, annoying, eventful, uneventful, vibrant, monotonous, chaotic, calm, and appropriate they were. We also present a sample of multimodal prediction models for the ISO Pleasantness and Eventfulness of the augmented soundscapes in ARAUSv2. The multimodal models use participant-linked information such as demographics and responses to psychological questionnaires, as well as visual information from the stimuli, which the baseline models presented in the initial ARAUS dataset did not utilize.
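As a companion to the abstract, the sketch below illustrates how the ISO Pleasantness and Eventfulness targets mentioned above can be derived from the eight circumplex attribute ratings (pleasant, annoying, eventful, uneventful, vibrant, monotonous, chaotic, calm) via the projection defined in ISO/TS 12913-3. This is a minimal sketch under assumed conventions (a 1-5 rating scale, these dictionary keys, and normalisation to [-1, 1]); the exact preprocessing used for ARAUSv2 is defined in the paper and dataset documentation.

```python
import math

COS45 = math.cos(math.pi / 4)  # projection weight for the diagonal attributes

def iso_coordinates(r, scale_min=1.0, scale_max=5.0):
    """Project eight attribute ratings onto the ISO pleasantness/eventfulness
    axes (ISO/TS 12913-3) and normalise the result to [-1, 1]."""
    span = scale_max - scale_min
    norm = span * (1.0 + math.sqrt(2.0))  # equals 4 + sqrt(32) for a 1-5 scale
    pleasantness = ((r["pleasant"] - r["annoying"])
                    + COS45 * (r["calm"] - r["chaotic"])
                    + COS45 * (r["vibrant"] - r["monotonous"])) / norm
    eventfulness = ((r["eventful"] - r["uneventful"])
                    + COS45 * (r["chaotic"] - r["calm"])
                    + COS45 * (r["vibrant"] - r["monotonous"])) / norm
    return pleasantness, eventfulness

# Hypothetical example: a soundscape rated as fairly pleasant, calm and uneventful.
ratings = {"pleasant": 4, "annoying": 2, "eventful": 2, "uneventful": 4,
           "vibrant": 2, "monotonous": 3, "chaotic": 1, "calm": 4}
print(iso_coordinates(ratings))  # pleasantness > 0, eventfulness < 0
```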
Main Authors: | Ooi, Kenneth; Ong, Zhen-Ting; Lam, Bhan; Wong, Trevor; Gan, Woon-Seng; Watcharasupat, Karn N. |
---|---|
Other Authors: | School of Electrical and Electronic Engineering |
Format: | Conference Paper |
Language: | English |
Published: | 2023 |
Subjects: | Science::Physics::Acoustics; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Soundscape; Dataset; Regression; Deep Neural Network; Soundscape Augmentation; Auditory Masking |
Online Access: | https://hdl.handle.net/10356/168665 https://internoise2023.org/program/ |
author | Ooi, Kenneth; Ong, Zhen-Ting; Lam, Bhan; Wong, Trevor; Gan, Woon-Seng; Watcharasupat, Karn N. |
author2 | School of Electrical and Electronic Engineering |
collection | NTU |
description | The ARAUS (Affective Responses to Augmented Urban Soundscapes) dataset consists of a five-fold cross-validation set and independent test set of subjective perceptual responses to augmented soundscapes presented as audio-visual stimuli. However, key limitations in its original release included a disproportionate number of participants being young university students and a relatively small test set. We aim to address this by publishing ARAUSv2, which adds responses to the cross-validation set from 60 participants drawn from an older, non-student population, as well as responses from additional participants in a substantially larger test set consisting of new urban soundscapes recorded in a variety of settings in Singapore. The additional responses were collected in a similar fashion to the initial release, with participants rating augmented soundscapes (made by digitally adding maskers to urban soundscape recordings) on how pleasant, annoying, eventful, uneventful, vibrant, monotonous, chaotic, calm, and appropriate they were. We also present a sample of multimodal prediction models for the ISO Pleasantness and Eventfulness of the augmented soundscapes in ARAUSv2. The multimodal models use participant-linked information such as demographics and responses to psychological questionnaires, as well as visual information from the stimuli, which the baseline models presented in the initial ARAUS dataset did not utilize. |
format | Conference Paper |
id | ntu-10356/168665 |
institution | Nanyang Technological University |
language | English |
publishDate | 2023 |
record_format | dspace |
conference | 52nd International Congress and Exposition on Noise Control Engineering (Inter-Noise 2023) |
funding agencies | Ministry of National Development (MND); National Research Foundation (NRF) |
funding statement | This work was supported by the National Research Foundation, Singapore, and Ministry of National Development, Singapore under the Cities of Tomorrow R&D Program (CoT Award: COT-V4-2020-1). |
grant number | COT-V4-2020-1 |
version | Submitted/Accepted version |
citation | Ooi, K., Ong, Z., Lam, B., Wong, T., Gan, W. & Watcharasupat, K. N. (2023). ARAUSv2: an expanded dataset and multimodal models of affective responses to augmented urban soundscapes. 52nd International Congress and Exposition on Noise Control Engineering (Inter-Noise 2023). https://hdl.handle.net/10356/168665 |
doi | 10.21979/N9/9OTEVX |
date accessioned | 2023-09-18T01:40:25Z |
rights | © 2023 The Author(s). All rights reserved. This paper was published in the Proceedings of 52nd International Congress and Exposition on Noise Control Engineering (Inter-Noise 2023) and is made available with permission of The Author(s). |
file format | application/pdf |
title | ARAUSv2: an expanded dataset and multimodal models of affective responses to augmented urban soundscapes |
topic | Science::Physics::Acoustics; Engineering::Computer science and engineering::Computing methodologies::Artificial intelligence; Soundscape; Dataset; Regression; Deep Neural Network; Soundscape Augmentation; Auditory Masking |
url | https://hdl.handle.net/10356/168665 https://internoise2023.org/program/ |