Effort inference and prediction by acoustic and movement descriptors in interactions with imaginary objects during Dhrupad vocal improvisation

Full description

In electronic musical instruments (EMIs), Mulder proposed the concept of “sound sculpting”, in which imaginary objects are manually sculpted to produce sounds. Although promising, the approach has had some limitations: driven by pure intuition, it mapped only the objects’ geometrical properties to sound, while effort, which is often regarded as a key factor of expressivity in music performance, was neglected. The aim of this paper is to enhance such digital interactions by accounting for the perceptual measure of effort that is conveyed through well-established gesture-sound links in the ecologically valid conditions of non-digital music performances. Thus, it reports on a systematic exploration of effort in Dhrupad vocal improvisation, in which singers are often observed to engage with melodic ideas by manipulating intangible, imaginary objects with their hands. The focus is on devising formalized descriptions that infer the amount of effort such interactions are perceived to require and that classify gestures as interactions with elastic versus rigid objects, based on original multimodal data collected in India for this study. Results suggest that a good part of the variance in both effort levels and gesture classes can be explained by a small set of statistically significant acoustic and movement features extracted from the raw data, and they lead to rejecting the null hypothesis that effort is unrelated to the musical context. This may have implications for how EMIs could benefit from effort as an intermediate mapping layer, and it naturally opens a discussion of whether physiological data may offer a more intuitive measure of effort in wearable technologies.
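The record does not disclose the paper’s exact feature set or statistical models. A minimal illustrative sketch of the kind of pipeline the abstract describes might look as follows; the descriptor names, the synthetic data, and the choice of a logistic-regression classifier are assumptions made for illustration only, not the method reported in the article.

```python
# Hypothetical sketch: classifying gestures as interactions with "elastic" vs
# "rigid" imaginary objects from a few movement and acoustic descriptors.
# Descriptors, data, and model choice are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 200  # number of annotated gesture segments (synthetic)

# Assumed per-gesture descriptors: mean hand speed, acceleration range,
# vocal loudness slope, and pitch-glide extent.
X = rng.normal(size=(n, 4))
# Synthetic labels: 1 = elastic-object interaction, 0 = rigid-object interaction.
y = (0.8 * X[:, 0] - 0.5 * X[:, 2] + rng.normal(scale=0.5, size=n) > 0).astype(int)

clf = LogisticRegression()
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validated accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Fitting on all data exposes the feature weights, a rough analogue of the
# "small set of statistically significant features" mentioned in the abstract.
clf.fit(X, y)
print("Feature coefficients:", np.round(clf.coef_[0], 2))
```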

Bibliographic Details
Main Author: Stella Paschalidou (ORCID: https://orcid.org/0000-0002-9775-2887)
Affiliation: Hellenic Mediterranean University, School of Music and Optoacoustic Technologies, Department of Music Technology and Acoustics, Greece
Format: Article
Language: English
Published: Cambridge University Press, 2022-01-01
Series: Wearable Technologies
ISSN: 2631-7176
DOI: 10.1017/wtc.2022.8
Subjects: Performance augmentation; Performance characterisation; Sensors; Real-time models; Control
Online Access: https://www.cambridge.org/core/product/identifier/S2631717622000081/type/journal_article