Music Gesture for Visual Sound Separation

Recent deep learning approaches have achieved impressive performance on visual sound separation tasks. However, these approaches are mostly built on appearance and optical-flow-like motion feature representations, which exhibit limited ability to find correlations between audio signals and visual points, especially when separating multiple instruments of the same type, such as multiple violins in a scene. To address this, we propose "Music Gesture," a keypoint-based structured representation that explicitly models the body and finger movements of musicians as they perform music. We first adopt a context-aware graph network to integrate visual semantic context with body dynamics, and then apply an audio-visual fusion model to associate body movements with the corresponding audio signals. Experimental results on three music performance datasets show: 1) strong improvements on benchmark metrics for hetero-musical separation tasks (i.e., different instruments); 2) a new ability to perform effective homo-musical separation for piano, flute, and trumpet duets, which to the best of our knowledge has not been achieved with alternative methods.
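
The abstract describes a two-stage pipeline: a context-aware graph network that turns musician keypoints plus visual semantic context into a motion feature, followed by an audio-visual fusion model that conditions separation of the audio mixture on that motion. The PyTorch sketch below shows one plausible wiring of such a pipeline; every module name, the learnable joint adjacency, feature-wise modulation as the fusion mechanism, and all dimensions (25 joints, 256 frequency bins, etc.) are illustrative assumptions, not the authors' released implementation.

```python
# Minimal sketch of the two-stage pipeline described in the abstract.
# All module names, shapes, and hyperparameters are illustrative
# assumptions, not the paper's implementation.
import torch
import torch.nn as nn

class ContextAwareGraphNet(nn.Module):
    """Fuses per-joint body dynamics with a global visual context vector
    via one step of message passing over an assumed skeleton topology."""
    def __init__(self, num_joints=25, coord_dim=2, ctx_dim=512, hid_dim=256):
        super().__init__()
        self.embed = nn.Linear(coord_dim, hid_dim)    # per-joint coordinate embedding
        self.ctx_proj = nn.Linear(ctx_dim, hid_dim)   # visual semantic context
        self.gcn = nn.Linear(hid_dim, hid_dim)        # message-passing update
        # learnable (soft) adjacency over joints, initialized uniform
        self.adj = nn.Parameter(torch.full((num_joints, num_joints), 1.0 / num_joints))

    def forward(self, keypoints, context):
        # keypoints: (B, T, J, 2) body/finger joints; context: (B, ctx_dim)
        h = self.embed(keypoints)                         # (B, T, J, H)
        h = h + self.ctx_proj(context)[:, None, None, :]  # inject appearance context
        h = torch.relu(self.gcn(torch.einsum('ij,btjh->btih', self.adj, h)))
        return h.mean(dim=2)                              # (B, T, H) motion feature

class AudioVisualFusion(nn.Module):
    """Predicts a time-frequency mask for one source, conditioned on motion."""
    def __init__(self, hid_dim=256, freq_bins=256):
        super().__init__()
        self.audio_enc = nn.Conv1d(freq_bins, hid_dim, kernel_size=3, padding=1)
        self.mask_head = nn.Conv1d(hid_dim, freq_bins, kernel_size=3, padding=1)

    def forward(self, mix_spec, motion):
        # mix_spec: (B, F, T) magnitude spectrogram of the audio mixture
        # motion:   (B, T, H) from the graph net (assumed time-aligned with audio)
        a = self.audio_enc(mix_spec)                  # (B, H, T)
        fused = a * motion.transpose(1, 2)            # feature-wise modulation
        mask = torch.sigmoid(self.mask_head(fused))   # (B, F, T) values in [0, 1]
        return mask * mix_spec                        # separated source magnitude
```

In use, per-frame keypoints from an off-the-shelf pose estimator and an appearance feature from a video backbone would feed the graph network, and the predicted mask would be applied to the mixture spectrogram before an inverse STFT; this mask-based setup is one common realization of the audio-visual separation the abstract outlines.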

Bibliographic Details
Main Authors: Gan, Chuang; Huang, Deng; Zhao, Hang; Tenenbaum, Joshua B.; Torralba, Antonio
Other Authors: MIT-IBM Watson AI Lab; Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science
Format: Article (Conference Paper)
Language: English
Published: Institute of Electrical and Electronics Engineers (IEEE), 2021
Published in: 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition
Citation: Gan, Chuang et al. "Music Gesture for Visual Sound Separation." 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, June 2020, Seattle, Washington. Institute of Electrical and Electronics Engineers, August 2020. © 2020 IEEE
DOI: http://dx.doi.org/10.1109/cvpr42600.2020.01049
ISBN: 9781728171685
License: Creative Commons Attribution-Noncommercial-Share Alike (http://creativecommons.org/licenses/by-nc-sa/4.0/)
Online Access: https://hdl.handle.net/1721.1/130393