Creating musical features using multi-faceted, multi-task encoders based on transformers

Abstract
Computational machine intelligence approaches have enabled a variety of music-centric technologies in support of creating, sharing, and interacting with music content. Strong performance on specific downstream application tasks, such as music genre detection and music emotion recognition, is paramount to ensuring broad capabilities for computational music understanding and Music Information Retrieval. Traditional approaches have relied on supervised learning to train models for these music-related tasks. However, such approaches require copious annotated data and may still provide insight into only one view of music, namely the view tied to the specific task at hand. We present a new model for generating audio-musical features that support music understanding, leveraging self-supervision and cross-domain learning. After pre-training via masked reconstruction of musical input features with self-attention bidirectional transformers, output representations are fine-tuned on several downstream music understanding tasks. Results show that the features generated by our multi-faceted, multi-task music transformer model, which we call M3BERT, tend to outperform other audio and music embeddings on several diverse music-related tasks, indicating the potential of self-supervised and semi-supervised learning toward a more generalized and robust computational approach to modeling music. Our work can offer a starting point for many music-related modeling tasks, with potential applications in learning deep representations and enabling robust music technologies.
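The record gives no implementation details, but the pre-training recipe the abstract describes (mask random frames of the musical input features, then train a bidirectional transformer encoder to reconstruct them) can be illustrated with a minimal sketch. Everything below is an assumption for illustration: PyTorch, the feature dimension, masking rate, and model sizes are placeholders, not the authors' M3BERT code.

    # Minimal sketch of masked-reconstruction pre-training with a
    # bidirectional transformer encoder. Hyperparameters are illustrative.
    import torch
    import torch.nn as nn

    FEATURE_DIM = 80   # assumed per-frame feature size (e.g., mel bands)
    MODEL_DIM = 256    # assumed transformer width
    MASK_PROB = 0.15   # assumed masking rate

    class MaskedReconstructionModel(nn.Module):
        def __init__(self):
            super().__init__()
            self.proj_in = nn.Linear(FEATURE_DIM, MODEL_DIM)
            layer = nn.TransformerEncoderLayer(
                d_model=MODEL_DIM, nhead=4, batch_first=True)
            # No causal mask: every frame attends to past and future
            # frames, which is what makes the encoder bidirectional.
            self.encoder = nn.TransformerEncoder(layer, num_layers=4)
            self.proj_out = nn.Linear(MODEL_DIM, FEATURE_DIM)

        def forward(self, frames):  # frames: (batch, time, FEATURE_DIM)
            return self.proj_out(self.encoder(self.proj_in(frames)))

    def pretrain_step(model, frames, optimizer):
        """One self-supervised step: zero random frames, reconstruct them."""
        mask = torch.rand(frames.shape[:2]) < MASK_PROB           # (batch, time)
        corrupted = frames.masked_fill(mask.unsqueeze(-1), 0.0)   # hide frames
        recon = model(corrupted)
        # Reconstruction loss is computed only on the masked positions.
        loss = nn.functional.l1_loss(recon[mask], frames[mask])
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

    model = MaskedReconstructionModel()
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    dummy = torch.randn(8, 400, FEATURE_DIM)  # stand-in: 8 clips, 400 frames
    print(pretrain_step(model, dummy, optimizer))

In the multi-task fine-tuning stage the abstract mentions, the reconstruction head would presumably be swapped for task-specific heads (genre, emotion, etc.) trained on labeled data; the sketch above covers only the self-supervised stage.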

Bibliographic Details
Main Authors: Timothy Greer, Xuan Shi, Benjamin Ma, Shrikanth Narayanan
Format: Article
Language: English
Published: Nature Portfolio, 2023-07-01
Series: Scientific Reports
Online Access: https://doi.org/10.1038/s41598-023-36714-z
ISSN: 2045-2322
Collection: Directory of Open Access Journals (DOAJ)
Author Affiliation: Signal Analysis and Interpretation Lab, University of Southern California (all four authors)