MetaMGC: a music generation framework for concerts in metaverse
Abstract In recent years, metaverse concerts have attracted widespread popular interest. However, existing metaverse concert efforts often focus on immersive visual experiences and give little consideration to the musical and aural experience, even though it is the music itself and the immersive listening experience that deserve the most attention at a concert. Enhancing intelligent and immersive musical experiences is therefore essential for the further development of the metaverse. With this in mind, we propose a metaverse concert generation framework that spans intelligent music generation, stereo (binaural) conversion, and sound field design for virtual concert stages. First, by combining ideas from reinforcement learning and value functions, we improve the Transformer-XL music generation network and train it on all the music in the POP909 dataset. Experiments show that both improved algorithms outperform the original method on objective and subjective evaluation metrics. In addition, this paper validates a neural rendering method for generating spatial audio, based on a binaural-integrated, fully convolutional neural network; the purely data-driven end-to-end model proves more reliable than traditional spatial audio generation methods such as HRTF-based rendering. Finally, we propose a metadata-based audio rendering algorithm to simulate real-world acoustic environments. (Hedged, illustrative code sketches of these three components follow the record fields below.)
Main Authors: | Cong Jin, Fengjuan Wu, Jing Wang, Yang Liu, Zixuan Guan, Zhe Han |
---|---|
Format: | Article |
Language: | English |
Published: | SpringerOpen, 2022-12-01 |
Series: | EURASIP Journal on Audio, Speech, and Music Processing |
Subjects: | Metaverse concert; Transformer-XL; Audio digital twin; Neural network; Audio rendering |
Online Access: | https://doi.org/10.1186/s13636-022-00261-8 |
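The record only summarizes the approach, so the following is a minimal, hypothetical sketch of how reinforcement-learning ideas with a value-function baseline might be combined with a Transformer-XL-style music language model, as described in the abstract. The model class, toy reward, token vocabulary, and all hyperparameters are illustrative assumptions (PyTorch), not the authors' implementation.

```python
# Hypothetical sketch: policy-gradient fine-tuning of a small Transformer-style
# music language model with a learned value (baseline) head. Everything here is a
# placeholder standing in for the paper's Transformer-XL setup on POP909.
import torch
import torch.nn as nn
import torch.nn.functional as F

VOCAB = 512          # assumed size of an event-token vocabulary
SEQ_LEN = 64

class TinyMusicLM(nn.Module):
    """Small causal Transformer standing in for Transformer-XL."""
    def __init__(self, vocab=VOCAB, d_model=128, n_layers=2, n_heads=4):
        super().__init__()
        self.embed = nn.Embedding(vocab, d_model)
        layer = nn.TransformerEncoderLayer(d_model, n_heads, 256, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)
        self.lm_head = nn.Linear(d_model, vocab)    # policy: next-token logits
        self.value_head = nn.Linear(d_model, 1)     # value baseline per step

    def forward(self, tokens):
        causal = nn.Transformer.generate_square_subsequent_mask(tokens.size(1))
        h = self.encoder(self.embed(tokens), mask=causal)
        return self.lm_head(h), self.value_head(h).squeeze(-1)

def toy_reward(seq):
    """Placeholder 'musicality' reward: fraction of small melodic steps."""
    steps = (seq[:, 1:] - seq[:, :-1]).abs()
    return (steps <= 2).float().mean(dim=1)

def reinforce_step(model, opt, batch_size=8):
    """One REINFORCE update using the value head as a learned baseline."""
    tokens = torch.randint(0, VOCAB, (batch_size, 1))   # random start token
    log_probs, values = [], []
    for _ in range(SEQ_LEN - 1):
        logits, value = model(tokens)
        dist = torch.distributions.Categorical(logits=logits[:, -1])
        nxt = dist.sample()
        log_probs.append(dist.log_prob(nxt))
        values.append(value[:, -1])
        tokens = torch.cat([tokens, nxt.unsqueeze(1)], dim=1)

    reward = toy_reward(tokens)                          # one scalar per sequence
    log_probs = torch.stack(log_probs, dim=1)
    values = torch.stack(values, dim=1)
    advantage = reward.unsqueeze(1) - values.detach()    # centre by the baseline
    policy_loss = -(advantage * log_probs).mean()
    value_loss = F.mse_loss(values, reward.unsqueeze(1).expand_as(values))
    loss = policy_loss + 0.5 * value_loss
    opt.zero_grad(); loss.backward(); opt.step()
    return reward.mean().item()

model = TinyMusicLM()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
print("mean toy reward:", reinforce_step(model, opt))
```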
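Similarly, a minimal sketch of the fully convolutional, binaural-integrated rendering idea: a dilated 1-D convolutional network that maps a mono source waveform plus a listener-relative position to left/right ear signals. The architecture and the conditioning scheme are assumptions for illustration, not the paper's network.

```python
# Hypothetical sketch: a fully convolutional network that maps a mono source signal,
# conditioned on listener-relative position, to a two-channel (binaural) waveform.
import torch
import torch.nn as nn

class BinauralCNN(nn.Module):
    def __init__(self, cond_dim=3, channels=64, n_blocks=4, kernel=15):
        super().__init__()
        # 1 mono audio channel + cond_dim position channels broadcast over time
        self.inp = nn.Conv1d(1 + cond_dim, channels, kernel, padding=kernel // 2)
        self.blocks = nn.ModuleList(
            nn.Conv1d(channels, channels, kernel,
                      padding=(kernel // 2) * d, dilation=d)
            for d in (1, 2, 4, 8)[:n_blocks]
        )
        self.out = nn.Conv1d(channels, 2, 1)   # left and right ear signals

    def forward(self, mono, position):
        # mono: (batch, 1, time); position: (batch, cond_dim), source pos. rel. to head
        cond = position.unsqueeze(-1).expand(-1, -1, mono.size(-1))
        x = torch.relu(self.inp(torch.cat([mono, cond], dim=1)))
        for block in self.blocks:
            x = x + torch.relu(block(x))       # residual dilated convolutions
        return torch.tanh(self.out(x))         # (batch, 2, time) binaural output

# Toy usage: render one second of 16 kHz audio for a source to the listener's left.
net = BinauralCNN()
mono = torch.randn(1, 1, 16000)
pos = torch.tensor([[-1.0, 0.0, 0.0]])         # (x, y, z) in head-centric metres
binaural = net(mono, pos)
print(binaural.shape)                          # torch.Size([1, 2, 16000])
```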
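Finally, a toy sketch of metadata-based audio rendering: per-object metadata (position, reverberation time) drives distance attenuation, propagation delay, and a crude reverberant tail. The metadata fields and constants are invented for illustration and do not reflect the paper's actual algorithm.

```python
# Hypothetical sketch: metadata-driven rendering of one audio object. Distance gain,
# propagation delay, and a single recirculating echo stand in for a room simulation.
import numpy as np

SPEED_OF_SOUND = 343.0   # m/s
SAMPLE_RATE = 16000

def render_object(signal, metadata):
    """signal: mono float array; metadata: dict with 'position' (x, y, z) in metres
    relative to the listener and 'rt60' reverberation time in seconds."""
    distance = max(np.linalg.norm(metadata["position"]), 0.1)
    gain = 1.0 / distance                              # inverse-distance attenuation
    delay = int(distance / SPEED_OF_SOUND * SAMPLE_RATE)
    out = np.zeros(len(signal) + delay)
    out[delay:] = gain * signal                        # direct path

    # Crude reverberant tail: one feedback echo whose decay roughly matches rt60.
    echo_gap = int(0.05 * SAMPLE_RATE)                 # 50 ms between reflections
    decay = 10 ** (-3 * 0.05 / metadata["rt60"])       # -60 dB after rt60 seconds
    for i in range(echo_gap, len(out)):
        out[i] += decay * out[i - echo_gap]
    return out

# Toy usage with made-up metadata for a source 3 m in front of the listener.
dry = np.random.randn(SAMPLE_RATE)                     # 1 s of noise as a stand-in
wet = render_object(dry, {"position": (0.0, 3.0, 0.0), "rt60": 0.6})
print(len(wet), float(np.max(np.abs(wet))))
```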
author | Cong Jin, Fengjuan Wu, Jing Wang, Yang Liu, Zixuan Guan, Zhe Han |
collection | DOAJ |
format | Article |
id | doaj.art-a0b787342d504f6e85c8c0bf2c9d3f67 |
institution | Directory Open Access Journal |
issn | 1687-4722 |
language | English |
publishDate | 2022-12-01 |
publisher | SpringerOpen |
record_format | Article |
series | EURASIP Journal on Audio, Speech, and Music Processing |
affiliations | Cong Jin, Fengjuan Wu, Yang Liu, Zixuan Guan, Zhe Han: School of Information and Communication Engineering, Communication University of China; Jing Wang: School of Information and Electronics, Beijing Institute of Technology |
title | MetaMGC: a music generation framework for concerts in metaverse |
topic | Metaverse concert; Transformer-XL; Audio digital twin; Neural network; Audio rendering |
url | https://doi.org/10.1186/s13636-022-00261-8 |