Music generation with deep learning techniques
This report demonstrates the use of a deep convolutional generative adversarial network (DCGAN) to generate expressive music with dynamics. Existing deep learning models for music generation were reviewed; however, most prior research focused on musical composition and removed expressive attributes...
Main Author: Toh, Raymond Kwan How
Other Authors: Alexei Sourin
Format: Final Year Project (FYP)
Language: English
Published: Nanyang Technological University, 2021
Online Access: https://hdl.handle.net/10356/148097
Similar Items
- Deep learning techniques and sentiment analysis
  by: Kwan, Yu Ting
  Published: (2021)
- Integrity and Improvisation in the Music of Handel
  by: Harris, Ellen T.
  Published: (2018)
- Emotikon : audio production of the interpretation of human emotion through soundscape and music
  by: De Cotta, Timothy Alexander, et al.
  Published: (2011)
- Creating a visual music interface via emotion detection
  by: Lim, Clement Shi Hong.
  Published: (2010)
- Defences and threats in safe deep learning
  by: Chan, Alvin Guo Wei
  Published: (2021)