Summary: | Several attempts have been made to synthesize speech from text. However, existing methods tend to generate speech that sounds artificial and lacks emotional content. In this project, we investigate using Generative Adversarial Networks (GANs) to generate emotional speech.
WaveGAN (2019) was a first attempt at generating speech from raw audio waveforms. It produced natural-sounding audio, including speech, bird chirps, and drums. In this project, we applied WaveGAN to emotional speech data from The Ryerson Audio-Visual Database of Emotional Speech and Song (RAVDESS), using all 8 categories of emotion. We modified WaveGAN with two conditioning strategies: Sparse Vector Conditioning and Auxiliary Classifiers. In experiments with human listeners, we found that these methods substantially helped subjects identify the generated emotions correctly and improved the intelligibility and quality of the generated samples.