Voice conversion using deep neural networks

This thesis focuses on techniques to improve the performance of voice conversion. Voice conversion modifies the recorded speech of a source speaker towards a given target speaker, so that the resulting speech sounds like the target speaker while the language content is unchanged. This technology has been applied to create personalized voices in text-to-speech and virtual avatars, to speech-to-singing synthesis, and to spoofing attacks on speaker verification systems.

Bibliographic Details
Main Author: Nguyen, Quy Hy
Other Authors: Chng Eng Siong
Format: Thesis
Language: English
Published: 2017
Subjects:
Online Access:http://hdl.handle.net/10356/72102
_version_ 1811688760024236032
author Nguyen, Quy Hy
author2 Chng Eng Siong
author_facet Chng Eng Siong
Nguyen, Quy Hy
author_sort Nguyen, Quy Hy
collection NTU
description This thesis focuses on techniques to improve the performance of voice conversion. Voice conversion modifies the recorded speech of a source speaker towards a given target speaker, so that the resulting speech sounds like the target speaker while the language content is unchanged. This technology has been applied to create personalized voices in text-to-speech and virtual avatars, to speech-to-singing synthesis, and to spoofing attacks on speaker verification systems. To perform voice conversion, the usual approach is to create a conversion function that is applied to the source speaker’s speech features, such as timbre and prosodic features, to generate the corresponding target features. In the past decade, most voice conversion research has focused on spectral mapping, i.e. the conversion of features representing the timbre characteristics in a frame-by-frame manner. In chapter 3, we investigate a comprehensive approach that trains the conversion function using a DNN which considers both timbre and prosodic features simultaneously. For better modelling, we use high-dimensional spectral features. However, this further weakens our ability to robustly train a DNN, which typically requires a large amount of training data. To overcome the issue of limited training data, we propose a new pretraining process using an autoencoder. The experimental results show that the proposed comprehensive framework with pretraining outperforms conventional voice conversion systems, including the state-of-the-art GMM-based system. The technique introduced in chapter 3 only learns a DNN to convert between a single pair of speakers. To reduce the need for parallel training data for each new speaker pair, in chapter 4 we examine a novel DNN adaptation technique for voice conversion that includes two bias vectors representing the source and target speakers. With this configuration, conversion between new speaker pairs becomes possible. Our preliminary results show that conversion to new target speakers’ voices could be achieved.
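The chapter-3 idea of autoencoder pretraining followed by supervised fine-tuning can be illustrated with a deliberately tiny sketch. Everything here is an illustrative assumption rather than the thesis's actual setup: the linear layers, plain gradient descent, synthetic "spectral frames", and the layer sizes are all stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, HID, N = 24, 16, 400                      # feature dim, hidden units, frames

# Synthetic stand-ins for parallel source/target spectral frames.
X = rng.standard_normal((N, DIM))              # "source speaker" frames
A_true = rng.standard_normal((DIM, DIM)) * 0.3
Y = X @ A_true                                 # "target speaker" frames

def pretrain_autoencoder(X, hid, steps=800, lr=0.1):
    """Learn an encoder by reconstructing the input frames (unsupervised)."""
    W_enc = rng.standard_normal((X.shape[1], hid)) * 0.1
    W_dec = rng.standard_normal((hid, X.shape[1])) * 0.1
    for _ in range(steps):
        H = X @ W_enc                          # encode
        E = H @ W_dec - X                      # reconstruction error
        W_dec -= lr * H.T @ E / len(X)         # gradient steps on squared error
        W_enc -= lr * X.T @ (E @ W_dec.T) / len(X)
    return W_enc

def finetune_conversion(X, Y, W_enc, steps=800, lr=0.1):
    """Initialise the conversion net with the pretrained encoder,
    then fine-tune both layers on the parallel frames."""
    W1 = W_enc.copy()
    W2 = rng.standard_normal((W1.shape[1], Y.shape[1])) * 0.1
    for _ in range(steps):
        H = X @ W1
        E = H @ W2 - Y                         # conversion error
        W2 -= lr * H.T @ E / len(X)
        W1 -= lr * X.T @ (E @ W2.T) / len(X)
    return W1, W2

W1, W2 = finetune_conversion(X, Y, pretrain_autoencoder(X, HID))
mse = np.mean((X @ W1 @ W2 - Y) ** 2)          # conversion error on the toy data
```

The point of the sketch is the weight hand-off: an encoder trained only to reconstruct frames (no parallel data needed) becomes the first layer of the conversion network, which is then fine-tuned on the limited parallel corpus.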
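The chapter-4 bias-vector adaptation can likewise be sketched in a few lines. This is a hypothetical toy, not the thesis's model: a shared linear conversion layer is trained together with per-speaker bias vectors on one speaker pair, and a new pair is then accommodated by re-estimating only two bias vectors while the shared weights stay frozen.

```python
import numpy as np

rng = np.random.default_rng(1)
DIM, N = 24, 400

# Shared ground-truth conversion plus per-speaker bias vectors (all synthetic).
W_true = rng.standard_normal((DIM, DIM)) * 0.2

def make_pair():
    b_src = rng.standard_normal(DIM) * 0.3     # source-speaker bias
    b_tgt = rng.standard_normal(DIM) * 0.3     # target-speaker bias
    X = rng.standard_normal((N, DIM))
    Y = (X + b_src) @ W_true + b_tgt
    return X, Y

def fit(X, Y, W, bs, bt, steps=1500, lr=0.1, train_W=True):
    """Gradient descent on ||(X + bs) @ W + bt - Y||^2; optionally freeze W."""
    for _ in range(steps):
        E = (X + bs) @ W + bt - Y
        if train_W:
            W = W - lr * (X + bs).T @ E / len(X)
        bs = bs - lr * np.mean(E, axis=0) @ W.T
        bt = bt - lr * np.mean(E, axis=0)
    return W, bs, bt

# Train the shared weights and both biases on the first speaker pair.
X1, Y1 = make_pair()
W, bs1, bt1 = fit(X1, Y1, np.zeros((DIM, DIM)), np.zeros(DIM), np.zeros(DIM))

# Adapt to a brand-new pair: only the two bias vectors are re-estimated.
X2, Y2 = make_pair()
_, bs2, bt2 = fit(X2, Y2, W, np.zeros(DIM), np.zeros(DIM), train_W=False)
adapted_mse = np.mean(((X2 + bs2) @ W + bt2 - Y2) ** 2)
```

The design point is the parameter split: the expensive shared mapping is learned once, and each new speaker contributes only a low-dimensional bias vector, which is why far less parallel data is needed per new pair.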
first_indexed 2024-10-01T05:37:19Z
format Thesis
id ntu-10356/72102
institution Nanyang Technological University
language English
last_indexed 2024-10-01T05:37:19Z
publishDate 2017
record_format dspace
spelling ntu-10356/72102 2023-03-04T00:47:46Z Voice conversion using deep neural networks Nguyen, Quy Hy Chng Eng Siong School of Computer Science and Engineering DRNTU::Science DRNTU::Engineering::Computer science and engineering Master of Engineering (SCE) 2017-05-25T08:57:53Z 2017-05-25T08:57:53Z 2017 Thesis Nguyen, Q. H. (2017). Voice conversion using deep neural networks. Master's thesis, Nanyang Technological University, Singapore. http://hdl.handle.net/10356/72102 10.32657/10356/72102 en 56 p. application/pdf
spellingShingle DRNTU::Science
DRNTU::Engineering::Computer science and engineering
Nguyen, Quy Hy
Voice conversion using deep neural networks
title Voice conversion using deep neural networks
title_full Voice conversion using deep neural networks
title_fullStr Voice conversion using deep neural networks
title_full_unstemmed Voice conversion using deep neural networks
title_short Voice conversion using deep neural networks
title_sort voice conversion using deep neural networks
topic DRNTU::Science
DRNTU::Engineering::Computer science and engineering
url http://hdl.handle.net/10356/72102
work_keys_str_mv AT nguyenquyhy voiceconversionusingdeepneuralnetworks