Towards Robust Neural Vocoding for Speech Generation: A Survey

Paper: arXiv
Authors: Po-chun Hsu, Chun-hsuan Wang, Andy T. Liu, Hung-yi Lee
Abstract: Recently, neural vocoders have been widely used in speech synthesis tasks, including text-to-speech and voice conversion. However, when there is a data distribution mismatch between training and inference, neural vocoders trained on real speech often degrade in voice quality in unseen scenarios. In this paper, we train four common neural vocoders, namely WaveNet, WaveRNN, FFTNet, and Parallel WaveGAN, each on five different datasets. To study the robustness of neural vocoders, we evaluate the models using acoustic features from seen/unseen speakers, seen/unseen languages, a text-to-speech model, and a voice conversion model.
We find that speaker variety is much more important than language variety for building a universal vocoder. Our experiments show that WaveNet and WaveRNN are more suitable for text-to-speech models, while Parallel WaveGAN is more suitable for voice conversion applications. Extensive subjective MOS results on naturalness for all vocoders are presented for future studies.

MOS Results & Audio Samples
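
The scores below are reported as mean opinion score (MOS) ± an interval. As a reference for how such numbers are typically obtained, here is a minimal Python sketch that computes a MOS and a 95% confidence interval from raw listener ratings; the exact interval definition, rating counts, and scale used for the tables below are assumptions, not taken from the paper.

```python
import numpy as np
from scipy import stats

def mos_with_ci(ratings, confidence=0.95):
    """Return the mean opinion score and the half-width of a
    Student-t confidence interval for one system/condition."""
    ratings = np.asarray(ratings, dtype=float)
    mean = ratings.mean()
    sem = stats.sem(ratings)  # standard error of the mean
    half_width = sem * stats.t.ppf((1.0 + confidence) / 2.0, len(ratings) - 1)
    return mean, half_width

# Hypothetical ratings on a 1-5 naturalness scale for one vocoder/test-set pair.
scores = [5, 4, 5, 4, 4, 5, 3, 4, 5, 4]
mean, ci = mos_with_ci(scores)
print(f"MOS = {mean:.2f} ± {ci:.2f}")
```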

Robustness to Human Speech

Seen Speakers and Seen Language
Model En_F En_M Ma_F En_L Lrg
WaveNet 4.78±0.10 4.71±0.11 4.63±0.12 4.72±0.10 4.70±0.13
WaveRNN 4.48±0.13 4.61±0.13 4.66±0.11 4.64±0.11 4.61±0.13
FFTNet 3.87±0.17 4.29±0.15 4.45±0.10 3.28±0.19 3.58±0.17
Parallel WaveGAN 4.59±0.12 4.29±0.17 4.41±0.12 4.29±0.15 4.11±0.16

Unseen Speakers and Seen Language
Model En_F En_M Ma_F En_L Lrg
WaveNet 2.27±0.14 2.86±0.17 3.27±0.16 4.25±0.17 4.35±0.15
WaveRNN 2.60±0.14 2.89±0.15 3.54±0.14 3.98±0.15 3.92±0.16
FFTNet 1.76±0.15 2.21±0.14 2.94±0.13 2.99±0.18 3.13±0.21
Parallel WaveGAN 2.35±0.15 2.85±0.16 2.88±0.14 3.80±0.21 3.85±0.17

Unseen Speakers and Unseen Language
Model En_F En_M Ma_F En_L Lrg
WaveNet 1.90±0.12 2.53±0.12 3.85±0.15 4.33±0.15 4.33±0.17
WaveRNN 2.53±0.13 2.62±0.12 3.30±0.15 4.30±0.16 4.16±0.17
FFTNet 1.56±0.09 1.75±0.12 2.64±0.16 2.67±0.17 3.37±0.17
Parallel WaveGAN 2.17±0.11 2.54±0.12 2.49±0.13 3.79±0.20 3.97±0.19


The Influence of Genders

Seen Gender and Seen Language
Model En_M En_F Ma_F
WaveNet 2.41±0.23 3.47±0.24 3.57±0.20
WaveRNN 2.85±0.21 3.49±0.21 4.08±0.20
FFTNet 2.01±0.24 2.45±0.21 3.56±0.14
Parallel WaveGAN 2.68±0.22 3.47±0.20 3.34±0.17

Unseen Gender and Seen Language
Model En_M En_F Ma_F
WaveNet 2.13±0.16 2.25±0.16 2.98±0.21
WaveRNN 2.36±0.20 2.29±0.15 3.01±0.20
FFTNet 1.52±0.15 1.97±0.20 2.34±0.15
Parallel WaveGAN 2.03±0.17 2.23±0.18 2.41±0.17

Seen Gender and Unseen Language
Model En_M En_F Ma_F
WaveNet 1.92±0.16 3.05±0.23 4.10±0.22
WaveRNN 2.78±0.18 3.12±0.21 3.77±0.18
FFTNet 1.74±0.17 2.00±0.17 3.40±0.17
Parallel WaveGAN 2.29±0.19 2.92±0.22 2.92±0.21

Unseen Gender and Unseen Language
Model En_M En_F Ma_F
WaveNet 1.88±0.16 2.01±0.16 3.59±0.20
WaveRNN 2.29±0.17 2.12±0.19 2.84±0.21
FFTNet 1.38±0.11 1.51±0.11 1.91±0.16
Parallel WaveGAN 2.06±0.16 2.17±0.15 2.05±0.17


Text-to-Speech

Model LJ En_F En_L Lrg Cond
WaveNet 4.10±0.19 2.59±0.24 3.54±0.20 3.66±0.21 4.21±0.16
WaveRNN 4.16±0.18 3.05±0.24 3.32±0.20 3.73±0.19 3.79±0.19
FFTNet 2.75±0.27 2.16±0.29 2.50±0.27 2.28±0.28 2.86±0.30
Parallel WaveGAN 3.81±0.20 3.17±0.21 3.60±0.20 3.19±0.20 3.38±0.20
Ground Truth 4.54±0.16


Voice Conversion

Model VCTK En_M En_F En_L Lrg
WaveNet 3.15±0.21 3.25±0.23 2.86±0.25 2.85±0.19 2.81±0.21
WaveRNN 3.54±0.20 3.21±0.23 2.98±0.23 2.88±0.22 2.90±0.21
FFTNet 2.71±0.22 2.19±0.21 2.30±0.23 2.28±0.23 2.51±0.21
Parallel WaveGAN 3.83±0.20 3.30±0.23 3.02±0.24 3.45±0.20 3.40±0.21
Griffin-Lim 2.72±0.21
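
Griffin-Lim serves as the non-neural baseline in the voice conversion comparison above. For reference, the sketch below shows one common way to invert a mel spectrogram back to audio with Griffin-Lim using librosa; the file name and feature configuration (sampling rate, FFT size, hop length, number of mel bands, iteration count) are illustrative assumptions, not the setup used in the paper.

```python
import librosa
import soundfile as sf

# Load a waveform, compute a mel spectrogram, and invert it back to audio
# with Griffin-Lim phase reconstruction. All parameters here are illustrative.
wav, sr = librosa.load("sample.wav", sr=22050)
mel = librosa.feature.melspectrogram(
    y=wav, sr=sr, n_fft=1024, hop_length=256, n_mels=80)
recon = librosa.feature.inverse.mel_to_audio(
    mel, sr=sr, n_fft=1024, hop_length=256, n_iter=60)
sf.write("griffin_lim.wav", recon, sr)
```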