A COMPARISON OF DISCRETE AND SOFT SPEECH UNITS FOR IMPROVED VOICE CONVERSION
Authors: Benjamin van Niekerk, Marc-André Carbonneau, Julian Zaïdi, Matthew Baas, Hugo Seuté, Herman Kamper
Abstract: The goal of voice conversion is to transform source speech into a target voice while keeping the content unchanged. In this paper, we focus on self-supervised representation learning for voice conversion. Specifically, we compare discrete and soft speech units as input features. We find that discrete representations effectively remove speaker information but also discard some linguistic content, leading to mispronunciations. As a solution, we propose soft speech units. To learn soft units, we predict a distribution over discrete speech units. By modeling uncertainty, soft units capture more content information, improving the intelligibility and naturalness of converted speech.
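The distinction between discrete and soft units can be illustrated with a minimal sketch. The shapes, the random projection, and the variable names below are hypothetical stand-ins (a trained model would learn the projection and use real backbone features); the point is that a discrete unit is the argmax of the per-frame logits, while a soft unit keeps the full predicted distribution:

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax."""
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

# Hypothetical sizes: T frames of D-dim backbone features, K discrete units.
T, D, K = 5, 8, 100
rng = np.random.default_rng(0)
features = rng.normal(size=(T, D))  # stand-in for backbone (e.g. HuBERT) features
proj = rng.normal(size=(D, K))      # stand-in for a learned linear projection

logits = features @ proj            # (T, K) scores over the unit inventory

# Discrete units: a hard assignment, one unit id per frame.
discrete = logits.argmax(axis=-1)   # shape (T,)

# Soft units: the full distribution over units per frame,
# preserving uncertainty that the hard assignment throws away.
soft = softmax(logits)              # shape (T, K), each row sums to 1
```

Because each soft unit is a distribution rather than a single index, frames that fall between two units (e.g. ambiguous phone boundaries) retain partial information about both, which is the mechanism the abstract credits for fewer mispronunciations.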
If you are having trouble listening to the audio, try refreshing the page.
Intra-lingual - English
In this section, we present some speech samples used in the intra-lingual subjective evaluation.
We focus on any-to-one conversion using LJSpeech as the target and LibriSpeech dev-clean as the source speech.
We compare discrete and soft speech units as well as two baselines.
Cross-lingual - French
Cross-lingual - Afrikaans
In this section, we present some cross-lingual speech samples for Afrikaans (one of South Africa's official languages).
We use LJSpeech as the target and a South African languages corpus as the source speech.
We compare HuBERT-soft and HuBERT-discrete.
Copyright © 2021 Ubisoft Entertainment Inc. All rights reserved.