CUTE: a Concatenative Method for Voice Conversion Using Exemplar-based Unit Selection

The 41st IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)

Publication date: March 1, 2016

Zeyu Jin, Adam Finkelstein, Stephen DiVerdi, Jingwan (Cynthia) Lu, Gautham Mysore

State-of-the-art voice conversion methods re-synthesize voice from spectral representations such as MFCCs and STRAIGHT, thereby introducing muffled artifacts. We propose a method that circumvents this concern using concatenative synthesis coupled with exemplar-based unit selection. Given parallel speech from source and target speakers as well as a new query from the source, our method stitches together pieces of the target voice. It optimizes for three goals: matching the query, using long consecutive segments, and smooth transitions between the segments. To achieve these goals, we perform unit selection at the frame level and introduce triphone-based preselection that greatly reduces computation and enforces selection of long, contiguous pieces. Our experiments show that the proposed method has better quality than baseline methods, while preserving high individuality.
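The three goals in the abstract map naturally onto a Viterbi-style dynamic program over candidate target frames: a target cost (match the query), a concatenation cost (smooth transitions), and a discount for picking consecutive target frames (long contiguous segments). The sketch below is a minimal illustration of that kind of frame-level unit selection, not the paper's actual implementation; the cost functions, weights, and triphone preselection details are simplified assumptions.

```python
import numpy as np

def select_units(query, target_units, w_concat=1.0, w_consec=0.5):
    """Viterbi-style unit selection: pick one target frame per query frame,
    minimizing target cost + concatenation cost, with a bonus (cost discount)
    for choosing consecutive target frames, which encourages long runs.
    Costs and weights here are illustrative, not those of the paper."""
    T, N = len(query), len(target_units)
    # Target cost: distance between each query frame and each candidate unit.
    target_cost = np.linalg.norm(
        query[:, None, :] - target_units[None, :, :], axis=2)
    # Concatenation cost: spectral distance between any pair of units.
    concat = np.linalg.norm(
        target_units[:, None, :] - target_units[None, :, :], axis=2)
    cost = target_cost[0].copy()          # best cost ending at each unit
    back = np.zeros((T, N), dtype=int)    # backpointers for path recovery
    for t in range(1, T):
        trans = cost[:, None] + w_concat * concat  # (prev_unit, cur_unit)
        # Discount transitions where unit j follows unit j-1, i.e. the
        # selection continues a contiguous segment of the target speech.
        idx = np.arange(1, N)
        trans[idx - 1, idx] -= w_consec
        back[t] = np.argmin(trans, axis=0)
        cost = trans[back[t], np.arange(N)] + target_cost[t]
    # Backtrack the lowest-cost path of target frames.
    path = [int(np.argmin(cost))]
    for t in range(T - 1, 0, -1):
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

In this toy setup, a query that exactly matches a contiguous slice of the target frames is recovered as that contiguous run, because adjacent-frame transitions are discounted. The paper's triphone-based preselection would additionally prune the candidate set per query frame before running such a search, shrinking `N` and further biasing selection toward long consecutive pieces.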


Research Areas: Audio, Human Computer Interaction