I tried the pre-trained model (bneSeq2seqMoL-vctk-libritts460-oneshot) and converted the source wavs to target wavs from the demo provided with your paper. Your results in the paper are better than those from the model trained in the original paper. Why? Did you train the HiFi-GAN model again? Thank you!
No, we didn't fine-tune the HiFi-GAN vocoder; we took the code from this repo as is, with the vocoder checkpoint they provided and the recommended bneSeq2seqMoL-vctk-libritts460-oneshot model. I'm not sure what might have gone wrong or why your voice conversion results with this model were worse than the ones from our demo.
I used this model to reproduce the experiment you described (I took the source and reference voice samples from our demo) and got the same BNE-PPG-VC results as in our demo. See the results of my experiments here.
Perhaps the reason is that the output audio produced by the voice conversion model from the mentioned repo was loudness-normalized before it was put into our demo here, so in the demo the loudness might be lower and the quality might therefore seem better. Also note that our demo uses 16 kHz audio while the BNE-PPG-VC model outputs 24 kHz, so we also downsampled the audio before adding it to the demo. These are the only things I can think of.
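For reference, a minimal sketch of what that post-processing could look like in Python. The exact loudness target (here -23 LUFS) and the tooling (pyloudnorm, librosa, soundfile) are assumptions for illustration, not necessarily what was used for the demo:

```python
import librosa
import pyloudnorm as pyln
import soundfile as sf

# Load the 24 kHz output of the BNE-PPG-VC model
# (the filename is a placeholder).
wav, sr = sf.read("converted.wav")  # sr == 24000

# Loudness-normalize to a fixed target; -23 LUFS is an assumed
# example value, a common broadcast reference level.
meter = pyln.Meter(sr)
loudness = meter.integrated_loudness(wav)
wav = pyln.normalize.loudness(wav, loudness, -23.0)

# Downsample to the 16 kHz rate used in the demo.
wav_16k = librosa.resample(wav, orig_sr=sr, target_sr=16000)
sf.write("converted_16k.wav", wav_16k, 16000)
```

If you apply the same normalization and downsampling to your own outputs before comparing, the perceived quality gap should shrink or disappear.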