StarGANv2-VC vs tt-vae-gan

| | StarGANv2-VC | tt-vae-gan |
|---|---|---|
| Mentions | 3 | 4 |
| Stars | 458 | 64 |
| Growth | - | - |
| Activity | 1.3 | 1.8 |
| Last Commit | 12 months ago | over 2 years ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
StarGANv2-VC
-
[D] What's the best speech to speech deep fake voice project?
So far I've only been able to find StarGANv2, which one redditor used to create this. Is this the best there is, or are there better alternatives?
-
[R] State-of-the-art voice cloning
I used this to make this
-
Use deep fake tech to say stuff with your favorite characters
This looks like it was previously known as Vocodes, made by echelon, who is here on HN:
https://news.ycombinator.com/item?id=23965787
The code repos used are listed in their credits section, and it looks like a mixture of (customised?) Tacotron2, Glow-TTS, HiFi-GAN, and others. Videos are generated using Wav2Lip.
Text-To-Speech (TTS) has improved greatly over the past several years, but there are still a lot of metallic sounds in "pure" TTS implementations. I've started exploring voice style conversion, otherwise known as "voice cloning", and there are some interesting repos out there with decent results. These work differently from TTS: instead of typing out the text to be spoken, you pass in an audio file of what you want the cloned speaker to say, and the system outputs an audio file with the same content (words, intonation) but a different speaker identity.
This may make it easier to get the right cadence and emotion in the generated audio, since text doesn't capture emotion and intonation. I suspect game character audio will use more voice-style conversion instead of pure TTS, simply to get the right emotional cadence in the lines being delivered.
Some interesting voice style conversion repos (in no order, just a random selection if anyone is interested in exploring):
https://github.com/yl4579/StarGANv2-VC
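Systems like the two compared here typically operate on mel-spectrograms rather than raw waveforms: the conversion model maps a source mel-spectrogram to one with the target speaker's timbre, and a vocoder turns it back into audio. As a rough illustration of that front-end (not the code of either repo; the FFT size, hop length, and mel count below are common illustrative defaults), here is a minimal numpy sketch of turning a waveform into a log-mel-spectrogram:

```python
import numpy as np

def hz_to_mel(f):
    # HTK mel-scale formula
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(sr, n_fft, n_mels):
    # Triangular filters spaced evenly on the mel scale
    mels = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):          # rising slope
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

def mel_spectrogram(wav, sr=16000, n_fft=1024, hop=256, n_mels=80):
    # Frame + window -> power spectrum -> mel projection -> log compression
    window = np.hanning(n_fft)
    n_frames = 1 + (len(wav) - n_fft) // hop
    frames = np.stack(
        [wav[i * hop : i * hop + n_fft] * window for i in range(n_frames)]
    )
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    mel = power @ mel_filterbank(sr, n_fft, n_mels).T
    return np.log(mel + 1e-6)  # shape: (n_frames, n_mels)

# Example: one second of a 440 Hz sine at 16 kHz
wav = np.sin(2 * np.pi * 440 * np.arange(16000) / 16000)
spec = mel_spectrogram(wav)  # (59, 80) with these defaults
```

The conversion model then only has to learn a spectrogram-to-spectrogram mapping, which is a much lower-dimensional problem than raw audio.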
tt-vae-gan
- Use deep fake tech to say stuff with your favorite characters
-
[Project] I've successfully applied a VAE-GAN model (initially for voice conversion) to the problem of timbre transfer between musical instruments. This showcases the generalisability of the approach, with potential for more than just one audio style transfer problem.
Link for demonstration, code, and tutorial: https://github.com/RussellSB/tt-vae-gan
-
[P] Voice Conversion VAE-Cycle-GAN on Melspectrograms
I've been working on an open source implementation for the past month. The link for it can be found here. But I have since been struggling with a lot of mode collapse: the model outputs "blurry" spectrograms that don't quite capture the structure of the input as they should.
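A standard guard against exactly this failure in Cycle-GAN-style models is the cycle-consistency term: an L1 penalty on the round trip A → B → A, which punishes generators that throw away the input's structure. As a minimal numpy sketch of the idea (the toy "generators" below are stand-ins, not the repo's actual networks):

```python
import numpy as np

def cycle_consistency_loss(x, G_ab, G_ba):
    # L1 distance between a spectrogram and its round-trip
    # reconstruction A -> B -> A; in a Cycle-GAN this term
    # discourages generators from collapsing to outputs that
    # discard the input's structure.
    x_rec = G_ba(G_ab(x))
    return np.mean(np.abs(x - x_rec))

# Toy stand-in "generators": a pair of invertible scalings.
G_ab = lambda s: 2.0 * s
G_ba = lambda s: 0.5 * s

spec = np.random.rand(80, 100)  # fake (mel_bins, frames) spectrogram
good = cycle_consistency_loss(spec, G_ab, G_ba)  # ~0: round trip is exact

# A collapsed generator that ignores its input scores badly:
collapsed = lambda s: np.zeros_like(s)
bad = cycle_consistency_loss(spec, collapsed, G_ba)  # ~mean(|spec|)
```

Watching this loss during training (and weighting it against the adversarial term) is a common diagnostic: if the adversarial loss looks healthy while the cycle loss stays high, the generator is likely collapsing.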
What are some alternatives?
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
jukebox - Code for the paper "Jukebox: A Generative Model for Music"
YourTTS - YourTTS: Towards Zero-Shot Multi-Speaker TTS and Zero-Shot Voice Conversion for everyone
PyTorch-GAN - PyTorch implementations of Generative Adversarial Networks.
espnet - End-to-End Speech Processing Toolkit
CoGAN
autovc - AutoVC: Zero-Shot Voice Style Transfer with Only Autoencoder Loss
voice_conversion