|  | ubisoft-laforge-daft-exprt | Parallel-Tacotron2 |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 114 | 184 |
| Growth | 0.0% | - |
| Activity | 0.0 | 0.0 |
| Latest commit | about 1 year ago | over 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ubisoft-laforge-daft-exprt
Using deepfake voice programs for development - Possible/practical?
Ubisoft has its Daft-Exprt project on GitHub, which does a tolerable job of prosody/tone transfer. That's pretty much a necessity for naturalizing output if you're building a cloning pipeline that doesn't rely on a service's packaged voices. Without it I wouldn't even consider an AI speech pipeline, given how narrowly constrained the tonal range is even with something like Replicant Studios' actor voices.
Using A.I voices or Sound Fonts (i.e. Undertale or Animal Crossing)
Ubisoft has a project that naturalizes speech pretty well via prosody transfer: https://github.com/ubisoft/ubisoft-laforge-daft-exprt
Anyone have experience with AI voices?
Prosody transfer (I use https://github.com/ubisoft/ubisoft-laforge-daft-exprt) uses an example speech segment to change the timing, intonation, and other properties of a different segment of speech - for example, taking evenly paced ML-generated speech and turning it into Captain Kirk iambic pentameter.
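To make the idea concrete: prosody-transfer systems like Daft-Exprt condition generation on features extracted from the reference clip, typically frame-wise pitch (F0) and energy. The sketch below is not Daft-Exprt's actual code; it is a minimal NumPy-only illustration (hypothetical helper names, a synthetic sine wave standing in for a reference recording) of extracting the kind of prosody features such a model would be conditioned on.

```python
import numpy as np

def frame_signal(signal, frame_len, hop):
    """Slice a 1-D signal into overlapping frames."""
    n = 1 + (len(signal) - frame_len) // hop
    return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n)])

def rms_energy(frames):
    """Frame-wise RMS energy: one of the prosody features transferred."""
    return np.sqrt((frames ** 2).mean(axis=1))

def autocorr_pitch(frames, sr, fmin=50.0, fmax=500.0):
    """Crude frame-wise F0 estimate via the autocorrelation peak
    within the plausible speech pitch range [fmin, fmax]."""
    lo, hi = int(sr / fmax), int(sr / fmin)
    pitches = []
    for f in frames:
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[len(f) - 1 :]
        if ac[0] <= 0:  # silent frame, no pitch
            pitches.append(0.0)
            continue
        lag = lo + np.argmax(ac[lo:hi])
        pitches.append(sr / lag)
    return np.array(pitches)

# Synthetic "reference speech": a 1-second 220 Hz tone at 16 kHz.
sr = 16000
t = np.arange(sr) / sr
reference = np.sin(2 * np.pi * 220.0 * t)

frames = frame_signal(reference, frame_len=1024, hop=256)
f0 = autocorr_pitch(frames, sr)       # per-frame pitch contour (~220 Hz here)
energy = rms_energy(frames)           # per-frame loudness contour
```

In a real pipeline, contours like `f0` and `energy` (plus phoneme durations) from the reference clip would be fed to the synthesis model so the generated speech inherits the reference's intonation and pacing rather than the model's default flat delivery.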
Parallel-Tacotron2
What are some alternatives?
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
FastSpeech2 - An implementation of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech"
hifi-gan - HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
STYLER - Official repository of STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech, INTERSPEECH 2021
how-do-vits-work - (ICLR 2022 Spotlight) Official PyTorch implementation of "How Do Vision Transformers Work?"
vits - VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
WaveRNN - WaveRNN Vocoder + TTS
EmotiVoice - EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
marytts - MARY TTS -- an open-source, multilingual text-to-speech synthesis system written in pure java
TensorFlowTTS - 😝 TensorFlowTTS: Real-Time State-of-the-art Speech Synthesis for TensorFlow 2 (supports English, French, Korean, Chinese, and German, and is easy to adapt to other languages)