ubisoft-laforge-daft-exprt vs STYLER

| | ubisoft-laforge-daft-exprt | STYLER |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 114 | 150 |
| Growth | 0.0% | - |
| Activity | 0.0 | 1.8 |
| Latest commit | about 1 year ago | over 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
ubisoft-laforge-daft-exprt
- Using deepfake voice programs for development - possible/practical?
  Ubisoft has their Daft-Exprt project on GitHub that does a tolerable job of prosody/tone transfer, which is pretty much necessary for naturalizing output if you're building a cloning pipeline that isn't using a service's packaged voices. Without this I wouldn't even consider an AI speech pipeline, given how limited the controllable range of tone is even with something like Replicant Studios' actor voices.
- Using A.I. voices or Sound Fonts (i.e. Undertale or Animal Crossing)
  Ubisoft has a project that naturalizes speech pretty well via prosody transfer: https://github.com/ubisoft/ubisoft-laforge-daft-exprt
- Anyone have experience with AI voices?
  Prosody transfer (I use https://github.com/ubisoft/ubisoft-laforge-daft-exprt) uses an example speech segment to change the timing, intonation, and other properties of a different segment of speech - for example, taking evenly paced ML-generated speech and turning it into Captain Kirk iambic pentameter.
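The prosody-transfer idea described above can be sketched for a single factor, loudness, with nothing but NumPy: measure a reference clip's per-frame energy envelope and impose it on a flat synthetic tone. This is a toy illustration of the concept, not Daft-Exprt's actual API (which conditions a neural TTS model on learned pitch, energy, and duration features); the function names, frame size, and signals here are all made up for the example.

```python
import numpy as np

def frame_energy(x, frame=512):
    """RMS energy of each non-overlapping frame of a 1-D signal."""
    n = len(x) // frame
    return np.array([np.sqrt(np.mean(x[i * frame:(i + 1) * frame] ** 2))
                     for i in range(n)])

def transfer_energy(target, reference, frame=512, eps=1e-8):
    """Rescale each frame of `target` so its loudness contour follows
    `reference`. Energy is only one prosody factor - a real system also
    transfers pitch and timing, and smooths across frame boundaries
    (the hard per-frame gain steps here would click on real audio)."""
    ref_e = frame_energy(reference, frame)
    tgt_e = frame_energy(target, frame)
    n = min(len(ref_e), len(tgt_e))
    out = target.astype(float).copy()
    for i in range(n):
        out[i * frame:(i + 1) * frame] *= ref_e[i] / (tgt_e[i] + eps)
    return out

# A flat 220 Hz tone takes on the rising loudness contour of a
# quiet-to-loud 110 Hz "reference performance".
sr = 16000
t = np.linspace(0, 1, sr, endpoint=False)
target = 0.5 * np.sin(2 * np.pi * 220 * t)                            # constant loudness
reference = np.linspace(0.05, 0.9, sr) * np.sin(2 * np.pi * 110 * t)  # crescendo
shaped = transfer_energy(target, reference)
```

The same frame-and-rescale pattern generalizes to other low-dimensional prosody attributes (F0, duration), which is roughly what the model-based approaches below predict and control end to end.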
STYLER
- [D] What is the best open source text to speech model?
- STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  demo: https://keonlee9420.github.io/STYLER-Demo/
- [R] STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  code: https://github.com/keonlee9420/STYLER
What are some alternatives?
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
waveglow - A Flow-based Generative Network for Speech Synthesis
hifi-gan - HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
radtts - Provides training, inference, and voice conversion recipes for RADTTS and RADTTS++: Flow-based TTS models with Robust Alignment Learning, Diverse Synthesis, and Generative Modeling and Fine-Grained Control over Low-Dimensional (F0 and Energy) Speech Attributes.
vits - VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
flowtron - Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
Parallel-Tacotron2 - PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
Speech-Backbones - This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
EmotiVoice - EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
tacotron - A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
DiffSinger - PyTorch implementation of DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism (focused on DiffSpeech)
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference