| | STYLER | ubisoft-laforge-daft-exprt |
|---|---|---|
| Mentions | 3 | 3 |
| Stars | 150 | 114 |
| Growth | - | 0.0% |
| Activity | 1.8 | 0.0 |
| Last commit | over 2 years ago | about 1 year ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
STYLER
- [D] What is the best open source text to speech model?
- STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  demo: https://keonlee9420.github.io/STYLER-Demo/
- [R] STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  code: https://github.com/keonlee9420/STYLER
ubisoft-laforge-daft-exprt
- Using deepfake voice programs for development - possible/practical?
  Ubisoft has their Daft-Exprt project on GitHub that does a tolerable job of prosody/tone transfer, which is pretty much necessary for naturalizing output if you're building a cloning pipeline that isn't using a service's packaged voices. Without it I wouldn't even consider an AI speech pipeline, given how poorly constrained the range of tone is even with something like Replicant Studios' actor voices.
- Using A.I voices or Sound Fonts (i.e. Undertale or Animal Crossing)
  Ubisoft has some stuff that naturalizes speech pretty well via prosody transfer: https://github.com/ubisoft/ubisoft-laforge-daft-exprt
- Anyone have experience with AI voices?
  Prosody transfer (I use https://github.com/ubisoft/ubisoft-laforge-daft-exprt) uses an example speech segment to change the timing, intonation, and other properties of a different segment of speech; for example, taking evenly paced ML-generated speech and turning it into Captain Kirk iambic pentameter.
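To make the idea behind prosody transfer concrete, here is a minimal, library-agnostic sketch in plain numpy. It is not Daft-Exprt's actual API (the function name and shape are hypothetical); it only illustrates the core trick the comments above describe: stretch a target pitch contour to the reference's timing and rescale it to the reference's intonation range.

```python
import numpy as np

def transfer_prosody(target_f0, reference_f0):
    """Impose the reference's timing and pitch statistics on the target.

    target_f0, reference_f0: 1-D sequences of per-frame F0 values (Hz).
    Returns a contour with the reference's length, mean, and variance,
    but the target's relative pitch movement.
    """
    target_f0 = np.asarray(target_f0, dtype=float)
    reference_f0 = np.asarray(reference_f0, dtype=float)

    # Timing transfer: stretch the target contour to the reference's duration.
    x_old = np.linspace(0.0, 1.0, len(target_f0))
    x_new = np.linspace(0.0, 1.0, len(reference_f0))
    stretched = np.interp(x_new, x_old, target_f0)

    # Intonation transfer: normalize the stretched contour, then rescale it
    # to the reference's mean pitch and pitch range.
    std = stretched.std() or 1.0  # guard against a flat contour
    normalized = (stretched - stretched.mean()) / std
    return normalized * reference_f0.std() + reference_f0.mean()
```

Real systems like Daft-Exprt learn this mapping with neural encoders over duration, pitch, and energy jointly rather than matching raw statistics, but the input/output contract is the same: one segment supplies the prosody, another supplies the content.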
What are some alternatives?
waveglow - A Flow-based Generative Network for Speech Synthesis
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
radtts - Provides training, inference and voice conversion recipes for RADTTS and RADTTS++: Flow-based TTS models with Robust Alignment Learning, Diverse Synthesis, and Generative Modeling and Fine-Grained Control over Low-Dimensional (F0 and Energy) Speech Attributes.
hifi-gan - HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
flowtron - Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
vits - VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
Speech-Backbones - This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
Parallel-Tacotron2 - PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
tacotron - A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
EmotiVoice - EmotiVoice 😊: a Multi-Voice and Prompt-Controlled TTS Engine
DiffSinger - PyTorch implementation of DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism (focused on DiffSpeech)
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference