STYLER vs DiffSinger

| | STYLER | DiffSinger |
|---|---|---|
| Mentions | 3 | 1 |
| Stars | 150 | 223 |
| Growth | - | - |
| Activity | 1.8 | 10.0 |
| Last commit | over 2 years ago | about 2 years ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
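The exact activity formula is not published here, but the description above ("recent commits have higher weight than older ones") is the shape of an exponentially decaying recency weight. A minimal sketch of such a score, with an assumed 30-day half-life (the real site's decay constant is unknown):

```python
from datetime import datetime, timedelta

def activity_score(commit_dates, now, half_life_days=30.0):
    """Recency-weighted commit count.

    Each commit contributes 0.5 ** (age_in_days / half_life_days),
    so a commit made today counts ~1.0, one made a half-life ago
    counts 0.5, and very old commits contribute almost nothing.
    Note: the half-life and the formula itself are illustrative
    assumptions, not the comparison site's actual metric.
    """
    score = 0.0
    for d in commit_dates:
        age_days = (now - d).total_seconds() / 86400.0
        score += 0.5 ** (age_days / half_life_days)
    return score

# Example: three commits at 1, 40, and 400 days before `now`.
now = datetime(2024, 1, 1)
commits = [now - timedelta(days=n) for n in (1, 40, 400)]
print(round(activity_score(commits, now), 2))
```

Under this weighting, a project with a burst of recent commits scores far higher than one with the same total commit count spread over years, which matches the relative-activity behavior described above.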
STYLER

- [D] What is the best open source text to speech model?
  STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  demo: https://keonlee9420.github.io/STYLER-Demo/
- [R] STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  code: https://github.com/keonlee9420/STYLER
DiffSinger

- [D] What is the best open source text to speech model?
  DiffTTS (DiffSinger)
  submitted: Apr 3, 2021
  paper: https://arxiv.org/pdf/2104.01409v1.pdf
  github: https://github.com/keonlee9420/DiffSinger
What are some alternatives?
waveglow - A Flow-based Generative Network for Speech Synthesis
radtts - Provides training, inference, and voice conversion recipes for RADTTS and RADTTS++: Flow-based TTS models with Robust Alignment Learning, Diverse Synthesis, and Generative Modeling with Fine-Grained Control over Low-Dimensional (F0 and Energy) Speech Attributes.
tacotron - A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
flowtron - Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
Speech-Backbones - This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
vits - VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech