| | waveglow | radtts |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 2,222 | 270 |
| Growth | 0.5% | 0.0% |
| Activity | 0.0 | 0.0 |
| Latest commit | 7 months ago | about 1 year ago |
| Language | Python | Roff |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
- Stars - the number of stars that a project has on GitHub.
- Growth - month-over-month growth in stars.
- Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
waveglow
- [D] What is the best open source text to speech model?
  "I tried tacotron2 + waveglow, and it was quite easy to get very good results. The hardest part is collecting clean data."
- XQC falls for dono thinking it's Adept
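WaveGlow (like Flowtron and RadTTS) is a normalizing-flow model: it stays exactly invertible because each step is an affine coupling layer that rescales and shifts one half of the channels using values computed from the other half. A minimal NumPy sketch of that idea, with toy stand-in conditioning functions rather than NVIDIA's actual WN networks:

```python
import numpy as np

def affine_coupling_forward(x, scale_net, shift_net):
    # Split the signal in half; transform one half conditioned on the other.
    xa, xb = np.split(x, 2)
    log_s = scale_net(xa)
    t = shift_net(xa)
    yb = xb * np.exp(log_s) + t
    return np.concatenate([xa, yb])

def affine_coupling_inverse(y, scale_net, shift_net):
    # The untouched half lets us recompute log_s and t, then undo the transform.
    ya, yb = np.split(y, 2)
    log_s = scale_net(ya)
    t = shift_net(ya)
    xb = (yb - t) * np.exp(-log_s)
    return np.concatenate([ya, xb])

# Hypothetical toy conditioning nets (WaveGlow uses learned WaveNet-style blocks).
scale_net = lambda h: np.tanh(h)   # bounded log-scale keeps exp() stable
shift_net = lambda h: 0.5 * h

rng = np.random.default_rng(0)
x = rng.standard_normal(8)
y = affine_coupling_forward(x, scale_net, shift_net)
x_rec = affine_coupling_inverse(y, scale_net, shift_net)
print(np.allclose(x, x_rec))  # prints True: exact inverse up to float error
```

Because every layer inverts exactly, the trained model can map noise back to audio samples in a single parallel pass, which is what makes these vocoders fast at inference time.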
radtts
- [D] What is the best open source text to speech model?
  RadTTS submitted: Aug 18, 2021 (NVIDIA page, not arXiv); paper: https://openreview.net/pdf?id=0NQwnnwAORi; GitHub: https://github.com/NVIDIA/radtts
What are some alternatives?
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
tacotron - A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
flowtron - Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
STYLER - Official repository of STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech, INTERSPEECH 2021
hifi-gan - HiFi-GAN: Generative Adversarial Networks for Efficient and High Fidelity Speech Synthesis
Speech-Backbones - This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
FastSpeech2 - An implementation of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech"
DiffSinger - PyTorch implementation of DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism (focused on DiffSpeech)