| | STYLER | flowtron |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 150 | 881 |
| Growth | - | 0.3% |
| Activity | 1.8 | 0.0 |
| Latest commit | over 2 years ago | 10 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
STYLER
- [D] What is the best open source text to speech model?
- STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  demo: https://keonlee9420.github.io/STYLER-Demo/
- [R] STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech
  code: https://github.com/keonlee9420/STYLER
flowtron
- [D] What is the best open source text to speech model?
- A thought: we need language and voice synthesis models as free as Stable Diffusion
- Ask HN: Best FOSS software to read text aloud
  If you want free (as in open source) software, the NVIDIA research GitHub also has some good tools. For example: https://github.com/NVIDIA/flowtron
- Visas Marr on the tragedy of Darth Plagueis
  The voice in this video was synthesized using a Flowtron model trained on Visas' speech patterns. (https://github.com/NVIDIA/flowtron)
- Bastila Shan reads the Sith and Jedi Codes
  The voice lines in this video were created using a Flowtron Text-to-Speech (TTS) model trained on Bastila's voice patterns to read the Sith and Jedi Codes. For more information: https://github.com/NVIDIA/flowtron
  I created a small tutorial on how to use it on Google Colab: https://www.youtube.com/watch?v=1Bmg1c5U5Bg
- I created a Text-to-Speech model based on Bastila's voice patterns.
  For more information on Flowtron: https://github.com/NVIDIA/flowtron/
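For a rough sense of what the workflow in these posts looks like, the Flowtron repository ships an `inference.py` script driven by command-line flags. The sketch below follows the pattern shown in the repo's README; the checkpoint filenames, text, and speaker id are illustrative placeholders, so check the README for the actual pretrained model links and flag details:

```shell
# Clone the repo with submodules (it vendors supporting Tacotron2/WaveGlow code)
git clone --recursive https://github.com/NVIDIA/flowtron.git
cd flowtron

# Synthesize speech from text. The model paths are placeholders:
# download a pretrained Flowtron checkpoint and a WaveGlow vocoder first.
python inference.py \
    -c config.json \
    -f models/flowtron_ljs.pt \
    -w models/waveglow_256channels_universal_v5.pt \
    -t "It is well known that deep generative models have a rich latent space." \
    -i 0
```

Fine-tuning on a single character's voice, as in the videos above, follows the same shape: prepare an audio/transcript dataset, point the config at it, and run the repo's training script before inference.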
What are some alternatives?
waveglow - A Flow-based Generative Network for Speech Synthesis
TensorFlowTTS - Real-Time State-of-the-art Speech Synthesis for TensorFlow 2 (supports English, French, Korean, Chinese, and German, and is easy to adapt to other languages)
radtts - Provides training, inference and voice conversion recipes for RADTTS and RADTTS++: Flow-based TTS models with Robust Alignment Learning, Diverse Synthesis, and Generative Modeling and Fine-Grained Control over Low-Dimensional (F0 and Energy) Speech Attributes.
tacotron - A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
Speech-Backbones - This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
espnet - End-to-End Speech Processing Toolkit
WaveRNN - WaveRNN Vocoder + TTS
DiffSinger - PyTorch implementation of DiffSinger: Singing Voice Synthesis via Shallow Diffusion Mechanism (focused on DiffSpeech)
espeak-ng - eSpeak NG is an open source speech synthesizer that supports more than a hundred languages and accents.
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)