| | FastSpeech2 | flowtron |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 1,622 | 881 |
| Growth | - | 0.3% |
| Activity | 0.0 | 0.0 |
| Last commit | 6 months ago | 10 months ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
FastSpeech2
- [D] What is the best open source text to speech model?
  FastSpeech2, submitted Jun 8, 2020. Paper: https://arxiv.org/pdf/2006.04558.pdf GitHub: https://github.com/ming024/FastSpeech2 (not the official implementation, but the one cited most often)
- What voice-changing apps are available right now?
  We have the TorToiSe repo, the SV2TTS repo, and from there the other models such as Tacotron 2, FastSpeech 2, and so on. There is a lot that goes into training a baseline for these models on the LJSpeech and LibriTTS datasets; fine-tuning is left up to the user.
  I'm looking for something self-hosted, preferably Linux-based (though Windows or Mac will work too), that will allow me to train a "voice model" on pre-recorded speech and then replicate the voice from text of my choice.
- Voice-cloning library for conlangs?
  As for synthesizing text in your own voice: you can dig into Real-Time Voice Cloning or maybe FastSpeech2, but I am not sure whether they work with conlangs (and, given the nature of ML, you need a great deal of training data to get anything interesting).
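For readers who want to try the ming024 FastSpeech2 repository mentioned above, synthesis is driven from the command line. The sketch below follows the repository's README; the checkpoint step and config paths are assumptions that may differ between versions of the repo:

```shell
# Clone the repository and install its dependencies
git clone https://github.com/ming024/FastSpeech2
cd FastSpeech2
pip install -r requirements.txt

# Synthesize a single utterance with a pretrained LJSpeech checkpoint.
# The --restore_step value and config paths below are assumptions taken
# from the README; adjust them to match the checkpoint you downloaded.
python3 synthesize.py \
  --text "Hello world, this is FastSpeech 2." \
  --restore_step 900000 \
  --mode single \
  -p config/LJSpeech/preprocess.yaml \
  -m config/LJSpeech/model.yaml \
  -t config/LJSpeech/train.yaml
```

The generated waveform is written to the repo's output directory; training a new voice follows the same config layout but requires a preprocessed dataset first.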
flowtron
- [D] What is the best open source text to speech model?
- A thought: we need language and voice synthesis models as free as Stable Diffusion
- Ask HN: Best FOSS software to read text aloud
  If you want free (as in open source) software, the NVIDIA research GitHub also has some good tools, for example: https://github.com/NVIDIA/flowtron
- Visas Marr on the tragedy of Darth Plagueis
  The voice in this video was synthesized using a Flowtron model trained on Visas' speech patterns (https://github.com/NVIDIA/flowtron).
- Bastila Shan reads the Sith and Jedi Codes
  The voice lines in this video were created using a Flowtron text-to-speech (TTS) model trained on Bastila's voice patterns to read the Sith and Jedi Codes. For more information: https://github.com/NVIDIA/flowtron I created a small tutorial on how to use it on Google Colab: https://www.youtube.com/watch?v=1Bmg1c5U5Bg
- I created a Text-to-Speech model based on Bastila's voice patterns.
  For more information on Flowtron: https://github.com/NVIDIA/flowtron/
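For anyone wanting to reproduce voice experiments like those above, Flowtron inference is also a command-line workflow. This is a sketch based on the repository's README; the checkpoint filenames and speaker id are assumptions, so substitute whatever pretrained models you actually downloaded:

```shell
# Clone Flowtron; the WaveGlow vocoder it uses is a git submodule
git clone https://github.com/NVIDIA/flowtron
cd flowtron
git submodule update --init

# Run inference: -c is the config, -f the Flowtron checkpoint,
# -w the WaveGlow vocoder checkpoint, -t the text to speak, -i the speaker id.
# (The .pt filenames here are assumptions; download links are in the README.)
python inference.py -c config.json \
  -f models/flowtron_ljs.pt \
  -w models/waveglow_256channels_v4.pt \
  -t "It is well known that deep generative models can synthesize speech." \
  -i 0
```

Training on a new voice (as in the fan videos above) uses the same config file with a custom filelist of audio/transcript pairs; see the repository for details.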
What are some alternatives?
Parallel-Tacotron2 - PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
TensorFlowTTS - Real-time state-of-the-art speech synthesis for TensorFlow 2 (supports English, French, Korean, Chinese, and German, and is easy to adapt to other languages)
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
tacotron - A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
espnet - End-to-End Speech Processing Toolkit
voice100 - Voice100 includes neural TTS/ASR models. Inference of Voice100 is low cost as its models are tiny and only depend on CNN without autoregression.
WaveRNN - WaveRNN Vocoder + TTS
espeak-ng - eSpeak NG is an open source speech synthesizer that supports more than a hundred languages and accents.
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)