DiffSinger vs FastSpeech2

| | DiffSinger | FastSpeech2 |
|---|---|---|
| Mentions | 1 | 4 |
| Stars | 223 | 1,631 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Latest commit | over 2 years ago | 7 months ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
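The exact weighting behind the activity number is not published on this page; a minimal sketch of the idea that "recent commits have higher weight than older ones", assuming (purely for illustration) an exponential decay over commit age with an arbitrary 90-day half-life:

```python
import math

def activity_score(commit_ages_days, half_life_days=90):
    """Sum exponentially decayed weights over commits: a commit made
    today contributes 1.0, a commit one half-life ago contributes 0.5,
    and older commits contribute progressively less."""
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * age) for age in commit_ages_days)

# A repo with three recent commits outscores one with three old commits.
recent = activity_score([1, 5, 10])
stale = activity_score([300, 400, 500])
```

Under this sketch, a project that stopped committing years ago (like the DiffSinger repo above) would see its score decay toward zero regardless of how many commits it accumulated historically.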
DiffSinger

- [D] What is the best open source text to speech model?
  DiffTTS (DiffSinger). Submitted: Apr 3, 2021. Paper: https://arxiv.org/pdf/2104.01409v1.pdf. GitHub: https://github.com/keonlee9420/DiffSinger
FastSpeech2

- [D] What is the best open source text to speech model?
  FastSpeech2. Submitted: Jun 8, 2020. Paper: https://arxiv.org/pdf/2006.04558.pdf. GitHub: https://github.com/ming024/FastSpeech2 (not the official implementation, but the one cited the most)
- What voice-changing apps are available right now?
  We have the TorToiSe repo, the SV2TTS repo, and from there you have the other models like Tacotron 2, FastSpeech 2, and such. There is a lot that goes into training a baseline for these models on the LJSpeech and LibriTTS datasets; fine-tuning is left up to the user.
  "I'm looking for something self-hosted, preferably Linux-based (though Win or Mac will work too), that will allow me to train a 'voice model' with pre-recorded speech, and then replicate it from text of my choice."
- Voice-cloning library for conlangs?
  As for synthesizing text in your own voice: you can dig into Real-Time Voice Cloning or maybe FastSpeech2, but I am not sure whether they can be used with conlangs (and, given the nature of ML, you need a lot of training data to get anything interesting).
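The LJSpeech dataset mentioned above ships its transcripts as a pipe-separated `metadata.csv` with three fields per line (utterance id, raw transcript, normalized transcript). A minimal sketch of loading it into (audio path, text) pairs as a first step toward training a baseline, assuming the standard `LJSpeech-1.1` archive layout with a `wavs/` folder:

```python
import csv
from pathlib import Path

def load_ljspeech(root):
    """Yield (wav_path, normalized_text) pairs from an LJSpeech folder.

    Each metadata.csv line is pipe-separated:
    utterance id | raw transcript | normalized transcript.
    QUOTE_NONE is needed because transcripts contain literal quote marks.
    """
    root = Path(root)
    with open(root / "metadata.csv", encoding="utf-8") as f:
        reader = csv.reader(f, delimiter="|", quoting=csv.QUOTE_NONE)
        for utt_id, _raw, normalized in reader:
            yield root / "wavs" / f"{utt_id}.wav", normalized
```

This only covers data loading; the bulk of the baseline-training work the comment alludes to (alignment, mel-spectrogram extraction, vocoder training) is handled by each repo's own preprocessing scripts.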
What are some alternatives?
radtts - Provides training, inference and voice conversion recipes for RADTTS and RADTTS++: Flow-based TTS models with Robust Alignment Learning, Diverse Synthesis, and Generative Modeling and Fine-Grained Control over Low Dimensional (F0 and Energy) Speech Attributes.
Parallel-Tacotron2 - PyTorch Implementation of Google's Parallel Tacotron 2: A Non-Autoregressive Neural TTS Model with Differentiable Duration Modeling
tacotron - A TensorFlow implementation of Google's Tacotron speech synthesis with pre-trained model (unofficial)
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
Speech-Backbones - This is the main repository of open-sourced speech technology by Huawei Noah's Ark Lab.
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
voice100 - Voice100 includes neural TTS/ASR models. Inference of Voice100 is low cost as its models are tiny and only depend on CNN without autoregression.
STYLER - Official repository of STYLER: Style Factor Modeling with Rapidity and Robustness via Speech Decomposition for Expressive and Controllable Neural Text to Speech, INTERSPEECH 2021
flowtron - Flowtron is an auto-regressive flow-based generative network for text to speech synthesis with control over speech variation and style transfer
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality