| | tacotron2 | piper-phonemize |
|---|---|---|
| Mentions | 29 | 1 |
| Stars | 4,925 | 56 |
| Growth | 0.7% | - |
| Activity | 0.0 | 7.7 |
| Latest commit | 5 months ago | 3 months ago |
| Language | Jupyter Notebook | C++ |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
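The exact formula behind the activity number isn't published, but the description above (recent commits weigh more than older ones) suggests a recency-weighted score. The Python sketch below is purely illustrative of one such scheme, an exponentially decaying sum over commit ages; the 30-day half-life is an assumed constant, not the site's real parameter.

```python
from datetime import datetime, timezone

# Illustrative only: one plausible recency-weighted "activity" score.
# The 30-day half-life is an assumption, not the site's actual parameter.
HALF_LIFE_DAYS = 30.0

def activity_score(commit_dates: list[datetime], now: datetime | None = None) -> float:
    """Sum a decaying weight per commit: recent commits contribute close to 1,
    old commits contribute close to 0. Expects timezone-aware datetimes."""
    now = now or datetime.now(timezone.utc)
    score = 0.0
    for when in commit_dates:
        age_days = (now - when).total_seconds() / 86_400
        score += 0.5 ** (age_days / HALF_LIFE_DAYS)
    return round(score, 1)
```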
tacotron2
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
The quality also depends on the type of model; I'm not really sure what ESpeak-ng actually uses. The classical TTS approaches often use some statistical model (e.g. HMM) + some vocoder. You can get to intelligible speech pretty easily, but the quality is bad (w.r.t. how natural it sounds).
There are better open source TTS models. E.g. check https://github.com/neonbjb/tortoise-tts or https://github.com/NVIDIA/tacotron2. Or here for more: https://www.reddit.com/r/MachineLearning/comments/12kjof5/d_...
- [D] What is the best open source text to speech model?
-
[D] The model used in the AI generated Jay-z vocals
Which might use https://github.com/NVIDIA/tacotron2 in their backend
-
Can anyone recommend any free voice cloning software/websites, even if it provides limited word options
One thing is uberduck.ai, but I think it's freemium (it's free but some features are premium). There's also Tacotron 2 and its PyTorch page. There's a lot of other software on the sub, but Tacotron gave this and this and this.
-
Sauron be spitting bars
Maybe we can use AI to hear this rapped by a famous rapper?
-
Kerfuś
Sadly, GothicBot, the TTS I knew, doesn't exist anymore, but here is an alternative. It works in Polish from what I heard.
-
How far are we from being able to clone a singers voice?
From what I’ve seen, NVIDIA’s Tacotron2 can already be used to create some pretty convincing singing.
-
Is it possible to make compelling synthesized speech with fairly low-quality recordings?
You might want to try something like Tacotron 2 by Nvidia to experiment with your current data.
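For anyone who wants to try that suggestion, the sketch below shows roughly how the pretrained Tacotron 2 and WaveGlow checkpoints that NVIDIA publishes on PyTorch Hub can be run on a sentence. The entry-point names (nvidia_tacotron2, nvidia_waveglow, nvidia_tts_utils) and helper calls are written from memory of that Hub page and may have changed; it assumes a CUDA machine and is an outline rather than a tested recipe.

```python
import torch

# Sketch of text -> mel -> waveform inference with the PyTorch Hub checkpoints.
# Entry points and signatures are recalled from the Hub page and may differ.
HUB_REPO = "NVIDIA/DeepLearningExamples:torchhub"

tacotron2 = torch.hub.load(HUB_REPO, "nvidia_tacotron2").to("cuda").eval()
waveglow = torch.hub.load(HUB_REPO, "nvidia_waveglow")
waveglow = waveglow.remove_weightnorm(waveglow).to("cuda").eval()
utils = torch.hub.load(HUB_REPO, "nvidia_tts_utils")

text = "Hello world, this is a Tacotron 2 smoke test."
sequences, lengths = utils.prepare_input_sequence([text])

with torch.no_grad():
    mel, _, _ = tacotron2.infer(sequences, lengths)  # characters -> mel spectrogram
    audio = waveglow.infer(mel)                      # mel spectrogram -> 22,050 Hz audio

print(audio.shape)  # save audio[0].cpu().numpy() as a WAV at 22050 Hz to listen
```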
-
What voice-changing apps are available right now?
We have the TorToiSe repo, the SV2TTS repo, and from here you have the other models like Tacotron 2, FastSpeech 2, and such. There is a lot that goes into training a baseline for these models on the LJSpeech and LibriTTS datasets; fine-tuning is left up to the user.
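On the fine-tuning point, a common pattern is to "warm-start" from a published LJSpeech checkpoint and skip layers whose shapes won't match the new data (for example the text embedding if your symbol set differs). The helper below is a generic PyTorch sketch of that idea, not the tacotron2 repo's own API; the `ignore_layers` default is an assumption.

```python
import torch

def warm_start(model: torch.nn.Module, checkpoint_path: str,
               ignore_layers: tuple[str, ...] = ("embedding.weight",)) -> torch.nn.Module:
    """Load pretrained weights for fine-tuning, skipping layers that should be
    re-learned on the new dataset (illustrative sketch, not the repo's API)."""
    state = torch.load(checkpoint_path, map_location="cpu")
    pretrained = state.get("state_dict", state)  # some checkpoints nest the weights
    pretrained = {k: v for k, v in pretrained.items() if k not in ignore_layers}
    missing, unexpected = model.load_state_dict(pretrained, strict=False)
    print(f"warm start: {len(missing)} missing keys, {len(unexpected)} unexpected keys")
    return model
```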
-
The OG (OC)
piper-phonemize
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
Yeah, it would be nice if the financial backing behind Rhasspy/Piper led to improvements in espeak-ng too, but based on my own development-related experience with the espeak-ng code base (related elsewhere in the thread), I suspect it would be significantly easier to extract the specific required text-to-phonemes functionality or (to a certain degree) reimplement it (or use a different project as a base[3]) than to more closely/fully integrate changes with espeak-ng itself[4]. :/
It seems Piper abstracts its phonemize-related functionality behind a library[0] that currently makes use of an espeak-ng fork[1].
Unfortunately it also seems license-related issues may have an impact[2] on whether Piper continues to make use of espeak-ng.
For your specific example of handling 1984 as a year, my understanding is that espeak-ng can handle situations like that via parameters/configuration, but in my experience there can be unexpected interactions between different configuration/API options[6]. (A small phonemization sketch follows the footnotes below.)
[0] https://github.com/rhasspy/piper-phonemize
[1] https://github.com/rhasspy/espeak-ng
[2] https://github.com/rhasspy/piper-phonemize/issues/30#issueco...
[3] Previously I've made note of some potential options here: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
[4] For example, as I note here[5], there are currently at least four different ways to access espeak-ng's phoneme-related functionality--and it seems that they all differ in their output, sometimes consistently and other times dependent on configuration (e.g. audio output mode, spoken punctuation) and probably also input. :/
[5] https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
[6] For example, see my test cases for some other numeric-related configuration options here: https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
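To make the phonemization point above concrete, here is a minimal sketch that shells out to the espeak-ng CLI, which is the kind of text-to-phonemes functionality piper-phonemize wraps via the rhasspy espeak-ng fork. It only uses standard espeak-ng flags (-q, -v, --ipa, -x), but as the comment notes, the different phoneme output modes don't always agree, so treat the output as indicative.

```python
import subprocess

def espeak_phonemes(text: str, voice: str = "en-us", ipa: bool = True) -> str:
    """Return espeak-ng's phoneme rendering of `text` without playing audio."""
    flag = "--ipa" if ipa else "-x"  # IPA symbols vs. espeak-ng's own mnemonics
    result = subprocess.run(
        ["espeak-ng", "-q", "-v", voice, flag, text],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()

if __name__ == "__main__":
    # Whether "1984" is read as a year ("nineteen eighty-four") or as a plain
    # number depends on voice/configuration, per the discussion above.
    print(espeak_phonemes("In 1984 it was very cold."))
```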
What are some alternatives?
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
Voice-Cloning-App - A Python/Pytorch app for easily synthesising human voices
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
waveglow - A Flow-based Generative Network for Speech Synthesis
larynx - End to end text to speech system using gruut and onnx
RHVoice - a free and open source speech synthesizer for Russian and other languages
radtts - Provides training, inference and voice conversion recipes for RADTTS and RADTTS++: Flow-based TTS models with Robust Alignment Learning, Diverse Synthesis, and Generative Modeling and Fine-Grained Control over Low-Dimensional (F0 and Energy) Speech Attributes.
vits - VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
TTS - 🤖💬 Deep learning for Text to Speech (Discussion forum: https://discourse.mozilla.org/c/tts)
FastSpeech2 - An implementation of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech"