| | tacotron2 | notes_public |
|---|---|---|
| Mentions | 29 | 4 |
| Stars | 4,944 | - |
| Growth | 1.1% | - |
| Activity | 0.0 | - |
| Last commit | 6 months ago | - |
| Language | Jupyter Notebook | - |
| License | BSD 3-clause "New" or "Revised" License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tacotron2
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
The quality also depends on the type of model; I'm not really sure what ESpeak-ng actually uses. The classical TTS approaches often use some statistical model (e.g. an HMM) plus a vocoder. You can get to intelligible speech pretty easily, but the quality is poor with respect to how natural it sounds.
There are better open source TTS models. E.g. check https://github.com/neonbjb/tortoise-tts or https://github.com/NVIDIA/tacotron2. Or here for more: https://www.reddit.com/r/MachineLearning/comments/12kjof5/d_...
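To make the "statistical model + vocoder" pipeline mentioned above concrete, here is a toy Python sketch of the classical architecture: a lookup phonemizer, a fixed duration model, and a trivial sinusoidal "vocoder". Everything here (the lexicon, pitch table, and function names) is illustrative; it is not what espeak-ng or any real system actually does.

```python
import math

# Toy stand-in for the classical TTS pipeline: lookup phonemizer ->
# fixed-duration model -> sinusoidal "vocoder". Purely illustrative.

LEXICON = {"hello": ["HH", "AH", "L", "OW"], "world": ["W", "ER", "L", "D"]}
PITCH_HZ = {"AH": 120.0, "OW": 110.0, "ER": 115.0}  # voiced phonemes only

def phonemize(text):
    """Look each word up in a tiny hand-written lexicon."""
    return [ph for word in text.lower().split() for ph in LEXICON.get(word, [])]

def synthesize(phonemes, sr=16000, dur_s=0.08):
    """Emit a fixed-length sine burst per voiced phoneme, silence otherwise."""
    n = int(sr * dur_s)
    samples = []
    for ph in phonemes:
        f0 = PITCH_HZ.get(ph, 0.0)  # 0 Hz -> silence for unvoiced phonemes
        samples.extend(math.sin(2 * math.pi * f0 * i / sr) for i in range(n))
    return samples

wave = synthesize(phonemize("hello world"))
```

Even this crude skeleton shows why such systems reach intelligibility quickly but sound unnatural: prosody, durations, and the excitation signal are all hard-coded rather than learned.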
- [D] What is the best open source text to speech model?
-
[D] The model used in the AI generated Jay-z vocals
Which might use https://github.com/NVIDIA/tacotron2 in their backend
-
Can anyone recommend any free voice cloning software/websites, even if it provides limited word options?
One option is uberduck.ai, but I think it's freemium (it's free but some features are premium). There's also Tacotron 2 and its PyTorch page. There are many other programs mentioned on the sub, but tacotron gave this and this and this.
-
Sauron be spitting bars
Maybe we can use AI to hear this rapped by a famous rapper?
-
Kerfuś
Sadly, GothicBot, the TTS I knew, doesn't exist anymore, but here is an alternative. It works in Polish from what I heard.
-
How far are we from being able to clone a singers voice?
From what I’ve seen, NVIDIA’s Tacotron2 can already be used to create some pretty convincing singing.
-
Is it possible to make compelling synthesized speech with fairly low-quality recordings?
You might want to try something like Tacotron 2 by Nvidia to experiment with your current data.
-
What voice-changing apps are available right now?
We have the TorToiSe repo, the SV2TTS repo, and from here you have the other models like Tacotron 2, FastSpeech 2, and such. There is a lot that goes into training a baseline for these models on the LJSpeech and LibriTTS datasets. Fine-tuning is left up to the user.
- The OG (OC)
notes_public
-
TTE: Terminal Text Effects
> [...] waiting for one or more terminal emulators to get together and add some ridiculous new escape codes [...]
I'm definitely of the opinion[0] that we haven't yet reached the limits of the "terminal emulator" UX paradigm.
The past few years do seem to have seen a resurgence in terminal emulator innovation due in part to a combination of new languages, the prevalence of GPUs, and a realisation that many of the existing terminal emulators weren't interested in any innovation in certain directions.
I've particularly been interested in the possibilities provided by the Terminal Graphics Protocol (which I discuss more in the linked comment).
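For a flavour of what the Terminal Graphics Protocol involves at the byte level, here is a minimal Python sketch that builds a transmit-and-display escape sequence for raw RGB pixels. The 2x1-pixel payload and the helper name are mine; the key/value header format (`a=T`, `f=24`, `s=`, `v=`) follows the protocol's APC-based framing.

```python
import base64

def kitty_graphics_sequence(rgb, width, height):
    """Build a Terminal Graphics Protocol escape sequence that transmits and
    displays raw 24-bit RGB pixels (a=T: transmit+display, f=24: RGB)."""
    header = f"a=T,f=24,s={width},v={height}".encode("ascii")
    payload = base64.standard_b64encode(rgb)
    # APC introducer ESC _ G ... ; <base64 payload> ... ESC \ terminator
    return b"\x1b_G" + header + b";" + payload + b"\x1b\\"

# Two pixels, one red and one blue (illustrative data; real use would chunk
# payloads over 4096 bytes using the m= continuation key).
seq = kitty_graphics_sequence(bytes([255, 0, 0, 0, 0, 255]), 2, 1)
```

Writing `seq` to a supporting terminal (kitty, or WezTerm, which also implements this protocol) paints the pixels inline at the cursor position.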
A couple of years ago I switched to WezTerm[2] due to a combination of its graphics support, implementation language (Rust) and that its main developer seems to be interested in a combination of both solid support for existing standards & opportunities for innovation.
WezTerm also provides opportunities for customisation both in terms of shell integrations and of the application itself[3].
> [...] new escape codes [...]
Also, on this aspect, it may not even be necessary to create new escape codes--recently I discovered the `terminfo(5)` man page actually makes a pretty interesting read[7], in part because it lists some existing escape codes that seem like they have potential for re-use/re-implementation in the current day's more graphics-based systems.
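Those terminfo entries can be poked at programmatically, too: Python's stdlib `curses` module can query the terminfo database for a named terminal type without needing a tty. A small sketch (the capability names `smcup` and `cup` are standard terminfo ones; the terminal name is just an example):

```python
import curses

# Query the terminfo database for a named terminal type; no tty is needed.
curses.setupterm("xterm")

smcup = curses.tigetstr("smcup")  # enter the alternate screen
cup = curses.tigetstr("cup")      # parameterised cursor addressing
# tparm substitutes parameters into a parameterised capability string:
move = curses.tparm(cup, 5, 10)   # move cursor to row 5, column 10 (0-based)
```

Dumping a few capabilities this way makes it easy to see which escape codes a given terminal type already advertises before inventing new ones.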
---- footnotes ----
[0] As I mentioned in a recent comment on a thread[1] here:
"Motivated by the thought that at the current point in time perhaps the 'essence' of a 'terminal' is its linear 'chronological' presentation of input/interaction/output history rather than its use of 'text'."
[1] https://news.ycombinator.com/item?id=40475538
[2] https://wezfurlong.org/wezterm/
[3] While I'm definitely not a fan of the choice of Lua as the extension language, I have now at least hit my head against the wall[4] with it enough that I can actually get more complex custom functionality working.
[4] I've started to write up some of my Lua-related[5] notes & more general WezTerm[6] notes so hopefully it'll eventually be an easier road for others. :)
[5] https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
[6] https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
[7] As one does. :) It was a fascinating/amusing time capsule in terms(!) of mentions of weird hardware terminal quirks that at one time ("before my time") needed to be worked around; interesting escape code discoveries; and, the mention of a term I had not thought of for decades but was at one time of importance: NLQ! :D
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
Yeah, it would be nice if the financial backing behind Rhasspy/Piper led to improvements in espeak-ng too but based on my own development-related experience with the espeak-ng code base (related elsewhere in the thread) I suspect it would be significantly easier to extract the specific required text to phonemes functionality or (to a certain degree) reimplement it (or use a different project as a base[3]) than to more closely/fully integrate changes with espeak-ng itself[4]. :/
It seems Piper currently abstracts its phonemize-related functionality with a library[0] that currently makes use of a espeak-ng fork[1].
Unfortunately it also seems license-related issues may have an impact[2] on whether Piper continues to make use of espeak-ng.
For your specific example of handling 1984 as a year, my understanding is that espeak-ng can handle situations like that via parameters/configuration but in my experience there can be unexpected interactions between different configuration/API options[6].
[0] https://github.com/rhasspy/piper-phonemize
[1] https://github.com/rhasspy/espeak-ng
[2] https://github.com/rhasspy/piper-phonemize/issues/30#issueco...
[3] Previously I've made note of some potential options here: https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
[4] For example, as I note here[5] there are currently at least four different ways to access espeak-ng's phoneme-related functionality--and it seems that they all differ in their output, sometimes consistently and other times depending on configuration (e.g. audio output mode, spoken punctuation) and probably also input. :/
[5] https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
[6] For example, see my test cases for some other numeric-related configuration options here: https://gitlab.com/RancidBacon/floss-various-contribs/-/blob...
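As a standalone illustration of the "1984 as a year" normalization problem discussed above (which espeak-ng handles via configuration, per the parent comment), here is a toy Python expander that reads a four-digit year pairwise rather than as a cardinal number. This is purely illustrative and is not espeak-ng's implementation.

```python
# Toy year-to-words normalizer: "1984" -> "nineteen eighty-four",
# not "one thousand nine hundred eighty-four". Illustrative only.

ONES = ("zero one two three four five six seven eight nine ten eleven twelve "
        "thirteen fourteen fifteen sixteen seventeen eighteen nineteen").split()
TENS = ["", "", "twenty", "thirty", "forty", "fifty",
        "sixty", "seventy", "eighty", "ninety"]

def two_digits(n):
    """Spell out 0..99 in English words."""
    if n < 20:
        return ONES[n]
    tens, ones = divmod(n, 10)
    return TENS[tens] + ("-" + ONES[ones] if ones else "")

def year_to_words(year):
    """Read a four-digit year pairwise, the way English speakers say years."""
    hi, lo = divmod(year, 100)
    if lo == 0:
        return two_digits(hi) + " hundred"
    if lo < 10:
        return two_digits(hi) + " oh " + ONES[lo]
    return two_digits(hi) + " " + two_digits(lo)
```

The point of the sketch is that the "right" reading depends on context (year vs. cardinal vs. digit string), which is exactly why TTS engines expose it as configuration rather than hard-coding one rule.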
-
The Case for Nushell
I also discovered an existing discussion[1] related to this topic which includes a link[2] to a "helper to call nushell nuon/json/yaml commands from bash/fish/zsh" and a comment[3] that the current nushell dev focus is "on getting the experience inside nushell right and [we] probably won't be able to dedicate design time to get the interface of native Nu commands with an outside POSIX shell right and stable.".
[0] https://gitlab.com/RancidBacon/notes_public/-/blob/main/note...
[1] "Expose some commands to external world #6554": https://github.com/nushell/nushell/issues/6554
[2] https://github.com/cruel-intentions/devshell-files/blob/mast...
[3] https://github.com/nushell/nushell/issues/6554#issuecomment-...
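Until that native interop exists, one workaround pattern on the POSIX-shell side is to shell out to `nu` and ask it to serialise its structured table output. A hedged Python sketch (the `run_nu` helper name is mine, and the sample row is made up; it only assumes nushell's documented `to json` command, which renders a table as a JSON array of row objects):

```python
import json
import subprocess

def run_nu(pipeline):
    """Run a nushell pipeline and get structured rows back by appending
    `| to json`. Requires `nu` on PATH; this is one possible pattern,
    not an official interface."""
    result = subprocess.run(["nu", "-c", pipeline + " | to json"],
                            capture_output=True, text=True, check=True)
    return json.loads(result.stdout)

# Shape of the JSON that `to json` produces for a table -- a list of
# row objects (sample data is invented for illustration):
sample = '[{"name": "notes.md", "type": "file", "size": 1024}]'
rows = json.loads(sample)
```

This keeps bash/fish/zsh scripts decoupled from nushell's internal table format, at the cost of a JSON round-trip per call.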
What are some alternatives?
tortoise-tts - A multi-voice TTS system trained with an emphasis on quality
piper - A fast, local neural text to speech system
Voice-Cloning-App - A Python/Pytorch app for easily synthesising human voices
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
waveglow - A Flow-based Generative Network for Speech Synthesis
larynx - End to end text to speech system using gruut and onnx
radtts - Provides training, inference and voice conversion recipes for RADTTS and RADTTS++: Flow-based TTS models with Robust Alignment Learning, Diverse Synthesis, and Generative Modeling and Fine-Grained Control over Low Dimensional (F0 and Energy) Speech Attributes.
vits - VITS: Conditional Variational Autoencoder with Adversarial Learning for End-to-End Text-to-Speech
RHVoice - A free and open source speech synthesizer for Russian and other languages
FastSpeech2 - An implementation of Microsoft's "FastSpeech 2: Fast and High-Quality End-to-End Text to Speech"