opentts vs tortoise-tts

| | opentts | tortoise-tts |
|---|---|---|
| Mentions | 10 | 145 |
| Stars | 822 | 11,819 |
| Growth | - | - |
| Activity | 1.3 | 8.0 |
| Latest commit | about 1 month ago | about 22 hours ago |
| Language | Python | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
* Stars: the number of stars a project has on GitHub.
* Growth: month-over-month growth in stars.
* Activity: a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
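The idea of weighting recent commits more heavily can be illustrated as an exponentially decayed sum over commit ages. This is a hypothetical sketch of the general technique; the site's actual formula is not published here:

```python
import math

def activity_score(commit_ages_days, half_life_days=30.0):
    """Sum commit weights that decay exponentially with age.

    commit_ages_days: ages of commits in days (0 = today).
    A commit exactly one half-life old counts half as much as one
    made today. Illustrative only, not the site's actual metric.
    """
    decay = math.log(2) / half_life_days
    return sum(math.exp(-decay * age) for age in commit_ages_days)

# A project with many recent commits scores higher than one with the
# same number of much older commits.
recent = activity_score([0, 1, 2, 3])
stale = activity_score([300, 310, 320, 330])
```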
opentts
-
Is Sampling Dictionary Text To Speech Allowed?
I think using something like openTTS might be safer. Though I'm pretty sure no one will ever find out you used their online tts.
-
Home Assistant’s Year of the Voice – Chapter 2
The most exciting thing about Home Assistant's "Year of the Voice", for me, is that it is apparently enabling/supporting @synesthesiam's continued phenomenal contributions to the FLOSS off-line voice synthesis space.
The quality, variety & diversity of voices that synesthesiam's "Larynx" TTS project (https://github.com/rhasspy/larynx/) made available completely transformed the Free/Open Source Text To Speech landscape.
In addition "OpenTTS" (https://github.com/synesthesiam/opentts) provided a common API for interacting with multiple FLOSS TTS projects which showed great promise for actually enabling "standing on the shoulders of" rather than re-inventing the same basic functionality every time.
The new "Piper" TTS project mentioned in the article is the apparent successor to Larynx and, along with the accompanying LibriTTS/LibriVox-based voice models, brings to FLOSS TTS something it's never had before:
* Too many voices! :)
Seriously, the current LibriTTS voice model version has 900+ voices (of varying quality levels), how do you even navigate that many?![0]
And that's not even considering the even higher quality single speaker models based on other audio recording sources.
Offline TTS, while immensely valuable for individuals, doesn't seem to be an attractive domain for most commercial entities due to the lack of lock-in/telemetry opportunities, so I was concerned that we might end up missing out on further valuable contributions from synesthesiam's specialised skills & experience due to financial realities & the human need for food. :)
I'm glad we instead get to see what happens next.
[0] See my follow-up comment about this.
-
Free text-to-speech software (or low budget)
Yes, if you scroll down on the github page you can read the extensive README.md file on its setup.
-
Use OpenTTS for Android
I was wondering if there was a way to use a private OpenTTS server for the Android Text-To-Speech engine.
-
Ask HN: Are there any good open source Text-to-Speech tools?
If your use case allows for a web API, I've had good experience running OpenTTS[0].
It packages several models, including Coqui AI's TTS which I tend to use the most. There's a handy Docker image, too.
[0] https://github.com/synesthesiam/opentts
-
gosling: natural sounding text-to-speech in the terminal
https://github.com/synesthesiam/opentts is run through Docker, which is pretty simple, and provides a GUI in the browser. There is a good selection of voice engines and voices, and the local Web server has API endpoints. I've been using this on Linux Mint lately.
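The local web server's API can be called from any language; below is a minimal Python sketch against an OpenTTS server assumed to be running on the default port 5500, using its `/api/tts` endpoint. The voice name in the example is hypothetical; the server's `/api/voices` endpoint lists the real ones:

```python
from urllib.parse import urlencode
from urllib.request import urlopen

BASE = "http://localhost:5500"  # assumed default port for a local OpenTTS server

def tts_url(text, voice):
    """Build the /api/tts request URL for an OpenTTS server."""
    return f"{BASE}/api/tts?" + urlencode({"voice": voice, "text": text})

def speak_to_file(text, voice, path):
    """Fetch synthesized WAV audio and write it to `path`.

    Requires a running OpenTTS server; query /api/voices for valid
    voice names.
    """
    with urlopen(tts_url(text, voice)) as resp, open(path, "wb") as f:
        f.write(resp.read())

# Example (only works with a server running; the voice name is made up):
# speak_to_file("Hello from OpenTTS", "coqui-tts:en_ljspeech", "hello.wav")
```

The server itself is typically started via the project's Docker image, as described in the repo README.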
-
NaturalSpeech: End-to-End Text to Speech Synthesis with Human-Level Quality
If you've not already encountered them I'd definitely encourage you to check out these Free/Open Source projects too:
* Larynx: https://github.com/rhasspy/larynx/
* OpenTTS: https://github.com/synesthesiam/opentts
* Likely Mimic3 in the near future: https://mycroft.ai/blog/mimic-3-preview/
Larynx in particular has a focus on "faster than real-time" synthesis, while OpenTTS is an attempt to package & provide a common REST API for Free/Open Source Text To Speech systems, so the FLOSS ecosystem can build on previous work supported by short-lived business interests rather than start from scratch every time.
AIUI the developer of the first two projects now works for Mycroft AI & is involved in the development of Mimic3 which seems very promising given how much of an impact on quality his solo work has had in just the past couple of years or so.
-
Standalone apps / redistributable docker?
I haven't personally dealt with Docker much, but am trying to make use of some open source stuff that seems to require Docker to run (https://github.com/synesthesiam/opentts).
tortoise-tts
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
The quality also depends on the type of model. I'm not really sure what ESpeak-ng actually uses. The classical TTS approaches often use some statistical model (e.g. an HMM) plus a vocoder. You can get to intelligible speech pretty easily, but the quality is bad (w.r.t. how natural it sounds).
There are better open source TTS models. E.g. check https://github.com/neonbjb/tortoise-tts or https://github.com/NVIDIA/tacotron2. Or here for more: https://www.reddit.com/r/MachineLearning/comments/12kjof5/d_...
- FLaNK Stack Weekly 12 February 2024
-
OpenVoice: Versatile Instant Voice Cloning
I use Tortoise TTS. It's slow, a little clunky, and sometimes the output gets downright weird. But it's the best quality-oriented TTS I've found that I can run locally.
https://github.com/neonbjb/tortoise-tts
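Because Tortoise is slow, long inputs are commonly synthesized sentence by sentence and the audio clips concatenated afterwards. A minimal chunking helper along those lines (hypothetical, not part of the Tortoise API):

```python
import re

def split_into_chunks(text, max_chars=200):
    """Split text on sentence boundaries into chunks of at most max_chars.

    Tortoise-style models slow down and degrade on very long inputs,
    so feeding short sentence groups is a common workaround.
    """
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    chunks, current = [], ""
    for sentence in sentences:
        if current and len(current) + 1 + len(sentence) > max_chars:
            chunks.append(current)
            current = sentence
        else:
            current = f"{current} {sentence}".strip()
    if current:
        chunks.append(current)
    return chunks
```

Each chunk would then be passed to Tortoise's generation call and the resulting audio joined.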
- [discussion] text to voice generation for textbooks
- DALL-E 3: Improving image generation with better captions [pdf]
-
Open Source Libraries
neonbjb/tortoise-tts
-
Running Tortoise-TTS - IndexError: List out of range
EDIT: It appears to be the exact same issue as this
-
My Deep Learning Rig
It was primarily being used to train TTS models (see https://github.com/neonbjb/tortoise-tts), which largely fit into a single GPUs memory. So, for data parallelism, x8 PCIe isn't that much of a concern.
-
PlayHT2.0: State-of-the-Art Generative Voice AI Model for Conversational Speech
Previously TortoiseTTS was associated with PlayHT in some way, although the exact connection is a bit vague [0].
From the descriptions here it sounds a lot like AudioLM / SPEAR TTS / some of Meta's recent multilingual TTS approaches; those models are not open source, but PlayHT's approach sounds similar in spirit. The discussion of "mel tokens" is closer to what I would call the classic TTS pipeline in many ways. PlayHT has generally been kind of closed about what they use; it would be interesting to know more.
I assume the key factor here is high-quality, emotive audio with good data-cleaning processes, rather than some radically new architectural piece never before seen in the literature; there are lots of really nice tools for emotive and expressive TTS buried in recent years of publications. Probably not even a lot of data, at least on the scale of "a lot" in speech, e.g. ASR (millions of hours) or TTS (hundreds to thousands of hours).
Tacotron 2 is perfectly capable of this type of stuff as well, as shown by Dessa [1] a few years ago (this write-up is a nice intro to TTS concepts). The limit is largely that, at some point, the model hasn't heard certain phonetic sounds in a given voice and needs to do something to produce plausible outcomes for new voices.
[0] Discussion here https://github.com/neonbjb/tortoise-tts/issues/182#issuecomm...
[1] https://medium.com/dessa-news/realtalk-how-it-works-94c1afda...
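The "mel" in "mel tokens" refers to a perceptual frequency scale used throughout the classic TTS pipeline (text → mel spectrogram → vocoder). For illustration, the standard HTK-style Hz-to-mel conversion:

```python
import math

def hz_to_mel(f_hz):
    """Convert a frequency in Hz to the (HTK-style) mel scale.

    The mel scale is roughly linear below ~1 kHz and logarithmic above,
    matching human pitch perception; mel spectrograms are the usual
    intermediate representation between acoustic models and vocoders.
    """
    return 2595.0 * math.log10(1.0 + f_hz / 700.0)

def mel_to_hz(m):
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)
```

Note that 1000 Hz maps to roughly 1000 mel; the scale compresses higher frequencies, which is why mel spectrograms spend their resolution where speech carries the most perceptual information.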
-
Comparing Tortoise and Bark for Voice Synthesis
Tortoise GitHub repo - Source code, documentation, and usage guide
What are some alternatives?
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
vosk-api - Offline speech recognition API for Android, iOS, Raspberry Pi and servers with Python, Java, C# and Node
bark - 🔊 Text-Prompted Generative Audio Model
Thorsten-Voice - Thorsten-Voice: A free to use, offline working, high quality german TTS voice should be available for every project without any license struggling.
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
larynx - End to end text to speech system using gruut and onnx
piper - A fast, local neural text to speech system
coral-pi-rest-server - Perform inferencing of tensorflow-lite models on an RPi with acceleration from Coral USB stick
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.