| | DeepFilterNet | tortoise-tts |
|---|---|---|
| Mentions | 10 | 145 |
| Stars | 1,969 | 11,881 |
| Growth | - | - |
| Activity | 8.9 | 8.0 |
| Latest commit | 9 days ago | 12 days ago |
| Language | Python | Jupyter Notebook |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we are tracking.
DeepFilterNet
- Anyone know of a good TTS pipeline for raw speech data?

  You mean remove background noise and transcribe? Then you can use DeepFilterNet to remove noise, and Whisper to transcribe.
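A minimal sketch of that two-step pipeline, assuming the `deepfilternet` and `openai-whisper` Python packages are installed; the file names are placeholders:

```python
# Sketch: denoise with DeepFilterNet, then transcribe with Whisper.
# Assumes: pip install deepfilternet openai-whisper. Paths are placeholders.
import whisper
from df.enhance import enhance, init_df, load_audio, save_audio

# Load the DeepFilterNet model and its processing state.
model, df_state, _ = init_df()

# Load the noisy input at DeepFilterNet's expected sample rate (48 kHz).
audio, _ = load_audio("raw_speech.wav", sr=df_state.sr())
enhanced = enhance(model, df_state, audio)
save_audio("enhanced.wav", enhanced, df_state.sr())

# Transcribe the denoised audio with Whisper.
asr = whisper.load_model("base")
print(asr.transcribe("enhanced.wav")["text"])
```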
- Open Source Libraries

  Rikorose/DeepFilterNet: A Low Complexity Speech Enhancement Framework for Full-Band Audio (48kHz) using Deep Filtering

- DeepFilterNet: Noise suppression using deep filtering
- Linux Audio Noise suppression using deep filtering in Rust

  It looks like the Rust library is using `tract-onnx` to do the inference: https://github.com/Rikorose/DeepFilterNet/blob/2a84d2a1750a5... I wonder whether using Python for research, training in big data centers, and Rust at the edge for efficient inference will become a trend. C++ currently has the larger inference community (e.g. ggml), but Rust crates as components for building AI applications are a joy to use.
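That split hinges on exporting the trained model to ONNX so a Rust runtime such as `tract-onnx` can load it. A minimal sketch of the Python-side export, using a placeholder model and input shape (not DeepFilterNet's actual architecture):

```python
# Sketch: export a PyTorch model to ONNX for a Rust runtime like tract-onnx.
# TinyDenoiser and the frame shape are placeholders, not DeepFilterNet's model.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    """Stand-in for a trained speech-enhancement network."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(480, 480), nn.ReLU(), nn.Linear(480, 480))

    def forward(self, frame):
        return self.net(frame)

model = TinyDenoiser().eval()
dummy_frame = torch.randn(1, 480)  # one 10 ms frame at 48 kHz (placeholder)

torch.onnx.export(
    model,
    dummy_frame,
    "denoiser.onnx",           # file a Rust binary can load via tract-onnx
    input_names=["frame"],
    output_names=["enhanced"],
    opset_version=17,
)
```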
- Real-Time Noise Suppression for PipeWire written in Rust

  Repo: https://github.com/Rikorose/DeepFilterNet
tortoise-tts
- ESpeak-ng: speech synthesizer with more than one hundred languages and accents

  The quality also depends on the type of model, and I'm not really sure what ESpeak-ng actually uses. Classical TTS approaches often combine a statistical model (e.g. an HMM) with a vocoder. You can get to intelligible speech pretty easily, but the quality is poor with respect to how natural it sounds.

  There are better open source TTS models. E.g. check https://github.com/neonbjb/tortoise-tts or https://github.com/NVIDIA/tacotron2. Or here for more: https://www.reddit.com/r/MachineLearning/comments/12kjof5/d_...
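For comparison's sake, ESpeak-ng itself is easy to drive from Python via its CLI; a minimal sketch, assuming the `espeak-ng` binary is on PATH (the voice and output path are placeholders):

```python
# Sketch: synthesize speech with the espeak-ng CLI from Python.
# Assumes the espeak-ng binary is installed; voice/path are placeholders.
import subprocess

subprocess.run(
    ["espeak-ng", "-v", "en-us",          # voice/language
     "-w", "espeak_out.wav",              # write a WAV instead of playing
     "Intelligible, but not especially natural."],
    check=True,
)
```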
- FLaNK Stack Weekly 12 February 2024
- OpenVoice: Versatile Instant Voice Cloning

  I use Tortoise TTS. It's slow, a little clunky, and sometimes the output gets downright weird. But it's the best quality-oriented TTS I've found that I can run locally.

  https://github.com/neonbjb/tortoise-tts
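A minimal local-generation sketch against Tortoise's Python API, following the repo's README usage; the voice name "tom" (one of the bundled voices) and the output path are placeholders:

```python
# Sketch: local text-to-speech with Tortoise, per the repo's README usage.
# The voice name and output path are placeholders.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()

# Reference clips and cached latents for one of the bundled voices.
voice_samples, conditioning_latents = load_voice("tom")

# "fast" trades some quality for speed; "high_quality" is slower.
gen = tts.tts_with_preset(
    "Thanks for reading this far.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="fast",
)
torchaudio.save("tortoise_out.wav", gen.squeeze(0).cpu(), 24000)
```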
- [discussion] text to voice generation for textbooks
- DALL-E 3: Improving image generation with better captions [pdf]
- Open Source Libraries

  neonbjb/tortoise-tts
- Running Tortoise-TTS - IndexError: List out of range

  EDIT: It appears to be the exact same issue as this
- My Deep Learning Rig

  It was primarily being used to train TTS models (see https://github.com/neonbjb/tortoise-tts), which largely fit into a single GPU's memory. So, for data parallelism, x8 PCIe isn't much of a concern.
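The reasoning: with data parallelism each GPU holds a full model replica, so only gradient all-reduce traffic crosses the PCIe bus. A minimal sketch with PyTorch DistributedDataParallel; the model and training loop are placeholders:

```python
# Sketch: data parallelism with PyTorch DDP. Each rank holds a full model
# replica; only gradient all-reduces cross the interconnect on backward().
# Launch with: torchrun --nproc_per_node=8 train.py
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")
rank = int(os.environ["LOCAL_RANK"])
torch.cuda.set_device(rank)

# Placeholder network standing in for a TTS model that fits on one GPU.
model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 1024)).cuda(rank)
model = DDP(model, device_ids=[rank])
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

for _ in range(10):  # stand-in loop over random data
    x = torch.randn(32, 1024, device=rank)
    loss = model(x).pow(2).mean()
    optimizer.zero_grad()
    loss.backward()  # gradients are all-reduced across ranks here
    optimizer.step()

dist.destroy_process_group()
```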
- PlayHT2.0: State-of-the-Art Generative Voice AI Model for Conversational Speech

  Previously TortoiseTTS was associated with PlayHT in some way, although the exact connection is a bit vague [0].

  From the descriptions here it sounds a lot like AudioLM / SPEAR TTS / some of Meta's recent multilingual TTS approaches; those models are not open source, but PlayHT's approach sounds like it is in a similar spirit. The discussion of "mel tokens" is closer to what I would call the classic TTS pipeline in many ways. PlayHT has generally been fairly closed about what they use, so it would be interesting to know more.

  I assume the key factor here is high-quality, emotive audio with good data-cleaning processes. Probably not even a lot of data, at least at the scale "a lot" means in speech, e.g. ASR (millions of hours) or TTS (hundreds to thousands of hours), as opposed to some radically new architectural piece never before seen in the literature; there are lots of really nice tools for emotive and expressive TTS buried in recent years of publications.

  Tacotron 2 is perfectly capable of this kind of output as well, as shown by Dessa [1] a few years ago (their writeup is a nice intro to TTS concepts; a sketch of the classic mel-plus-vocoder pipeline follows the references below). The main limit is that at some point the model hasn't heard certain phonetic sounds in a given voice and needs some mechanism to produce plausible output for new voices.
[0] Discussion here https://github.com/neonbjb/tortoise-tts/issues/182#issuecomm...
[1] https://medium.com/dessa-news/realtalk-how-it-works-94c1afda...
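As a concrete picture of that classic two-stage pipeline (text → mel spectrogram → vocoder), a minimal sketch using torchaudio's pretrained Tacotron 2 bundle; the bundle choice, prompt, and output path are placeholders:

```python
# Sketch: the classic TTS pipeline with torchaudio's pretrained Tacotron 2:
# text -> mel spectrogram -> vocoder -> waveform. Paths are placeholders.
import torch
import torchaudio

bundle = torchaudio.pipelines.TACOTRON2_WAVERNN_CHAR_LJSPEECH
processor = bundle.get_text_processor()  # text -> token IDs
tacotron2 = bundle.get_tacotron2()       # tokens -> mel spectrogram
vocoder = bundle.get_vocoder()           # mel spectrogram -> waveform

with torch.inference_mode():
    tokens, lengths = processor("The classic pipeline, in two stages.")
    spec, spec_lengths, _ = tacotron2.infer(tokens, lengths)
    waveforms, _ = vocoder(spec, spec_lengths)

torchaudio.save("tacotron2_out.wav", waveforms[0:1], vocoder.sample_rate)
```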
- Comparing Tortoise and Bark for Voice Synthesis

  Tortoise GitHub repo - Source code, documentation, and usage guide
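For the Bark side of the comparison, a minimal generation sketch following the suno-ai/bark README usage; the prompt and output path are placeholders:

```python
# Sketch: text-to-audio with Bark, per the suno-ai/bark README.
# Assumes the bark package is installed; prompt/path are placeholders.
from scipy.io.wavfile import write as write_wav
from bark import SAMPLE_RATE, generate_audio, preload_models

preload_models()  # download and cache the model weights

audio_array = generate_audio("Hello, this is a Bark sample. [laughs]")
write_wav("bark_out.wav", SAMPLE_RATE, audio_array)
```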
What are some alternatives?
NoiseTorch - Real-time microphone noise suppression on Linux.
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
audio-webui - A webui for different audio related Neural Networks
bark - 🔊 Text-Prompted Generative Audio Model
noise-repellent - Lv2 suite of plugins for broadband noise reduction
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
PiDTLN - Apply machine learning model DTLN for noise suppression and acoustic echo cancellation on Raspberry Pi
piper - A fast, local neural text to speech system
wenet - Production First and Production Ready End-to-End Speech Recognition Toolkit
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
rnnoise - Recurrent neural network for audio noise reduction
larynx - End to end text to speech system using gruut and onnx