pyannote-audio vs tortoise-tts

| | pyannote-audio | tortoise-tts |
| --- | --- | --- |
| Mentions | 15 | 145 |
| Stars | 5,077 | 11,819 |
| Growth | 3.4% | - |
| Activity | 8.6 | 8.0 |
| Latest commit | 3 days ago | 1 day ago |
| Language | Jupyter Notebook | Jupyter Notebook |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
pyannote-audio
-
Open Source Libraries
pyannote/pyannote-audio
-
AI Transcribing tool for video with two voices?
Open Source. I've found this to be pretty nice; it's essentially a wrapper around some Hugging Face models: https://github.com/pyannote/pyannote-audio
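Not part of the comment, but for context, a minimal sketch of what that wrapper looks like in practice, assuming the pyannote.audio 3.x API and a Hugging Face access token with access to the gated pipeline; the pipeline name and file path below are illustrative:

```python
# Minimal speaker diarization with pyannote.audio (3.x API).
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1",  # pretrained pipeline hosted on the HF Hub
    use_auth_token="HF_TOKEN",           # placeholder: your Hugging Face token
)

diarization = pipeline("video_audio.wav")  # audio track extracted from the video

# Print who speaks when.
for turn, _, speaker in diarization.itertracks(yield_label=True):
    print(f"{turn.start:6.1f}s - {turn.end:6.1f}s  {speaker}")
```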
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
(not the creator, but I've built something similar for personal use)
This is a great library for determining which speaker is speaking at each point in an audio file (this is called speaker diarization); I imagine they used it or something like it. Works really well out of the box!
https://github.com/pyannote/pyannote-audio
-
I wanted to use OpenAI's Whisper speech-to-text on my Mac without installing stuff in the Terminal, so I made MacWhisper, a free Mac app that transcribes audio and video files and generates subtitles. Would love to hear some feedback on it!
Do you think pyannote could be implemented in the Pro version of the app to support diarization?
- I won several speaker diarization challenges with pyannote.audio
-
I made a free transcription service powered by Whisper AI
Free startup idea: Use Whisper with pyannote-audio[0]’s speaker diarization. Upload a recording, get back a multi-speaker annotated transcription.
Make a JSON API and I’ll be your first customer.
[0] https://github.com/pyannote/pyannote-audio
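Not from the thread, but roughly how such a service could stitch the two libraries together; a sketch assuming openai-whisper plus pyannote.audio 3.x, with the file name, model size, and the overlap heuristic all illustrative choices:

```python
# Sketch: multi-speaker annotated transcript by combining Whisper and pyannote.audio.
import whisper
from pyannote.audio import Pipeline

AUDIO = "recording.wav"  # hypothetical uploaded file

# 1) Transcribe with Whisper (returns timestamped segments).
asr = whisper.load_model("base")
segments = asr.transcribe(AUDIO)["segments"]

# 2) Diarize with pyannote (who speaks when).
diarizer = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="HF_TOKEN"
)
turns = list(diarizer(AUDIO).itertracks(yield_label=True))

def speaker_for(start, end):
    """Pick the speaker whose diarization turn overlaps this segment the most."""
    best, best_overlap = "UNKNOWN", 0.0
    for turn, _, speaker in turns:
        overlap = min(end, turn.end) - max(start, turn.start)
        if overlap > best_overlap:
            best, best_overlap = speaker, overlap
    return best

# 3) Merge: attach a speaker label to every transcript segment.
for seg in segments:
    print(f"[{speaker_for(seg['start'], seg['end'])}] {seg['text'].strip()}")
```

Returning that merged list as JSON would give the "multi-speaker annotated transcription" API the commenter is asking for.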
-
Can Whisper differentiate between different voices?
Whisper can’t, but pyannote-audio can. I’ve seen a couple of prototypes out there which link the two together.
-
[D] Is there a way to distinguish different human voices from 1 audio file ?
You can use the pyannote Python library. It will identify the different speakers in an audio file, and from its output you can cut the audio into small per-speaker clips.
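A hedged sketch of that workflow, assuming pyannote.audio 3.x for the diarization and pydub for the actual cutting (the comment names neither the specific pipeline nor pydub):

```python
# Sketch: cut an audio file into per-speaker clips from pyannote's diarization output.
from pyannote.audio import Pipeline
from pydub import AudioSegment  # used here only to slice and export audio

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="HF_TOKEN"
)
diarization = pipeline("meeting.wav")
audio = AudioSegment.from_file("meeting.wav")

for i, (turn, _, speaker) in enumerate(diarization.itertracks(yield_label=True)):
    clip = audio[int(turn.start * 1000):int(turn.end * 1000)]  # pydub slices in milliseconds
    clip.export(f"{speaker}_{i:03d}.wav", format="wav")
```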
- Post-Game Analysis: Destiny & Alex VS Andrew & Zen Shapiro
-
A quick and dirty tool for automatically analyzing speaking time in online debates (Effortpost)
This Colab notebook is basically a standard template (with small changes) provided by pyannote-audio, the library implementing the speaker diarization functionality we need. (template)
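The notebook itself isn't reproduced here; the core speaking-time computation on top of pyannote's diarization output is only a few lines, sketched below (not the author's exact code, and the pipeline name and file path are assumptions):

```python
# Sketch: total speaking time per debater, computed from pyannote's diarization output.
from collections import defaultdict
from pyannote.audio import Pipeline

pipeline = Pipeline.from_pretrained(
    "pyannote/speaker-diarization-3.1", use_auth_token="HF_TOKEN"
)
diarization = pipeline("debate.wav")

speaking_time = defaultdict(float)
for turn, _, speaker in diarization.itertracks(yield_label=True):
    speaking_time[speaker] += turn.end - turn.start  # seconds spoken in this turn

total = sum(speaking_time.values())
for speaker, seconds in sorted(speaking_time.items(), key=lambda kv: -kv[1]):
    print(f"{speaker}: {seconds / 60:5.1f} min ({100 * seconds / total:4.1f}%)")
```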
tortoise-tts
-
ESpeak-ng: speech synthesizer with more than one hundred languages and accents
The quality also depends on the type of model. I'm not really sure what ESpeak-ng actually uses? The classical TTS approaches often use some statistical model (e.g. HMM) + some vocoder. You can get to intelligible speech pretty easily but the quality is bad (w.r.t. how natural it sounds).
There are better open source TTS models. E.g. check https://github.com/neonbjb/tortoise-tts or https://github.com/NVIDIA/tacotron2. Or here for more: https://www.reddit.com/r/MachineLearning/comments/12kjof5/d_...
- FLaNK Stack Weekly 12 February 2024
-
OpenVoice: Versatile Instant Voice Cloning
I use Tortoise TTS. It's slow, a little clunky, and sometimes the output gets downright weird. But it's the best quality-oriented TTS I've found that I can run locally.
https://github.com/neonbjb/tortoise-tts
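For context, local usage looks roughly like the snippet below, following the usage documented in the tortoise-tts README; the bundled voice name, preset, and output path are just example choices, so treat this as a sketch rather than a guaranteed-current API:

```python
# Sketch: local speech synthesis with tortoise-tts, following the repo's documented API.
import torchaudio
from tortoise.api import TextToSpeech
from tortoise.utils.audio import load_voice

tts = TextToSpeech()  # downloads model weights on first use; slow without a GPU

# "tom" is one of the reference voices bundled with the repo;
# any folder of short clips of a speaker works the same way.
voice_samples, conditioning_latents = load_voice("tom")

gen = tts.tts_with_preset(
    "Tortoise is slow, but the output quality is hard to beat locally.",
    voice_samples=voice_samples,
    conditioning_latents=conditioning_latents,
    preset="fast",  # trades quality for speed; "standard" and "high_quality" are slower
)

torchaudio.save("tortoise_output.wav", gen.squeeze(0).cpu(), 24000)
```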
- [discussion] text to voice generation for textbooks
- DALL-E 3: Improving image generation with better captions [pdf]
-
Open Source Libraries
neonbjb/tortoise-tts
-
Running Tortoise-TTS - IndexError: List out of range
EDIT: It appears to be the exact same issue as this
-
My Deep Learning Rig
It was primarily being used to train TTS models (see https://github.com/neonbjb/tortoise-tts), which largely fit into a single GPU's memory. So, for data parallelism, x8 PCIe isn't that much of a concern.
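To make the point concrete, here is a generic PyTorch data-parallelism sketch (not the poster's training code): each GPU holds a full replica of the model, so the PCIe links mostly carry input batches and gradient syncs rather than sharded model state.

```python
# Generic sketch: data parallelism in PyTorch. The whole model must fit on one
# card; inter-GPU traffic is limited to input scatter and gradient reduction,
# which keeps narrower x8 PCIe links workable.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(1024, 2048), nn.ReLU(), nn.Linear(2048, 1024))
model = nn.DataParallel(model.cuda())  # replicates the model on every visible GPU

x = torch.randn(256, 1024).cuda()  # one large batch, split across the replicas
out = model(x)                     # forward runs in parallel, outputs gathered on GPU 0
out.sum().backward()               # gradients are reduced back onto the GPU 0 copy
```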
-
PlayHT2.0: State-of-the-Art Generative Voice AI Model for Conversational Speech
Previously TortoiseTTS was associated with PlayHT in some way, although the exact connection is a bit vague [0].
From the descriptions here it sounds a lot like AudioLM / SPEAR TTS / some of Meta's recent multilingual TTS approaches, although those models are not open source; PlayHT's approach sounds like it is in a similar spirit. The discussion of "mel tokens" is closer to what I would call the classic TTS pipeline in many ways... PlayHT has generally been kind of closed about what they use, so it would be interesting to know more.
I assume the key factor here is high-quality, emotive audio with good data-cleaning processes. Probably not even a lot of data, at least by the standards of "a lot" in speech, e.g. ASR (millions of hours) or TTS (hundreds to thousands of hours). As opposed to some radically new architectural piece never before seen in the literature; there are lots of really nice tools for emotive and expressive TTS buried in recent years of publications.
Tacotron 2 is perfectly capable of this type of stuff as well, as shown by Dessa [1] a few years ago (that writeup is a nice intro to TTS concepts). The limit is largely that, at some point, the model hasn't heard certain phonetic sounds in a given voice and needs something extra to produce plausible output for new voices.
[0] Discussion here https://github.com/neonbjb/tortoise-tts/issues/182#issuecomm...
[1] https://medium.com/dessa-news/realtalk-how-it-works-94c1afda...
-
Comparing Tortoise and Bark for Voice Synthesis
Tortoise GitHub repo - Source code, documentation, and usage guide
What are some alternatives?
NeMo - A scalable generative AI framework built for researchers and developers working on Large Language Models, Multimodal, and Speech AI (Automatic Speech Recognition and Text-to-Speech)
TTS - 🐸💬 - a deep learning toolkit for Text-to-Speech, battle-tested in research and production
speechbrain - A PyTorch-based Speech Toolkit
bark - 🔊 Text-Prompted Generative Audio Model
Resemblyzer - A python package to analyze and compare voices with deep learning
Real-Time-Voice-Cloning - Clone a voice in 5 seconds to generate arbitrary speech in real-time
Kaldi Speech Recognition Toolkit - kaldi-asr/kaldi is the official location of the Kaldi project.
piper - A fast, local neural text to speech system
inaSpeechSegmenter - CNN-based audio segmentation toolkit. Allows detection of speech, music, noise and speaker gender. Has been designed for large-scale gender equality studies based on speech time per gender.
tacotron2 - Tacotron 2 - PyTorch implementation with faster-than-realtime inference
uis-rnn - This is the library for the Unbounded Interleaved-State Recurrent Neural Network (UIS-RNN) algorithm, corresponding to the paper Fully Supervised Speaker Diarization.
larynx - End to end text to speech system using gruut and onnx