WhisperSpeech vs whisper-ctranslate2

| | WhisperSpeech | whisper-ctranslate2 |
|---|---|---|
| Mentions | 5 | 3 |
| Stars | 3,417 | 763 |
| Growth | 4.7% | 5.8% |
| Activity | 9.2 | 8.3 |
| Last commit | 7 days ago | 7 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars: the number of stars a project has on GitHub. Growth: month-over-month growth in stars.
Activity: a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
WhisperSpeech
- OpenVoice: Versatile Instant Voice Cloning
I haven't tried OpenVoice, but I did try WhisperSpeech and it will do the same thing. You can optionally pass in a file with a reference voice, and the TTS uses it.
https://github.com/collabora/whisperspeech
I found it kind of creepy hearing it in my own voice. I also tried a friend of mine who has a French Canadian accent, and strangely the output didn't have his accent.
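The reference-voice workflow described above can be sketched with WhisperSpeech's `Pipeline` class. This is a minimal sketch based on the project's README; the exact method names and the `speaker` argument should be treated as assumptions that may vary between versions, and the first run downloads model weights.

```python
# Hedged sketch of WhisperSpeech voice cloning, assuming the Pipeline API
# shown in the project README (https://github.com/collabora/whisperspeech).
from whisperspeech.pipeline import Pipeline

# Loads the default text-to-semantic and semantic-to-acoustic models
# (downloaded from the Hugging Face Hub on first use).
pipe = Pipeline()

# Plain synthesis with the default voice:
pipe.generate_to_file("default_voice.wav", "Hello from WhisperSpeech.")

# Optionally pass a speech sample as the reference voice; the TTS then
# imitates that speaker (the "creepy own-voice" effect described above).
pipe.generate_to_file(
    "cloned_voice.wav",
    "Hello, this should sound roughly like the reference speaker.",
    speaker="reference_voice.ogg",  # hypothetical local sample file
)
```

In practice the model needs a GPU to run at reasonable speed, and as the comment notes, accent transfer from the reference sample is not guaranteed.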
- Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
- WhisperSpeech for the text-to-speech - https://github.com/collabora/WhisperSpeech
and an LLM (phi-2, Mistral, etc.) in between
- WhisperFusion: Ultra-low latency conversations with an AI chatbot
Hi, I used the [WhisperSpeech](https://github.com/collabora/WhisperSpeech) model for the TTS part after I did some serious torch.compile optimizations to bring the latency down. The Whisper speech recognition and the LLM were optimized through TensorRT-LLM by Marcus and Vineet.
It's not perfect but I am still extremely proud of how it came out. :)
- WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
- StyleTTS2 – open-source Eleven Labs quality Text To Speech
I think you're talking about just using Whisper to annotate audio for a TTS pipeline, but someone from Collabora actually created a TTS model directly from Whisper embeddings: https://github.com/collabora/WhisperSpeech
whisper-ctranslate2
- Firefox slow to load YouTube? Just another front in Google's war on ad blockers
Much better, actually. Try the large-v3 model; it's great. I use it via whisper-ctranslate2, which is a faster implementation.
https://github.com/Softcatala/whisper-ctranslate2
- StyleTTS2 – open-source Eleven Labs quality Text To Speech
There are several faster ones out there. I've been using https://github.com/Softcatala/whisper-ctranslate2, which includes a nice --live_transcribe flag. It's not as good as running it on a complete file, but it's been helpful for getting the gist of foreign-language live streams.
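The two modes mentioned in these comments (whole-file transcription and live transcription) can be sketched as shell invocations. This is a hedged sketch: the flag names follow the whisper-ctranslate2 README, but verify them against `whisper-ctranslate2 --help` for your installed version.

```shell
# One-shot transcription of a complete file with the large-v3 model
# (the higher-quality mode mentioned above):
whisper-ctranslate2 interview.mp3 --model large-v3 --output_dir transcripts

# Live transcription from the microphone -- gist-quality, useful for
# following foreign-language streams in real time:
whisper-ctranslate2 --model large-v3 --live_transcribe True --language en
```

The tool mirrors the original openai-whisper CLI, so flags like --language and --task translate carry over, with CTranslate2-specific extras on top.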
- Transcribing your Interview Data
What are some alternatives?
piper - A fast, local neural text to speech system
llama.cpp - LLM inference in C/C++
WhisperFusion - WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
whisper-openai-gradio-implementation - A Gradio web UI implementation of Whisper, OpenAI's automatic speech recognition (ASR) system
monotonic_align - Monotonic Alignment Search
whisper-playground - Build real-time speech-to-text web apps using OpenAI's Whisper https://openai.com/blog/whisper/
VoiceCraft - Zero-Shot Speech Editing and Text-to-Speech in the Wild
whisper-standalone-win - Whisper & Faster-Whisper standalone executables for those who don't want to bother with Python.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
whisper-subtitles-webui - A gradio interface for making transcribed and translated subtitles for videos
emotivoice-cli - CLI wrapper around Emotivoice TTS Synthesis