|  | WhisperSpeech | llm-companion |
|---|---|---|
| Mentions | 5 | 2 |
| Stars | 3,417 | 25 |
| Growth | 4.7% | - |
| Activity | 9.2 | 6.7 |
| Latest commit | 7 days ago | 4 months ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WhisperSpeech
- OpenVoice: Versatile Instant Voice Cloning
I haven't tried OpenVoice, but I did try WhisperSpeech and it will do the same thing. You can optionally pass in a file with a reference voice, and the TTS uses it.
https://github.com/collabora/whisperspeech
I found it kind of creepy hearing it in my own voice. I also tried a friend of mine who has a French Canadian accent, and strangely the output didn't have his accent.
- Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
- WhisperSpeech for the text-to-speech - https://github.com/collabora/WhisperSpeech
and an LLM (phi-2, Mistral, etc.) in between
- WhisperFusion: Ultra-low latency conversations with an AI chatbot
Hi, I used the [WhisperSpeech](https://github.com/collabora/WhisperSpeech) model for the TTS part after I did some serious torch.compile optimizations to bring the latency down. The Whisper speech recognition and the LLM were optimized through TensorRT-LLM by Marcus and Vineet.
It's not perfect but I am still extremely proud of how it came out. :)
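The `torch.compile` optimization mentioned above can be sketched in isolation. This is a minimal illustration, not WhisperSpeech's actual code: `TinyDecoder` is a hypothetical stand-in module, and `backend="eager"` is chosen here only to keep the sketch free of a C++ toolchain dependency; the real latency wins come from the default inductor backend.

```python
import torch

# Hypothetical stand-in for a TTS decoder block; NOT the actual
# WhisperSpeech model.
class TinyDecoder(torch.nn.Module):
    def __init__(self, dim: int = 64):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim, 4 * dim),
            torch.nn.GELU(),
            torch.nn.Linear(4 * dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

model = TinyDecoder().eval()

# torch.compile traces the forward pass and JIT-compiles it.
# backend="eager" skips codegen (dependency-light for this sketch);
# drop the argument to use the default inductor backend, which is
# where the actual speedups come from.
compiled = torch.compile(model, backend="eager")

x = torch.randn(1, 64)
with torch.no_grad():
    _ = compiled(x)    # first call triggers compilation
    out = compiled(x)  # later calls reuse the compiled graph

print(out.shape)
```

The general pattern is the same for a full TTS pipeline: compile once, pay the tracing cost on the first call, then serve subsequent requests from the compiled graph.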
- WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
- StyleTTS2 – open-source Eleven Labs quality Text To Speech
I think you’re talking about just using Whisper to annotate audio for a TTS pipeline but someone from Collabora actually created a TTS model directly from Whisper embeddings https://github.com/collabora/WhisperSpeech
llm-companion
- Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
Oh this is neat! I was wondering how to get Whisper to stream-transcribe well. I have a similar project using Whisper + StyleTTS with the same goal of minimal delay: https://github.com/lxe/llm-companion
- Show HN: "Push-to-talk" + TTS web chat interface with OpenAI-like APIs
What are some alternatives?
piper - A fast, local neural text to speech system
WhisperFusion - WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
whisper-ctranslate2 - Whisper command line client compatible with original OpenAI client based on CTranslate2.
monotonic_align - Monotonic Alignment Search
VoiceCraft - Zero-Shot Speech Editing and Text-to-Speech in the Wild
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
emotivoice-cli - CLI wrapper around Emotivoice TTS Synthesis
tts-generation-webui - TTS Generation Web UI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNet, StyleTTS2, MMS)
RHVoice - a free and open source speech synthesizer for Russian and other languages