| | WhisperSpeech | emotivoice-cli |
|---|---|---|
| Mentions | 5 | 1 |
| Stars | 3,417 | 5 |
| Growth | 4.7% | - |
| Activity | 9.2 | 6.2 |
| Latest commit | 7 days ago | 4 months ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | MIT License |
- Stars: the number of stars a project has on GitHub.
- Growth: month-over-month growth in stars.
- Activity: a relative number indicating how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
WhisperSpeech
- OpenVoice: Versatile Instant Voice Cloning
I haven't tried OpenVoice, but I did try WhisperSpeech and it will do the same thing. You can optionally pass in a file with a reference voice, and the TTS uses it.
https://github.com/collabora/whisperspeech
I found it kind of creepy hearing it in my own voice. I also tried a friend of mine who has a French Canadian accent, and strangely the output didn't have his accent.
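A minimal sketch of how that reference-voice option can be driven from Python, assuming the `whisperspeech` package's `Pipeline` API; the exact method names and the `speaker` parameter should be verified against the README of your installed version:

```python
# Hedged sketch: the Pipeline API may differ between WhisperSpeech releases.
def synthesize(text, out_path, speaker_wav=None):
    """Generate speech for `text`, optionally cloning the voice in `speaker_wav`."""
    # Deferred import so the file loads without the package installed;
    # requires `pip install whisperspeech` (and a GPU for reasonable speed).
    from whisperspeech.pipeline import Pipeline

    pipe = Pipeline()
    # Passing a reference recording steers the output voice, as the
    # comment above describes; omit it to use the default speaker.
    pipe.generate_to_file(out_path, text, speaker=speaker_wav)

if __name__ == "__main__":
    # "my_voice.wav" is a hypothetical reference recording of your own voice.
    synthesize("Hello from WhisperSpeech.", "hello.wav", speaker_wav="my_voice.wav")
```

Note the model weights are downloaded on first use, so the initial call is slow.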
- Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
- WhisperSpeech for the text-to-speech - https://github.com/collabora/WhisperSpeech
and an LLM (phi-2, Mistral, etc.) in between
- WhisperFusion: Ultra-low latency conversations with an AI chatbot
Hi, I used the [WhisperSpeech](https://github.com/collabora/WhisperSpeech) model for the TTS part after I did some serious torch.compile optimizations to bring the latency down. The Whisper speech recognition and the LLM were optimized through TensorRT-LLM by Marcus and Vineet.
It's not perfect but I am still extremely proud of how it came out. :)
- WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
- StyleTTS2 – open-source Eleven Labs quality Text To Speech
I think you’re talking about just using Whisper to annotate audio for a TTS pipeline, but someone from Collabora actually created a TTS model directly from Whisper embeddings: https://github.com/collabora/WhisperSpeech
emotivoice-cli
- WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
Interested to see how it performs for Mandarin Chinese speech synthesis, especially with prosody and emotion. The highest-quality open source model I've seen so far is EmotiVoice[0], which I've made a CLI wrapper around to generate audio for flashcards.[1] For EmotiVoice, you can apparently also clone your own voice with a GPU, but I have not tested this.[2]
[0] https://github.com/netease-youdao/EmotiVoice
[1] https://github.com/siraben/emotivoice-cli
[2] https://github.com/netease-youdao/EmotiVoice/wiki/Voice-Clon...
What are some alternatives?
piper - A fast, local neural text to speech system
WhisperFusion - WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
whisper-ctranslate2 - Whisper command-line client compatible with the original OpenAI client, based on CTranslate2.
monotonic_align - Monotonic Alignment Search
VoiceCraft - Zero-Shot Speech Editing and Text-to-Speech in the Wild
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
tts-generation-webui - TTS Generation Web UI (Bark, MusicGen + AudioGen, Tortoise, RVC, Vocos, Demucs, SeamlessM4T, MAGNet, StyleTTS2, MMS)
RHVoice - A free and open source speech synthesizer for Russian and other languages