| | WhisperFusion | WhisperSpeech |
|---|---|---|
| Mentions | 3 | 5 |
| Stars | 1,390 | 3,391 |
| Growth | 3.0% | 4.0% |
| Activity | 8.7 | 9.2 |
| Latest commit | about 2 months ago | 8 days ago |
| Language | Python | Jupyter Notebook |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WhisperFusion
- FLaNK Stack 05 Feb 2024
- Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
- WhisperFusion: Ultra-low latency conversations with an AI chatbot
WhisperFusion is fully open-source - https://github.com/collabora/WhisperFusion
WhisperSpeech
- OpenVoice: Versatile Instant Voice Cloning
I haven't tried OpenVoice, but I did try WhisperSpeech and it will do the same thing. You can optionally pass in a file with a reference voice, and the TTS uses it.
https://github.com/collabora/whisperspeech
I found it kind of creepy hearing it in my own voice. I also tried it on a friend of mine who has a French Canadian accent, and strangely the output didn't have his accent.
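The reference-voice behavior described above can be sketched as follows. This is an assumption-laden sketch, not official usage: it assumes WhisperSpeech's `Pipeline` class with a `generate_to_file()` method accepting a `speaker` argument pointing at a reference-voice audio file, per the project's README at the time. Verify the names against https://github.com/collabora/WhisperSpeech before relying on them.

```python
def build_tts_request(text, out_path, speaker_wav=None):
    """Collect generation arguments; `speaker` is only passed when cloning a voice."""
    kwargs = {"fname": out_path, "text": text}
    if speaker_wav is not None:
        kwargs["speaker"] = speaker_wav  # path to a reference-voice recording
    return kwargs


def synthesize(text, out_path, speaker_wav=None):
    """Generate speech for `text`; mimic `speaker_wav`'s voice if given."""
    # Local import: the package downloads model weights on first use,
    # so the sketch stays readable without it installed.
    # Assumed API: whisperspeech.pipeline.Pipeline.generate_to_file(fname, text, speaker=...)
    from whisperspeech.pipeline import Pipeline

    pipe = Pipeline()  # loads default text-to-semantic / semantic-to-acoustic models
    req = build_tts_request(text, out_path, speaker_wav)
    pipe.generate_to_file(req.pop("fname"), req.pop("text"), **req)


if __name__ == "__main__":
    # Without speaker_wav, a default voice is used; with it, the output
    # should resemble the reference speaker (accents may not carry over,
    # as the comment above notes).
    synthesize("Hello from WhisperSpeech.", "hello.wav", speaker_wav="my_voice.wav")
```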
- Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
- WhisperSpeech for the text-to-speech - https://github.com/collabora/WhisperSpeech
and an LLM (phi-2, Mistral, etc.) in between
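The three-stage loop described above (Whisper speech recognition, an LLM in between, WhisperSpeech for text-to-speech) can be outlined as follows. This is an architectural sketch only, not WhisperFusion's actual code: every stage is a placeholder standing in for the real model.

```python
def transcribe(audio_chunk: bytes) -> str:
    # Placeholder for Whisper speech recognition
    # (WhisperFusion runs it through TensorRT-LLM).
    return audio_chunk.decode("utf-8", errors="ignore")


def generate_reply(prompt: str) -> str:
    # Placeholder for the LLM (phi-2, Mistral, etc.).
    return f"echo: {prompt}"


def speak(text: str) -> bytes:
    # Placeholder for WhisperSpeech text-to-speech.
    return text.encode("utf-8")


def conversation_turn(audio_chunk: bytes) -> bytes:
    """One user turn: speech in, synthesized reply out."""
    text = transcribe(audio_chunk)
    reply = generate_reply(text)
    return speak(reply)
```

In the real system the stages are pipelined and streamed rather than run strictly in sequence; overlapping them is what makes the "ultra-low latency" claim possible.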
- WhisperFusion: Ultra-low latency conversations with an AI chatbot
Hi, I used the [WhisperSpeech](https://github.com/collabora/WhisperSpeech) model for the TTS part after I did some serious torch.compile optimizations to bring the latency down. The Whisper speech recognition and the LLM were optimized through TensorRT-LLM by Marcus and Vineet.
It's not perfect but I am still extremely proud of how it came out. :)
- WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
- StyleTTS2 – open-source Eleven Labs quality Text To Speech
I think you're talking about just using Whisper to annotate audio for a TTS pipeline, but someone from Collabora actually created a TTS model directly from Whisper embeddings: https://github.com/collabora/WhisperSpeech
What are some alternatives?
metaflow - :rocket: Build and manage real-life ML, AI, and data science projects with ease!
piper - A fast, local neural text to speech system
FLiPStackWeekly - FLaNK AI Weekly covering Apache NiFi, Apache Flink, Apache Kafka, Apache Spark, Apache Iceberg, Apache Ozone, Apache Pulsar, and more...
whisper-ctranslate2 - Whisper command line client compatible with original OpenAI client based on CTranslate2.
stable_diffusion.openvino
monotonic_align - Monotonic Alignment Search
openvino-ai-plugins-gimp - GIMP AI plugins with OpenVINO Backend
VoiceCraft - Zero-Shot Speech Editing and Text-to-Speech in the Wild
onnxruntime - ONNX Runtime: cross-platform, high performance ML inferencing and training accelerator
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
TornadoVM - TornadoVM: A practical and efficient heterogeneous programming framework for managed languages
emotivoice-cli - CLI wrapper around Emotivoice TTS Synthesis