WhisperLive vs returnn-experiments

| | WhisperLive | returnn-experiments |
|---|---|---|
| Mentions | 4 | 2 |
| Stars | 1,253 | 152 |
| Growth | 17.0% | 1.3% |
| Activity | 9.4 | 6.4 |
| Latest commit | 7 days ago | 6 months ago |
| Language | Python | Python |
| License | MIT License | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WhisperLive
-
Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
Everything runs locally, we use:
- WhisperLive for the transcription - https://github.com/collabora/WhisperLive
-
WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
Check out WhisperLive: https://github.com/collabora/WhisperLive
If you're grappling with the slow march from cool tech demos to real-world language model apps, you might wanna check out WhisperLive. It's this rad open-source project that's all about leveraging Whisper models for slick live transcription. Think real-time, on-the-fly translated captions for those global meetups. It's a neat example of practical, user-focused tech in action. Dive into the details on their GitHub page.
-
Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
https://github.com/collabora/WhisperLive
This is another one that uses Hugging Face's implementation, but I haven't tried it since my hardware doesn't support flash-attn 2
-
Triple Threat: The Power of Transcription, Summary, and Translation
Curious to see how this works? Check out our demo page - https://col.la/transcription to generate your own transcription, summary, and translation, or use our browser extension - https://github.com/collabora/WhisperLive to get live transcriptions.
returnn-experiments
-
Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
The code is all released already. You can find it here: https://github.com/rwth-i6/returnn-experiments/tree/master/2...
This is TensorFlow-based. But I also have another PyTorch-based implementation already, also public (inside our other repo, i6_experiments). It's not so easy currently to set this up, but I'm working on a simpler pipeline in PyTorch.
We don't have the models online yet, but we can upload them later. But I'm not sure how useful they are outside of research, as they are specifically for those research tasks (Librispeech, Tedlium), and probably don't perform too well on other data.
-
Minimal PyTorch re-implementation of GPT
This works for an architecture which has been well tuned and studied before, like LSTM or Transformer.
Once you do research on the model, testing out things, it often tends to become such a kwargs monster in many frameworks.
Having everything (relevant) in one file (even in the config file itself, with hyperparameters) allows you to copy the file for every experiment and modify it in place. This avoids the kwargs mess. But then the config files are very complex, and can become messy in other ways (especially for research projects). Example: https://github.com/rwth-i6/returnn-experiments/blob/master/2...
Such an approach makes it much more flexible and does not mess with the baseline code. As you say, it's more like an evolutionary, DNA-like approach, where you then tend to do crossovers with other evolved, good-performing configs, etc.
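A minimal sketch of this "whole experiment in one file" pattern. The names (`hidden_dim`, `make_model`, the layer dicts) are illustrative, loosely modeled on RETURNN-style configs rather than taken from the linked repo; the point is that hyperparameters and model structure sit at module top level, so a new experiment is just a copy of this file with a few lines edited, instead of a new combination of keyword arguments threaded through a framework:

```python
# Hypothetical experiment config: copy this file per experiment and edit in place.

# --- hyperparameters, edited directly rather than passed as **kwargs ---
learning_rate = 1e-3
hidden_dim = 512
num_layers = 6
dropout = 0.1

def make_model():
    """Build the network as a plain dict of layer definitions.

    To test a variant (extra layer, different dropout, a new connection),
    you modify this function directly in your copy of the file; the
    baseline config stays untouched.
    """
    layers = {}
    for i in range(num_layers):
        layers[f"enc_{i}"] = {
            "class": "lstm",
            "n_out": hidden_dim,
            "dropout": dropout,
        }
    layers["output"] = {"class": "softmax", "loss": "ce"}
    return layers

model = make_model()
```

Crossing over two evolved configs then amounts to copying layer definitions or hyperparameter lines from one file into another, which is the DNA-like workflow described above.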
What are some alternatives?
cog-whisper-diarization - Cog implementation of transcribing + diarization pipeline with Whisper & Pyannote
minGPT - A minimal PyTorch re-implementation of the OpenAI GPT (Generative Pretrained Transformer) training
whisper-writer - A small dictation app using OpenAI's Whisper speech recognition model.
iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.
obs-zoom-and-follow - Dynamic zoom and mouse tracking script for OBS Studio
WhisperFusion - WhisperFusion builds upon the capabilities of WhisperLive and WhisperSpeech to provide seamless conversations with an AI.
gpt_chatbot - This chatbot lets you use your microphone to communicate with GPT-4. It uses the OpenAI text to speech to respond with a voice. It uses Pinecone to store long term information and retrieves it to create context. API keys for OpenAI and Pinecone required. Tested on Windows
whisper_streaming - Whisper realtime streaming for long speech-to-text transcription and translation
gpt-voice-conversation-chatbot - Allows you to have an engaging and safely emotive spoken / CLI conversation with the AI ChatGPT / GPT-4 while giving you the option to let it remember things discussed.
PaddleSpeech - Easy-to-use Speech Toolkit including Self-Supervised Learning model, SOTA/Streaming ASR with punctuation, Streaming TTS with text frontend, Speaker Verification System, End-to-End Speech Translation and Keyword Spotting. Won NAACL2022 Best Demo Award.
LiveWhisper - A nearly-live implementation of OpenAI's Whisper, using sounddevice. Requires existing Whisper install.