| | WhisperLive | mlx-examples |
|---|---|---|
| Mentions | 4 | 31 |
| Stars | 1,287 | 5,194 |
| Growth | 19.2% | 10.6% |
| Activity | 9.4 | 9.7 |
| Latest commit | 18 days ago | 1 day ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
WhisperLive
- Show HN: WhisperFusion – Ultra-low latency conversations with an AI chatbot
  Everything runs locally; we use WhisperLive for the transcription - https://github.com/collabora/WhisperLive
- WhisperSpeech – An Open Source text-to-speech system built by inverting Whisper
  Check out WhisperLive: https://github.com/collabora/WhisperLive
  If you're grappling with the slow march from cool tech demos to real-world language model apps, you might wanna check out WhisperLive. It's this rad open-source project that's all about leveraging Whisper models for slick live transcription. Think real-time, on-the-fly translated captions for those global meetups. It's a neat example of practical, user-focused tech in action. Dive into the details on their GitHub page.
- Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
  https://github.com/collabora/WhisperLive
  This is another one that uses Hugging Face's implementation, but I haven't tried it since my spec doesn't support flash-att2.
- Triple Threat: The Power of Transcription, Summary, and Translation
  Curious to see how this works? Check out our demo page - https://col.la/transcription - to generate your own transcription, summary, and translation, or use our browser extension - https://github.com/collabora/WhisperLive - to get live transcriptions.
mlx-examples
- MLX-Whisper
- FLaNK AI Weekly for 29 April 2024
- DBRX on Apple MLX
- Why the M2 is more advanced than it seemed
- MLX: Speculative Decoding
- Mixtral on MLX
- Qwen on MLX
- FLaNK Weekly 18 Dec 2023
- MLX: Fine-tune Llama 7B or Mistral 7B with 32GB
- Whisper: Nvidia RTX 4090 vs. M1 Pro with MLX
  I was able to get it running on MLX on my M2 Max machine within a couple of minutes using their example: https://github.com/ml-explore/mlx-examples/tree/main/whisper
What are some alternatives?
cog-whisper-diarization - Cog implementation of transcribing + diarization pipeline with Whisper & Pyannote
llama-cpp-python - Python bindings for llama.cpp
whisper-writer - A small dictation app using OpenAI's Whisper speech recognition model.
obs-zoom-and-follow - Dynamic zoom and mouse tracking script for OBS Studio
FLaNK-OpenAi - Chat
gpt_chatbot - This chatbot lets you use your microphone to communicate with GPT-4. It uses the OpenAI text to speech to respond with a voice. It uses Pinecone to store long term information and retrieves it to create context. API keys for OpenAI and Pinecone required. Tested on Windows
MemGPT - Create LLM agents with long-term memory and custom tools
whisper_streaming - Whisper realtime streaming for long speech-to-text transcription and translation
furnace - a multi-system chiptune tracker compatible with DefleMask modules
gpt-voice-conversation-chatbot - Allows you to have an engaging and safely emotive spoken / CLI conversation with the AI ChatGPT / GPT-4 while giving you the option to let it remember things discussed.
FLaNK-ContinuousSQL