openai-whisper-cpu vs WAAS

| | openai-whisper-cpu | WAAS |
|---|---|---|
| Mentions | 5 | 12 |
| Stars | 221 | 1,738 |
| Growth | - | 2.3% |
| Activity | 10.0 | 7.0 |
| Latest commit | over 1 year ago | 3 days ago |
| Language | Jupyter Notebook | JavaScript |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openai-whisper-cpu
-
How to run Llama 13B with a 6GB graphics card
I feel the same.
For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see other models in the link):
| Device | Model | Precision | Layer type | Time (s) |
|---|---|---|---|---|
| GPU | medium | fp32 | nn.Linear | 1.7 |
| CPU | medium | fp32 | nn.Linear | 60.7 |
| CPU | medium | qint8 (quantized) | nn.Linear | 23.1 |
So the same model runs 35.7 times faster on the GPU than on the CPU, and is still 13.6 times faster than the CPU-optimized (quantized) model.
I was expecting around an order of magnitude of improvement. Then again, I do not know whether, in the case of this article, the entire model was on the GPU or just a fraction of it (22 layers), which might explain the result.
[0] https://github.com/MiscellaneousStuff/openai-whisper-cpu
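Timings like these can be reproduced roughly as follows. This is a minimal sketch using the openai-whisper package; the audio path and the measurement loop are placeholders of mine, not taken from the linked benchmark:

```python
# Minimal sketch: time the medium model on GPU vs. CPU.
# Assumes the openai-whisper package; "audio.wav" is a placeholder ~30 s clip.
import time

import torch
import whisper

AUDIO = "audio.wav"  # placeholder path

for device in ("cuda", "cpu"):
    if device == "cuda" and not torch.cuda.is_available():
        continue  # skip the GPU run on machines without CUDA
    model = whisper.load_model("medium", device=device)
    start = time.perf_counter()
    model.transcribe(AUDIO, fp16=(device == "cuda"))
    print(f"{device}: {time.perf_counter() - start:.1f}s")
```

This measures end-to-end transcription time for the whole clip on each available device, which is enough to see whether the GPU/CPU gap on your hardware is in the same ballpark as the table above.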
-
Whispers AI Modular Future
According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
There is a very simple method built into PyTorch which can give you a more than 3x speed improvement for the large model, and which you could also combine with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For CPU inference, model quantization is an easy-to-apply method with good average speedups, and it is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, with 39M parameters, to large, with 1.5B parameters). Refer to the table in the repo linked above for the performance increases.
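Dynamic quantization itself is nearly a one-liner. The following is a minimal sketch of how it can be applied to Whisper's Linear layers, assuming the openai-whisper package; the audio path is a placeholder and this is an illustration of the technique, not the exact code of the linked project:

```python
# Minimal sketch: PyTorch dynamic quantization applied to Whisper on CPU.
# Assumes the openai-whisper package; "audio.wav" is a placeholder file.
import torch
import whisper

model = whisper.load_model("medium", device="cpu")

# Replace every nn.Linear with a dynamically quantized (qint8) version:
# weights are stored as int8 and dequantized on the fly at matmul time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

result = quantized.transcribe("audio.wav", fp16=False)
print(result["text"])
```

Dynamic quantization only rewrites the supported layer types (here nn.Linear), needs no calibration data, and works directly on a pretrained checkpoint, which is what makes it so cheap to try.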
-
[P] OpenAI Whisper - 3x CPU Inference Speedup
WAAS
-
Show HN: Minutes – Save up to 20% of salespeople time
This app does it locally on newer Macs for free.
https://apps.apple.com/no/app/jojo-transcribe/id1659864300?m...
And open source: https://github.com/schibsted/WAAS
-
Only I don't use Docker. What alternatives to a Docker install are there for WAAS?
Clone the repo, then run the steps in the Dockerfile: https://github.com/schibsted/WAAS/blob/main/Dockerfile
-
Whispers AI Modular Future
What utilities related to Whisper do you wish existed? What have you had to build yourself?
On the end-user application side, I wish there was something that let me pick a podcast of my choosing, get it fully transcribed, and get embeddings search plus answer Q&A on top of that podcast or set of chosen podcasts. I've seen ones for specific podcasts, but I'd like one where I can choose the podcast. (Probably won't build it.)
Also on the end user side, I wish there was an Otter alternative (still paid $30/mo, but unlimited minutes per month) that had longer transcription limits. (Started building this, not much interest from users though)
Things I've seen on the dev tool side:
Gladia (API call version of Whisper)
Whisper.cpp
Whisper webservice (https://github.com/ahmetoner/whisper-asr-webservice) - via this thread
Live microphone demo (not real time, it still does it in chunks) https://github.com/mallorbc/whisper_mic
Streamlit UI https://github.com/hayabhay/whisper-ui
Whisper playground https://github.com/saharmor/whisper-playground
Real time whisper https://github.com/shirayu/whispering
Whisper as a service https://github.com/schibsted/WAAS
Improved timestamps and speaker identification https://github.com/m-bain/whisperX
MacWhisper https://goodsnooze.gumroad.com/l/macwhisper
Crossplatform desktop Whisper that supports semi-realtime https://github.com/chidiwilliams/buzz
- [task] Write out the questions and responses from my podcast videos so I can turn it into a written article for my site - $5 per video x 15 videos = $75 - $80
- Self-host Whisper As a Service with GUI and queueing. Schibsted created a transcription service for our journalists to transcribe audio interviews and podcasts really quickly.
- Show HN: Self-host Whisper As a Service with GUI and queueing
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
intel-extension-for-pytorch - A Python package for extending the official PyTorch that can easily obtain performance on Intel platform
whisper_mic - Project that allows one to use a microphone with OpenAI whisper.
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
whisper.cpp - Port of OpenAI's Whisper model in C/C++
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
whisper-playground - Build real time speech2text web apps using OpenAI's Whisper https://openai.com/blog/whisper/
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
frogbase - Transform audio-visual content into navigable knowledge.
kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
whisper-asr-webservice - OpenAI Whisper ASR Webservice API