openai-whisper-cpu vs modal-examples

| | openai-whisper-cpu | modal-examples |
|---|---|---|
| Mentions | 5 | 9 |
| Stars | 221 | 560 |
| Growth | - | 3.6% |
| Activity | 10.0 | 9.5 |
| Latest commit | over 1 year ago | 2 days ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openai-whisper-cpu
-
How to run Llama 13B with a 6GB graphics card
I feel the same.
For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see other models in the link):
| Device | Model | Precision | Module | Time |
|---|---|---|---|---|
| GPU | medium | fp32 | Linear | 1.7s |
| CPU | medium | fp32 | nn.Linear | 60.7s |
| CPU | medium | qint8 (quantized) | nn.Linear | 23.1s |
So the same model runs 60.7 / 1.7 ≈ 35.7 times faster on the GPU, and is still 23.1 / 1.7 ≈ 13.6 times faster than the CPU-optimized (quantized) model.
I was expecting around an order of magnitude of improvement. Then again, I do not know whether, in the case of this article, the entire model was on the GPU or just a fraction of it (22 layers), which might explain the result.
[0] https://github.com/MiscellaneousStuff/openai-whisper-cpu
-
Whisper's AI Modular Future
According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
There is a very simple method built into PyTorch that can give you an over 3x speed improvement for the large model, and you could combine it with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For CPU inference, model quantization is a very easy-to-apply method with great average speedups that is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, with 39M parameters, to large, with 1.5B). Refer to the openai-whisper-cpu repo linked above for the full table of performance increases.
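As a minimal sketch of the dynamic quantization call being described, applied here to a toy stack of Linear layers rather than to Whisper itself (the layer sizes are arbitrary):

```python
import torch

# A small float32 model standing in for the Linear-heavy parts of Whisper.
model = torch.nn.Sequential(
    torch.nn.Linear(512, 512),
    torch.nn.ReLU(),
    torch.nn.Linear(512, 512),
).eval()

# Replace nn.Linear weights with int8 versions; activations are
# quantized/dequantized dynamically at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 512)
with torch.inference_mode():
    out = quantized(x)
print(out.shape)
```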
-
[P] OpenAI Whisper - 3x CPU Inference Speedup
GitHub
modal-examples
-
Show HN: Real-time image autocomplete in <100 lines of code with SDXL Lightning
We made a small app for SDXL Lightning, running your own Python code on GPUs. It generates images in real time.
https://potatoes.ai/
We know there was a fal.ai post yesterday that got a lot of interest. We also made this demo yesterday but didn't share it; we just wanted to mention it as an alternative option for people who like running their own code and custom models instead of using a prebuilt API provider.
The backend code is open-source too and you can deploy it yourself: https://github.com/modal-labs/modal-examples/blob/main/06_gpu_and_ml/stable_diffusion/stable_diffusion_xl_lightning.py
-
Our startup has docs issues and it is costing us prospects. What things can you share to help us?
The startup I work at is relatively good at documentation engineering. We have written code to test the code snippets in docstrings (https://github.com/modal-labs/pytest-markdown-docs) and code to do synthetic monitoring of the examples in our examples repo (https://github.com/modal-labs/modal-examples). We are also diligent about using Python's warnings library to handle API deprecation, and we treat deprecation warnings as errors internally, ensuring our own code samples and examples stay up to date.
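A sketch of that deprecation pattern; old_endpoint and new_endpoint are hypothetical names for illustration, not Modal's actual API:

```python
import warnings

def new_endpoint(x):
    return x * 2

def old_endpoint(x):
    # Old APIs warn instead of breaking for external users...
    warnings.warn(
        "old_endpoint is deprecated; use new_endpoint instead",
        DeprecationWarning,
        stacklevel=2,
    )
    return new_endpoint(x)

if __name__ == "__main__":
    # ...but internal runs escalate the warning to an error, so any code
    # sample or example still calling old_endpoint fails loudly in CI.
    warnings.simplefilter("error", DeprecationWarning)
    try:
        old_endpoint(21)
    except DeprecationWarning as e:
        print(f"stale example caught: {e}")
```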
-
OpenLLaMA: An Open Reproduction of LLaMA
You can get it running with one Python script on Modal.com :)
https://github.com/modal-labs/modal-examples/blob/main/06_gp...
-
Whisper's AI Modular Future
This demo lets you choose the podcast, and is open-source: https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
https://github.com/modal-labs/modal-examples/tree/main/06_gp...
Transcribes 1hr of audio in roughly 1min, using parallelisation across CPUs.
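A rough sketch of that fan-out pattern, not the actual code from modal-examples; the two-minute chunk size, the "base" model, and the function names here are assumptions for illustration:

```python
import modal

app = modal.App("whisper-parallel-sketch")
image = (
    modal.Image.debian_slim()
    .apt_install("ffmpeg")
    .pip_install("openai-whisper")
)

@app.function(image=image)
def transcribe_segment(url: str, start: float, length: float) -> str:
    import subprocess
    import tempfile

    import whisper

    # Cut one chunk out of the episode with ffmpeg, then transcribe it.
    with tempfile.NamedTemporaryFile(suffix=".mp3") as f:
        subprocess.run(
            ["ffmpeg", "-y", "-ss", str(start), "-t", str(length),
             "-i", url, f.name],
            check=True, capture_output=True,
        )
        model = whisper.load_model("base")
        return model.transcribe(f.name)["text"]

@app.local_entrypoint()
def main(url: str, duration: float = 3600.0):
    chunk = 120.0  # fan each two-minute segment out to its own container
    starts = [i * chunk for i in range(int(duration // chunk))]
    pieces = transcribe_segment.starmap((url, s, chunk) for s in starts)
    print(" ".join(pieces))
```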
-
Show HN: PodText.ai – Search anything said on a podcast, Highlight text to play
This demo is open-source: https://github.com/modal-labs/modal-examples/tree/main/06_gp....
https://modal-labs--whisper-pod-transcriber-fastapi-app.moda...
-
Show HN: Stable Diffusion Pokémon Cards
It's become so easy to stitch together ML models, often without training most (or any) of them yourself.
*video demo:* https://youtu.be/mQsMuM8d4Qc
*cloud platform:* https://modal.com
*code*: https://github.com/modal-labs/modal-examples/tree/main/06_gp...
-
How can machine learning help us learn languages better?
Transcription - OpenAI just released Whisper. Check out what it can do with podcasts.
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
Here's the source code.
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
intel-extension-for-pytorch - A Python package for extending the official PyTorch that can easily obtain performance on Intel platform
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
WAAS - Whisper as a Service (GUI and API with queuing for OpenAI Whisper)
EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product