openai-whisper-cpu vs coriander
|  | openai-whisper-cpu | coriander |
|---|---|---|
| Mentions | 5 | 3 |
| Stars | 221 | 832 |
| Growth | - | - |
| Activity | 10.0 | 0.0 |
| Latest commit | over 1 year ago | 3 months ago |
| Language | Jupyter Notebook | LLVM |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.
openai-whisper-cpu
-
How to run Llama 13B with a 6GB graphics card
I feel the same.
For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see other models in the link):
| Device | Model | Precision | Linear layer | Time (s) |
|---|---|---|---|---|
| GPU | medium | fp32 | nn.Linear | 1.7 |
| CPU | medium | fp32 | nn.Linear | 60.7 |
| CPU | medium | qint8 (quantized) | nn.Linear | 23.1 |
So the same model runs 35.7 times faster on GPU, and is still 13.6 times faster than the quantized, CPU-optimized model.
I was expecting around an order of magnitude of improvement. Then again, I do not know whether, in the case of this article, the entire model was on the GPU or just a fraction of it (22 layers), which might explain the result.
[0] https://github.com/MiscellaneousStuff/openai-whisper-cpu
-
Whisper's AI Modular Future
According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
There is a very simple method built into PyTorch which can give you over a 3x speed improvement for the large model, and which you could also combine with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
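As a rough sketch of that built-in method (dynamic quantization of the Linear layers), assuming the standard openai-whisper package; the model size, audio file name, and loading details here are illustrative, not taken from the post:

```python
import torch
import whisper  # assumes the openai-whisper package is installed

# Load Whisper on the CPU; dynamic quantization is CPU-only.
model = whisper.load_model("large", device="cpu")

# Swap every nn.Linear for a dynamically quantized (qint8) version.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# fp16=False because half precision is not supported on CPU.
result = quantized.transcribe("episode.mp3", fp16=False)
print(result["text"])
```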
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For CPU inference, model quantization is a very easy-to-apply method with great average speedups, and it is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, at 39M parameters, to large, at 1.5B parameters). Refer to the table in the openai-whisper-cpu repo for the performance increases.
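A minimal sketch of what applying dynamic quantization and measuring the speedup can look like; the stand-in model, layer sizes, and timing harness are illustrative assumptions, not the benchmark behind the quoted numbers:

```python
import time
import torch
import torch.nn as nn

# Stand-in for a transformer-style stack of Linear layers; the real
# benchmark used Whisper itself, which is not reproduced here.
model = nn.Sequential(*[nn.Linear(1024, 1024) for _ in range(24)]).eval()

# The one-line, built-in method: weights are stored as qint8 and
# activations are quantized on the fly at inference time (CPU only).
qmodel = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(16, 1024)

def bench(m, iters=50):
    # Average wall-clock time per forward pass.
    with torch.inference_mode():
        start = time.perf_counter()
        for _ in range(iters):
            m(x)
    return (time.perf_counter() - start) / iters

print(f"fp32:  {bench(model):.4f} s/iter")
print(f"qint8: {bench(qmodel):.4f} s/iter")
```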
-
[P] OpenAI Whisper - 3x CPU Inference Speedup
coriander
-
How to run Llama 13B with a 6GB graphics card
-
Is it possible to virtualize a CUDA processor?
It’s not a full implementation of CUDA and requires some contortions to use, but https://github.com/hughperkins/coriander is as good as anything else I’ve tried. It has been a few years, though.
-
EVGA will no longer make NVIDIA GPUs due to “disrespectful treatment” - Dexerto
It’s possible to run CUDA on anything. There have been attempts to do this: https://github.com/hughperkins/coriander. Unfortunately, it seems development has stalled.
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
intel-extension-for-pytorch - A Python package for extending the official PyTorch to easily obtain performance gains on Intel platforms
gptq - Code for the ICLR 2023 paper "GPTQ: Accurate Post-training Quantization of Generative Pretrained Transformers".
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
RadeonClockEnforcer - AHK script that forces maximum clocks while important applications are open. Automates OverdriveNTool's clock/voltage switching functionality for GPU and VRAM, with the purpose of enforcing maximum clocks while whitelisted applications are in focus.
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
HIPIFY - HIPIFY: Convert CUDA to Portable C++ Code [Moved to: https://github.com/ROCm/HIPIFY]
kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
sparsegpt - Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".