torchdynamo
A Python-level JIT compiler designed to make unmodified PyTorch programs faster. (by pytorch)
openai-whisper-cpu
Improving transcription performance of OpenAI Whisper for CPU based deployment (by MiscellaneousStuff)
|  | torchdynamo | openai-whisper-cpu |
|---|---|---|
| Mentions | 1 | 5 |
| Stars | 964 | 221 |
| Growth | 1.2% | - |
| Activity | 3.5 | 10.0 |
| Latest commit | 16 days ago | over 1 year ago |
| Language | Python | Jupyter Notebook |
| License | BSD 3-clause "New" or "Revised" License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
torchdynamo
Posts with mentions or reviews of torchdynamo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-10-28.
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 1), what is the easiest way to speed up inference (assume only PyTorch and primarily GPU but also some CPU)? I have been using ONNX and Torchscript but there is a bit of a learning curve and sometimes it can be tricky to get the model to actually work. Is there anything else worth trying? I am enthused by things like TorchDynamo (although I have not tested it extensively) due to its apparent ease of use. I also saw the post yesterday about Kernl using (OpenAI) Triton kernels to speed up transformer models which also looks interesting. Are things like SageMaker Neo or NeuralMagic worth trying? My only reservation with some of these is they still seem to be pretty model/architecture specific. I am a little reluctant to put much time into these unless I know others have had some success first.
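For reference, this is roughly what the standalone TorchDynamo API looked like at the time (the project has since been upstreamed into PyTorch as torch.compile / torch._dynamo). The toy function and the pass-through compiler backend below are illustrative assumptions, not code from the post:

```python
import torch
import torchdynamo  # standalone package; later upstreamed as torch._dynamo


def my_compiler(gm: torch.fx.GraphModule, example_inputs):
    # Receive the FX graph that TorchDynamo captured, print it for
    # inspection, and fall back to eager execution by returning the
    # unmodified forward as the "compiled" callable.
    gm.graph.print_tabular()
    return gm.forward


@torchdynamo.optimize(my_compiler)
def toy_example(x, y):
    # TorchDynamo traces this function at the Python bytecode level,
    # so the source does not need to be modified for tracing.
    return torch.cos(x) + torch.sin(y)


print(toy_example(torch.randn(8), torch.randn(8)))
```

The "apparent ease of use" the post mentions comes from exactly this pattern: a single decorator (or context manager) around unmodified eager-mode code, with the compiler backend swappable.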
openai-whisper-cpu
Posts with mentions or reviews of openai-whisper-cpu. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-14.
- How to run Llama 13B with a 6GB graphics card
I feel the same.
For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see other models in the link):
| Device | Model | Precision | Layer | Time (s) |
|---|---|---|---|---|
| GPU | medium | fp32 | nn.Linear | 1.7 |
| CPU | medium | fp32 | nn.Linear | 60.7 |
| CPU | medium | qint8 (quantized) | nn.Linear | 23.1 |
So the same model runs 35.7x faster on the GPU than on the CPU (60.7 s / 1.7 s), and is still 13.6x faster than the quantized, CPU-optimized model (23.1 s / 1.7 s).
I was expecting around an order of magnitude of improvement. Then again, I do not know whether, in the case of this article, the entire model was on the GPU or just a fraction of it (22 layers), which might explain the result.
[0] https://github.com/MiscellaneousStuff/openai-whisper-cpu
- Whisper's AI Modular Future
According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
- [P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
There is a very simple method built-in to PyTorch which can give you over 3x speed improvement for the large model, which you could also combine with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
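The "very simple method" the post refers to is dynamic quantization, which the linked repo applies to Whisper's nn.Linear layers. A minimal sketch, assuming the openai-whisper package is installed and an audio file named episode.mp3 exists (both are illustrative assumptions):

```python
import torch
import whisper  # assumes: pip install openai-whisper

# Load the fp32 model on CPU.
model = whisper.load_model("large", device="cpu")

# Replace every nn.Linear with a dynamically quantized int8 version;
# weights are stored as qint8 and activations are quantized on the fly.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

result = quantized_model.transcribe("episode.mp3")  # hypothetical input file
print(result["text"])
```

Note that dynamic quantization only helps CPU inference: the quantized kernels target CPU backends, which is why the repo frames this as a CPU deployment optimization.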
- [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For CPU inference, model quantization is a very easy-to-apply method with great average speedups, which is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, with 39M params, to large, with 1.5B params). Refer to the table below for the performance increases:
- [P] OpenAI Whisper - 3x CPU Inference Speedup
GitHub