openai-whisper-cpu vs transformer-deploy

| | openai-whisper-cpu | transformer-deploy |
|---|---|---|
| Mentions | 5 | 8 |
| Stars | 221 | 1,623 |
| Growth | - | 0.9% |
| Activity | 10.0 | 6.8 |
| Latest commit | over 1 year ago | 7 months ago |
| Language | Jupyter Notebook | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
openai-whisper-cpu
-
How to run Llama 13B with a 6GB graphics card
I feel the same.
For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see other models in the link):
| Device | Model | Precision | Linear layer | Time (s) |
|---|---|---|---|---|
| GPU | medium | fp32 | Linear | 1.7 |
| CPU | medium | fp32 | nn.Linear | 60.7 |
| CPU | medium | qint8 (quantized) | nn.Linear | 23.1 |
So the same model runs 35.7 times faster on GPU, and is still 13.6 times faster than the CPU-optimized (quantized) model.
I was expecting around an order of magnitude of improvement. Then again, I do not know whether, in the case of this article, the entire model was on the GPU or just a fraction of it (22 layers), which might explain the result.
[0] https://github.com/MiscellaneousStuff/openai-whisper-cpu
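A quick check of those ratios (an illustrative snippet, not taken from the linked repository):

```python
# Speedups implied by the timings quoted above (all in seconds per run).
gpu_fp32 = 1.7
cpu_fp32 = 60.7
cpu_qint8 = 23.1

print(f"GPU vs plain CPU:     {cpu_fp32 / gpu_fp32:.1f}x")   # ~35.7x
print(f"GPU vs quantized CPU: {cpu_qint8 / gpu_fp32:.1f}x")  # ~13.6x
print(f"Quantization on CPU:  {cpu_fp32 / cpu_qint8:.1f}x")  # ~2.6x
```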
-
Whispers AI Modular Future
According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
-
[P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
There is a very simple method built into PyTorch which can give you an over 3x speed improvement for the large model, and which you could also combine with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For CPU inference, model quantization is a very easy method to apply, with great average speedups, and it is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, with 39M params, to large, with 1.5B params). Refer to the table below for the performance increases:
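(The results table is in the original post.) As a minimal sketch of the dynamic-quantization call being described, here applied to a Whisper model loaded through the `openai-whisper` package; the audio file name is a placeholder, and the exact code in the linked repository may differ:

```python
import torch
import whisper  # pip install openai-whisper

# Load the fp32 model on CPU.
model = whisper.load_model("medium", device="cpu").eval()

# Dynamic quantization: nn.Linear weights are stored as int8 (qint8) and
# activations are quantized on the fly, so no calibration data is needed.
quantized_model = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Transcribe as usual; fp16=False because inference runs on CPU.
result = quantized_model.transcribe("audio.mp3", fp16=False)  # placeholder file
print(result["text"])
```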
-
[P] OpenAI Whisper - 3x CPU Inference Speedup
transformer-deploy
-
[D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
For 2), I am aware of a few options. Triton Inference Server is an obvious one, as is the ‘transformer-deploy’ version from LDS. My only reservation here is that they require model compilation or are architecture-specific. I am aware of others like Bento, Ray Serve and TorchServe. Ideally I would have something that allows any PyTorch model to be used without the extra compilation effort (or at least optionally), and that has some convenience features: ease of use, easy deployment, easy hosting of multiple models, and some dynamic batching. Anyway, I am really interested to hear people's experience here, as I know there are now quite a few options! Any help is appreciated! Disclaimer: I have no affiliation with, and am not connected in any way to, the libraries or companies listed here. These are just the ones I know of. Thanks in advance.
-
[P] Up to 12X faster GPU inference on Bert, T5 and other transformers with OpenAI Triton kernels
We work for Lefebvre Sarrut, a leading European legal publisher. Several of our products include transformer models in latency-sensitive scenarios (search, content recommendation). So far, ONNX Runtime and TensorRT have served us well, and we learned interesting patterns along the way that we shared with the community through an open-source library called transformer-deploy. However, recent changes in our environment made our needs evolve:
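For readers unfamiliar with that stack, here is a minimal sketch (not transformer-deploy's actual pipeline) of the ONNX Runtime path mentioned above: export a Hugging Face model to ONNX, then serve it through an ONNX Runtime session. The model name and file path are illustrative assumptions:

```python
import torch
import onnxruntime as ort
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative checkpoint; any encoder-style Hugging Face model works similarly.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
# return_dict=False makes the model emit tuples, which export more cleanly.
model = AutoModelForSequenceClassification.from_pretrained(name, return_dict=False).eval()

# Export to ONNX with dynamic batch and sequence axes.
dummy = tokenizer("export example", return_tensors="pt")
torch.onnx.export(
    model,
    (dummy["input_ids"], dummy["attention_mask"]),
    "model.onnx",
    input_names=["input_ids", "attention_mask"],
    output_names=["logits"],
    dynamic_axes={
        "input_ids": {0: "batch", 1: "seq"},
        "attention_mask": {0: "batch", 1: "seq"},
        "logits": {0: "batch"},
    },
    opset_version=13,
)

# Run inference through ONNX Runtime with whatever providers are available
# (CUDAExecutionProvider on a GPU box, CPUExecutionProvider otherwise).
session = ort.InferenceSession("model.onnx", providers=ort.get_available_providers())
inputs = tokenizer("a latency-sensitive query", return_tensors="np")
logits = session.run(
    ["logits"],
    {"input_ids": inputs["input_ids"], "attention_mask": inputs["attention_mask"]},
)[0]
print(logits)
```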
-
Convert Pegasus model to ONNX [Discussion]
here you will find a notebook for T5 on GPU with some tricks to make it fast: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/generative-model/t5.ipynb
-
[P] What we learned by benchmarking TorchDynamo (PyTorch team), ONNX Runtime and TensorRT on transformers model (inference)
Check the notebook https://github.com/ELS-RD/transformer-deploy/blob/main/demo/TorchDynamo/benchmark.ipynb for detailed results, but what we will keep in mind:
-
[P] What we learned by making T5-large 2X faster than Pytorch (and any autoregressive transformer)
notebook: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/generative-model/t5.ipynb (Onnx Runtime only)
-
[P] 4.5 times faster Hugging Face transformer inference by modifying some Python AST
Regarding CPU inference, quantization is very easy to apply and is supported by Transformer-deploy; however, performance on transformers is very low outside of corner cases (no batching, very short sequences, distilled models), and latest-generation Intel CPU instances like C6 or M6 on AWS are quite expensive compared to a cheap GPU like an Nvidia T4. Put differently, unless you are OK with slow inference on a small instance (for a PoC, for instance), CPU inference for transformers is probably not a good idea.
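To see the favourable corner case described above in practice, here is a minimal benchmark sketch using plain PyTorch dynamic quantization (not Transformer-deploy's own tooling) on a distilled model at batch size 1 with a short sequence; the model name and run count are illustrative assumptions:

```python
import time
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

# Illustrative distilled model; timings vary with hardware and PyTorch version.
name = "distilbert-base-uncased-finetuned-sst-2-english"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForSequenceClassification.from_pretrained(name).eval()

# Dynamic int8 quantization of the linear layers, as discussed above.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

# Batch size 1, very short sequence: the favourable corner case.
inputs = tokenizer("a short sentence", return_tensors="pt")

def mean_latency(m, runs=50):
    with torch.inference_mode():
        start = time.perf_counter()
        for _ in range(runs):
            m(**inputs)
    return (time.perf_counter() - start) / runs

print(f"fp32:  {mean_latency(model) * 1e3:.1f} ms/request")
print(f"qint8: {mean_latency(quantized) * 1e3:.1f} ms/request")
```

With larger batches, longer sequences, or bigger models, the int8 advantage typically shrinks, which is the corner-case caveat the comment describes.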
-
[P] First ever tuto to perform *GPU* quantization on 🤗 Hugging Face transformer models -> 2X faster inference
The end to end tutorial: https://github.com/ELS-RD/transformer-deploy/blob/main/demo/quantization_end_to_end.ipynb
-
[P] Python library to optimize Hugging Face transformer for inference: < 0.5 ms latency / 2850 infer/sec
Want to try it 👉 https://github.com/ELS-RD/transformer-deploy
What are some alternatives?
intel-extension-for-pytorch - A Python package that extends the official PyTorch to easily obtain performance gains on Intel platforms
TensorRT - NVIDIA® TensorRT™ is an SDK for high-performance deep learning inference on NVIDIA GPUs. This repository contains the open source components of TensorRT.
llama-cpp-python - Python bindings for llama.cpp
FasterTransformer - Transformer related optimization, including BERT, GPT
whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)
torch2trt - An easy to use PyTorch to TensorRT converter
FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.
TensorRT - PyTorch/TorchScript/FX compiler for NVIDIA GPUs using TensorRT
buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.
OpenSeeFace - Robust realtime face and facial landmark tracking on CPU with Unity integration
kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.
mmrazor - OpenMMLab Model Compression Toolbox and Benchmark.