serve VS openai-whisper-cpu

Compare serve vs openai-whisper-cpu and see what their differences are.

serve

Serve, optimize and scale PyTorch models in production (by pytorch)

openai-whisper-cpu

Improving transcription performance of OpenAI Whisper for CPU-based deployment (by MiscellaneousStuff)
                serve               openai-whisper-cpu
Mentions        11                  5
Stars           3,949               206
Growth          1.7%                -
Activity        9.6                 10.0
Latest commit   6 days ago          over 1 year ago
Language        Java                Jupyter Notebook
License         Apache License 2.0  MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

serve

Posts with mentions or reviews of serve. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-08-15.

openai-whisper-cpu

Posts with mentions or reviews of openai-whisper-cpu. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-14.
  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    I feel the same.

    For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see the other models in the link):

    Device  Model   Precision      Linear layer   Time
    GPU     medium  fp32           Linear          1.7 s
    CPU     medium  fp32           nn.Linear      60.7 s
    CPU     medium  qint8 (quant)  nn.Linear      23.1 s

    So the same model runs about 35.7x faster on the GPU (60.7 s / 1.7 s), and is still about 13.6x faster than the CPU-optimized model (23.1 s / 1.7 s).

    I was expecting around an order of magnitude of improvement. Then again, I do not know whether, in the case of this article, the entire model was on the GPU or just a fraction of it (22 layers), which might explain the result.

    [0] https://github.com/MiscellaneousStuff/openai-whisper-cpu

  • Whisper's AI Modular Future
    14 projects | news.ycombinator.com | 20 Feb 2023
    According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
  • [P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
    4 projects | /r/MachineLearning | 6 Nov 2022
    There is a very simple method built into PyTorch which can give you an over 3x speed improvement for the large model, and you could also combine it with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
  • [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
    8 projects | /r/MachineLearning | 28 Oct 2022
    For CPU inference, model quantization is a very easy method to apply, with great average speedups, and it is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, with 39M parameters, to large, with 1.5B). Refer to the table in the repo for the performance increases; a minimal sketch of the quantization call follows this list.
  • [P] OpenAI Whisper - 3x CPU Inference Speedup
    1 project | /r/MachineLearning | 27 Oct 2022
    GitHub
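
A minimal sketch of that dynamic-quantization approach, assuming the openai-whisper and torch packages are installed (the model size and audio file name here are illustrative):

    # Dynamic int8 quantization of Whisper's Linear layers for faster CPU
    # inference, as done in MiscellaneousStuff/openai-whisper-cpu.
    import torch
    import whisper

    # Load the FP32 model on the CPU.
    model = whisper.load_model("medium", device="cpu")

    # quantize_dynamic swaps each nn.Linear for a version that stores int8
    # weights and quantizes/dequantizes activations on the fly at inference.
    quantized = torch.quantization.quantize_dynamic(
        model, {torch.nn.Linear}, dtype=torch.qint8
    )

    result = quantized.transcribe("audio.wav")  # illustrative file name
    print(result["text"])

Dynamic quantization rewrites the weights at load time, so it needs no calibration data or retraining, which is what makes it such an easy win for CPU-bound transformer inference.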

What are some alternatives?

When comparing serve and openai-whisper-cpu you can also consider the following projects:

server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.

llama-cpp-python - Python bindings for llama.cpp

serving - A flexible, high-performance serving system for machine learning models

intel-extension-for-pytorch - A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms

JavaScriptClassifier - [Moved to: https://github.com/JonathanSum/JavaScriptClassifier]

whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

pinferencia - Python + Inference - Model Deployment library in Python. Simplest model inference server ever.

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.

buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.

deepsparse - Sparsity-aware deep learning inference runtime for CPUs