openai-whisper-cpu VS modal-examples

Compare openai-whisper-cpu vs modal-examples and see what their differences are.

openai-whisper-cpu

Improving transcription performance of OpenAI Whisper for CPU-based deployment (by MiscellaneousStuff)
                  openai-whisper-cpu   modal-examples
Mentions          5                    9
Stars             221                  560
Growth            -                    3.6%
Activity          10.0                 9.5
Last Commit       over 1 year ago      2 days ago
Language          Jupyter Notebook     Python
License           MIT License          MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

openai-whisper-cpu

Posts with mentions or reviews of openai-whisper-cpu. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-14.
  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    I feel the same.

    For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see other models in the link):

    Device  Model   Precision          Layer       Time
    GPU     medium  fp32               Linear      1.7 s
    CPU     medium  fp32               nn.Linear   60.7 s
    CPU     medium  qint8 (quantized)  nn.Linear   23.1 s

    So the same model runs about 35.7 times faster on the GPU than on the CPU (60.7 s / 1.7 s ≈ 35.7), and is still about 13.6 times faster than the quantized, CPU-optimized model (23.1 s / 1.7 s ≈ 13.6).

    I was expecting around an order of magnitude of improvement. Then again, I do not know whether, in the case of this article, the entire model was on the GPU or just a fraction of it (22 layers), which might explain the result.

    [0] https://github.com/MiscellaneousStuff/openai-whisper-cpu

  • Whisper's AI Modular Future
    14 projects | news.ycombinator.com | 20 Feb 2023
    According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
  • [P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
    4 projects | /r/MachineLearning | 6 Nov 2022
    There is a very simple method built into PyTorch that can give you over a 3x speed improvement for the large model, and you could also combine it with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
  • [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
    8 projects | /r/MachineLearning | 28 Oct 2022
    For CPU inference, model quantization is a very easy-to-apply method with great average speedups that is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, with 39M params, to large, with 1.5B params). Refer to the table in the post for the performance increases. (A minimal sketch of the quantization call follows this list.)
  • [P] OpenAI Whisper - 3x CPU Inference Speedup
    1 project | /r/MachineLearning | 27 Oct 2022
    GitHub
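
The quantization approach referenced in several of the posts above is PyTorch's built-in dynamic quantization, which is what the linked repository applies to Whisper. Below is a minimal sketch of that idea; the model size ("medium"), the sample audio file name, and the timing harness are illustrative assumptions, not taken from the posts.

```python
# Minimal sketch: dynamic quantization of Whisper's nn.Linear layers for CPU
# inference, in the spirit of the posts above. Assumes `pip install openai-whisper`.
# The model size, audio file, and timing harness are illustrative assumptions.
import time

import torch
import whisper

# Load the fp32 model on CPU.
model = whisper.load_model("medium", device="cpu")

# Dynamic quantization: weights of nn.Linear layers are converted to qint8;
# activations are quantized on the fly at inference time.
quantized = torch.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

def timed_transcribe(m, audio_path):
    # fp16=False avoids the fp16-on-CPU warning in Whisper.
    start = time.perf_counter()
    text = m.transcribe(audio_path, fp16=False)["text"]
    return time.perf_counter() - start, text

fp32_s, _ = timed_transcribe(model, "sample.wav")
qint8_s, _ = timed_transcribe(quantized, "sample.wav")
print(f"fp32: {fp32_s:.1f}s | qint8: {qint8_s:.1f}s | speedup: {fp32_s / qint8_s:.1f}x")
```

The one-call nature of quantize_dynamic is why the posts describe it as "very easy to apply": no retraining or calibration pass is needed, only a set of layer types to quantize.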

modal-examples

Posts with mentions or reviews of modal-examples. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-16.

What are some alternatives?

When comparing openai-whisper-cpu and modal-examples you can also consider the following projects:

llama-cpp-python - Python bindings for llama.cpp

text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.

intel-extension-for-pytorch - A Python package for extending the official PyTorch that can easily obtain a performance boost on Intel platforms

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

WAAS - Whisper as a Service (GUI and API with queuing for OpenAI Whisper)

EasyLM - Large language models (LLMs) made easy, EasyLM is a one stop solution for pre-training, finetuning, evaluating and serving LLMs in JAX/Flax.

buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.

mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.

kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.

brev-cli - Connect your laptop to cloud computers. Follow to stay updated about our product