intel-extension-for-pytorch VS openai-whisper-cpu

Compare intel-extension-for-pytorch vs openai-whisper-cpu and see how they differ.

intel-extension-for-pytorch

A Python package that extends the official PyTorch to easily obtain extra performance on Intel platforms (by intel)

openai-whisper-cpu

Improving transcription performance of OpenAI Whisper for CPU-based deployment (by MiscellaneousStuff)
             intel-extension-for-pytorch   openai-whisper-cpu
Mentions     14                            5
Stars        1,342                         221
Growth       9.6%                          -
Activity     9.7                           10.0
Last commit  3 days ago                    over 1 year ago
Language     Python                        Jupyter Notebook
License      Apache License 2.0            MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

intel-extension-for-pytorch

Posts with mentions or reviews of intel-extension-for-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-20.
  • Efficient LLM inference solution on Intel GPU
    3 projects | news.ycombinator.com | 20 Jan 2024
    OK I found it. Looks like they use SYCL (which for some reason they've rebranded to DPC++): https://github.com/intel/intel-extension-for-pytorch/tree/v2...
  • Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
    13 projects | news.ycombinator.com | 14 Dec 2023
    Just to point out it does, kind of: https://github.com/intel/intel-extension-for-pytorch

    I've asked before if they'll merge it back into PyTorch main and include it in the CI, not sure if they've done that yet.

  • Watch out AMD: Intel Arc A580 could be the next great affordable GPU
    2 projects | news.ycombinator.com | 6 Aug 2023
    Intel already has a working GPGPU stack, using oneAPI/SYCL.

    They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and Tensorflow using their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch which are actively developed and just recently brought up-to-date with upstream releases.

  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    https://github.com/intel/intel-extension-for-pytorch :

    > Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.

    https://pytorch.org/blog/celebrate-pytorch-2.0/ :

    > As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.

    > The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch* for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP-based thread parallelization.

    DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
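    (A minimal sketch of the ipex.optimize / xpu workflow quoted here follows at the end of this list.)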

  • Train Lora's on Arc GPUs?
    2 projects | /r/IntelArc | 14 Apr 2023
    Install Intel Extension for PyTorch using Docker. https://github.com/intel/intel-extension-for-pytorch
  • Does it make sense to buy intel arc A770 16gb or AMD RX 7900 XT for machine learning?
    2 projects | /r/IntelArc | 7 Apr 2023
  • PyTorch Intel HD Graphics 4600 card compatibility?
    1 project | /r/pytorch | 4 Apr 2023
    There is: https://github.com/intel/intel-extension-for-pytorch for Intel GPUs, but I would assume this doesn't extend to integrated graphics
  • Stable Diffusion Web UI for Intel Arc
    7 projects | /r/IntelArc | 24 Feb 2023
    Nonetheless, this issue might be relevant for your case.
  • Does anyone uses Intel Arc A770 GPU for machine learning? [D]
    5 projects | /r/MachineLearning | 30 Nov 2022
  • Will ROCm finally get some love?
    3 projects | /r/Amd | 16 Nov 2022
    I'm not sure where the disdain for ROCm is coming from, but tensorflow-rocm and the rocm pytorch container were fairly easy to set up and use from scratch once I got the correct Linux kernel installed along with the rest of the necessary ROCm components needed to use tensorflow and pytorch for rocm. TBF Intel Extension for Tensorflow wasn't too bad to set up either (except for the lack of float16 mixed precision training support, that was definitely a pain point), but Intel Extension for Pytorch for Intel GPUs (a.k.a. IPEX-GPU) has been a PITA to use for my i5 11400H iGPU, NOT because the iGPU itself is slow, BUT because the current i915 driver in the mainline Linux kernel simply doesn't work with IPEX-GPU (every script that I've run ends up freezing, even with i915 drivers as recent as kernel version 6). And when I installed the drivers that were meant for the Arc GPUs, which finally got IPEX-GPU to work, I ended up with even more issues, such as sh*tty FP64 emulation support that basically meant I had to do some really janky workarounds for things to not break while FP64 emulation was enabled (disabling it was simply not an option for me, long story short). And yeah, unlike Intel, both Nvidia AND AMD actually do support FP64 instructions AND float16 mixed precision training natively on their GPUs, so one doesn't have to worry about running into "unsupported FP64 instructions" and "unsupported training modes" no matter what software they're running on those GPUs.
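
Taken together, the posts above boil down to a two-part workflow: ipex.optimize() for CPU inference and the extension's "xpu" device for Intel discrete GPUs. A minimal sketch, assuming torch, torchvision, and a matching intel-extension-for-pytorch build are installed; the resnet50 model and input shape are placeholders:

    import torch
    import torchvision.models as models
    import intel_extension_for_pytorch as ipex

    # Placeholder model and input; any eval-mode nn.Module works the same way.
    model = models.resnet50(weights=None).eval()
    data = torch.rand(1, 3, 224, 224)

    # CPU path: ipex.optimize applies operator-level optimizations such as
    # weight prepacking; bfloat16 engages AVX-512 VNNI / AMX kernels where
    # the CPU supports them.
    opt_model = ipex.optimize(model, dtype=torch.bfloat16)
    with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
        out = opt_model(data)

    # GPU path: GPU-enabled builds of the extension register an "xpu" device
    # with PyTorch, so Intel discrete GPUs are driven via the usual .to() idiom.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        xpu_model = models.resnet50(weights=None).eval().to("xpu")
        with torch.no_grad():
            out = xpu_model(data.to("xpu"))

On PyTorch 2.x, torch.compile can be layered on top of this for the TorchInductor graph-compilation path the pytorch.org quote describes.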

openai-whisper-cpu

Posts with mentions or reviews of openai-whisper-cpu. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-14.
  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    I feel the same.

    For example, some stats from Whisper [0] (audio transcription) show the following for the medium model (see other models in the link):

    Device   Model    Precision       Layer        Time (s)
    GPU      medium   fp32            Linear       1.7
    CPU      medium   fp32            nn.Linear    60.7
    CPU      medium   qint8 (quant)   nn.Linear    23.1

    So the same model runs 35.7 times faster on GPU, and is still 13.6 times faster than the CPU-optimized (quantized) model.

    I was expecting around an order of magnitude of improvement. Then again, I do not know if in the case of this article the entire model was on the GPU, or just a fraction of it (22 layers), which might explain the result.

    [0] https://github.com/MiscellaneousStuff/openai-whisper-cpu

  • Whispers AI Modular Future
    14 projects | news.ycombinator.com | 20 Feb 2023
    According to https://github.com/MiscellaneousStuff/openai-whisper-cpu the medium model needs 1.7 seconds to transcribe 30 seconds of audio when run on a GPU.
  • [P] Transcribe any podcast episode in just 1 minute with optimized OpenAI/whisper
    4 projects | /r/MachineLearning | 6 Nov 2022
    There is a very simple method built into PyTorch which can give you an over 3x speed improvement for the large model, and which you could also combine with the method proposed in this post. https://github.com/MiscellaneousStuff/openai-whisper-cpu
  • [D] How to get the fastest PyTorch inference and what is the "best" model serving framework?
    8 projects | /r/MachineLearning | 28 Oct 2022
    For CPU inference, model quantization is a very easy-to-apply method with great average speedups, and it is already built into PyTorch. For example, I applied dynamic quantization to the OpenAI Whisper model (speech recognition) across a range of model sizes (from tiny, with 39M params, to large, with 1.5B params). Refer to the below table for performance increases:
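    (A minimal sketch of the quantize_dynamic call follows at the end of this list.)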
  • [P] OpenAI Whisper - 3x CPU Inference Speedup
    1 project | /r/MachineLearning | 27 Oct 2022
    GitHub
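
The "very simple method built into PyTorch" that these posts describe is dynamic quantization, which openai-whisper-cpu applies to Whisper's linear layers. A minimal sketch on a toy stand-in model, using only stock PyTorch (the layer sizes are placeholders); note that quantize_dynamic swaps modules by exact type, so a real model's linear layers must be plain torch.nn.Linear for the call to match them:

    import torch
    import torch.nn as nn

    # Toy stand-in for a transformer feed-forward block; sizes are placeholders.
    model = nn.Sequential(
        nn.Linear(512, 2048),
        nn.ReLU(),
        nn.Linear(2048, 512),
    ).eval()

    # Dynamic quantization: weights are converted to int8 ahead of time and
    # activations are quantized on the fly at inference time. CPU only.
    quantized = torch.quantization.quantize_dynamic(
        model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.rand(1, 512)
    with torch.no_grad():
        print(quantized(x).shape)  # torch.Size([1, 512])

This trades a small amount of accuracy for the roughly 3x CPU speedup the posts report on the larger Whisper models.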

What are some alternatives?

When comparing intel-extension-for-pytorch and openai-whisper-cpu you can also consider the following projects:

llama-cpp-python - Python bindings for llama.cpp

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

whisperX - WhisperX: Automatic Speech Recognition with Word-level Timestamps (& Diarization)

ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

FlexGen - Running large language models on a single GPU for throughput-oriented scenarios.

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

buzz - Buzz transcribes and translates audio offline on your personal computer. Powered by OpenAI's Whisper.

rocm-examples

kernl - Kernl lets you run PyTorch transformer models several times faster on GPU with a single line of code, and is designed to be easily hackable.

stable-diffusion-webui-ipex-arc - A guide to Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui

BentoML - The most flexible way to serve AI/ML models in production - Build Model Inference Service, LLM APIs, Inference Graph/Pipelines, Compound AI systems, Multi-Modal, RAG as a Service, and more!