intel-extension-for-pytorch VS ROCm

Compare intel-extension-for-pytorch vs ROCm and see what their differences are.

intel-extension-for-pytorch

A Python package that extends the official PyTorch for easy performance gains on Intel platforms (by intel)

ROCm

AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm] (by RadeonOpenCompute)
                 intel-extension-for-pytorch   ROCm
Mentions         14                            198
Stars            1,342                         3,637
Stars growth     9.6%                          -
Activity         9.7                           0.0
Latest commit    3 days ago                    5 months ago
Language         Python                        Python
License          Apache License 2.0            MIT License
The number of mentions indicates the total number of mentions that we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

intel-extension-for-pytorch

Posts with mentions or reviews of intel-extension-for-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-20.
  • Efficient LLM inference solution on Intel GPU
    3 projects | news.ycombinator.com | 20 Jan 2024
    OK I found it. Looks like they use SYCL (which for some reason they've rebranded to DPC++): https://github.com/intel/intel-extension-for-pytorch/tree/v2...
  • Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
    13 projects | news.ycombinator.com | 14 Dec 2023
    Just to point out it does, kind of: https://github.com/intel/intel-extension-for-pytorch

    I've asked before if they'll merge it back into PyTorch main and include it in the CI, not sure if they've done that yet.

  • Watch out AMD: Intel Arc A580 could be the next great affordable GPU
    2 projects | news.ycombinator.com | 6 Aug 2023
    Intel already has a working GPGPU stack, using oneAPI/SYCL.

    They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and Tensorflow using their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch which are actively developed and just recently brought up-to-date with upstream releases.

  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    https://github.com/intel/intel-extension-for-pytorch :

    > Intel® Extension for PyTorch extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs.

    https://pytorch.org/blog/celebrate-pytorch-2.0/ :

    > As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.

    > The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP-based thread parallelization.

    DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
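The optimization flow the quoted README describes boils down to wrapping an eval-mode model with `ipex.optimize`. A minimal sketch, not Intel's official recipe, with the import guarded so it degrades to a no-op where IPEX is not installed:

```python
# Hedged sketch of the ipex.optimize() flow described above. The import is
# guarded so the helper simply returns the model unchanged when
# intel_extension_for_pytorch is not installed.
try:
    import intel_extension_for_pytorch as ipex
except ImportError:
    ipex = None

def optimize_for_intel(model):
    """Apply IPEX kernel optimizations to an eval-mode model, if available."""
    if ipex is None:
        return model  # plain PyTorch path, nothing to do
    model.eval()
    return ipex.optimize(model)  # fuses Conv/GEMM post-ops, prepacks weights
```

On a machine without IPEX this returns the model untouched, so the same script can run on Intel and non-Intel hosts.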

  • Train Lora's on Arc GPUs?
    2 projects | /r/IntelArc | 14 Apr 2023
    Install Intel Extension for PyTorch using Docker: https://github.com/intel/intel-extension-for-pytorch
  • Does it make sense to buy intel arc A770 16gb or AMD RX 7900 XT for machine learning?
    2 projects | /r/IntelArc | 7 Apr 2023
  • PyTorch Intel HD Graphics 4600 card compatibility?
    1 project | /r/pytorch | 4 Apr 2023
    There is https://github.com/intel/intel-extension-for-pytorch for Intel GPUs, but I would assume this doesn't extend to integrated graphics.
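Whether a given Intel GPU (discrete or integrated) is actually usable can be probed through the `xpu` device that IPEX registers with PyTorch. A hedged sketch that falls back to CPU when either package, or the hardware, is absent:

```python
# Hedged sketch: probe for the 'xpu' device that IPEX registers with PyTorch.
# Falls back to 'cpu' when torch, IPEX, or a visible Intel GPU is missing.
def best_device():
    """Return 'xpu' if an IPEX-visible Intel GPU exists, else 'cpu'."""
    try:
        import torch
        import intel_extension_for_pytorch  # noqa: F401  (registers 'xpu')
        if torch.xpu.is_available():
            return "xpu"
    except (ImportError, AttributeError):
        pass
    return "cpu"
```

This is also a cheap way to confirm the integrated-graphics question above: on an unsupported iGPU the probe would simply report `cpu`.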
  • Stable Diffusion Web UI for Intel Arc
    7 projects | /r/IntelArc | 24 Feb 2023
    Nonetheless, this issue might be relevant for your case.
  • Does anyone uses Intel Arc A770 GPU for machine learning? [D]
    5 projects | /r/MachineLearning | 30 Nov 2022
  • Will ROCm finally get some love?
    3 projects | /r/Amd | 16 Nov 2022
    I'm not sure where the disdain for ROCm is coming from. tensorflow-rocm and the ROCm PyTorch container were fairly easy to set up and use from scratch once I got the correct Linux kernel installed, along with the rest of the ROCm components needed for tensorflow and pytorch. To be fair, Intel Extension for TensorFlow wasn't too bad to set up either (except for the lack of float16 mixed-precision training support, which was definitely a pain point).

    Intel Extension for PyTorch for Intel GPUs (a.k.a. IPEX-GPU), however, has been a PITA to use with my i5 11400H iGPU. Not because the iGPU itself is slow, but because the current i915 driver in the mainline Linux kernel simply doesn't work with IPEX-GPU: every script I've run ends up freezing, even with i915 drivers as recent as kernel version 6. And when I installed the drivers meant for the Arc GPUs, which finally got IPEX-GPU working, I ran into even more issues, such as sh*tty FP64 emulation support that forced some really janky workarounds to keep things from breaking while FP64 emulation was enabled (disabling it was simply not an option for me, long story short).

    Unlike Intel, both Nvidia and AMD actually support FP64 instructions and float16 mixed-precision training natively on their GPUs, so you don't have to worry about "unsupported FP64 instructions" or "unsupported training modes" no matter what software you run on them.

ROCm

Posts with mentions or reviews of ROCm. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-10-06.
  • AMD May Get Across the CUDA Moat
    8 projects | news.ycombinator.com | 6 Oct 2023
    Yep, did exactly that. IMO he threw a fit, even though AMD was working with him squashing bugs. https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...
  • ROCm 5.7.0 Release
    1 project | /r/ROCm | 26 Sep 2023
  • ROCm Is AMD's #1 Priority, Executive Says
    5 projects | news.ycombinator.com | 26 Sep 2023
    Ok, I wonder what's wrong. Maybe it's this? https://stackoverflow.com/questions/4959621/error-1001-in-cl...

    Nope. Anything about this on the Arch wiki? Nope.

    This bug report[2] from 2021? Maybe I need to update my groups.

    [2]: https://github.com/RadeonOpenCompute/ROCm/issues/1411

        $ ls -la /dev/kfd
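The `/dev/kfd` check quoted above is the usual first diagnostic for the linked groups issue. A hedged sketch (the group names are the common defaults, not guaranteed) that reproduces the same triage in Python:

```python
import os

def check_kfd():
    """Report why /dev/kfd (the ROCm compute interface) may be inaccessible."""
    path = "/dev/kfd"
    if not os.path.exists(path):
        return "missing: amdgpu/KFD driver not loaded"
    if os.access(path, os.R_OK | os.W_OK):
        return "ok"
    gid = os.stat(path).st_gid
    try:
        import grp  # Unix-only
        group = grp.getgrgid(gid).gr_name  # usually 'render' or 'video'
    except (ImportError, KeyError):
        group = str(gid)
    return f"denied: add your user to group '{group}' and log in again"
```

The "denied" branch corresponds to the fix suggested in the 2021 bug report: adding yourself to the device node's owning group.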
  • Simplifying GPU Application Development with HMM
    2 projects | news.ycombinator.com | 29 Aug 2023
    HMM is, I believe, a Linux feature.

    AMD added HMM support in ROCm 5.0 according to this: https://github.com/RadeonOpenCompute/ROCm/blob/develop/CHANG...

  • AMD Ryzen APU turned into a 16GB VRAM GPU and it can run Stable Diffusion
    3 projects | news.ycombinator.com | 17 Aug 2023
    Woot, AMD now supports APUs? I sold my notebook as I hit a wall when trying ROCm [1]. Is there a list of working APUs?

    [1] https://github.com/RadeonOpenCompute/ROCm/issues/1587

  • Nvidia's CUDA Monopoly
    3 projects | news.ycombinator.com | 7 Aug 2023
    Last I heard he's abandoned working with AMD products.

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

  • Nvidia H100 GPUs: Supply and Demand
    2 projects | news.ycombinator.com | 1 Aug 2023
    They're talking about the meltdown he had on stream [1] (in front of the mentioned pirate flag), that ended with him saying he'd stop using AMD hardware [2]. He recanted this two weeks after talking with AMD [3].

    Maybe he'll succeed, but this definitely doesn't scream stability to me. I'd be wary of investing money into his ventures (but then I'm not a VC, so what do I know).

    [1] https://www.youtube.com/watch?v=Mr0rWJhv9jU

    [2] https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    [3] https://twitter.com/realGeorgeHotz/status/166980346408248934...

  • Open or closed source Nvidia driver?
    1 project | /r/linux | 9 Jul 2023
    As for ROCm support on consumer devices, AMD won't even clarify which devices are supported. https://github.com/RadeonOpenCompute/ROCm/pull/1738
  • Why Nvidia Keeps Winning: The Rise of an AI Giant
    3 projects | news.ycombinator.com | 6 Jul 2023
    He flamed out, then came back after Lisa Su called him (lmao)

    https://geohot.github.io/blog/jekyll/update/2023/05/24/the-t...

    https://www.youtube.com/watch?v=Mr0rWJhv9jU

    https://github.com/RadeonOpenCompute/ROCm/issues/2198#issuec...

    https://geohot.github.io/blog/jekyll/update/2023/06/07/a-div...

    On a personal level, that YouTube video doesn't make him come off looking good: people are trying to get patches to him and generally soothe him / do damage control, and he's just being a bit of a manchild. And it sounds like that's the general course of events around a lot of his "efforts".

    On the other hand he's not wrong either, having this private build inside AMD and not even validating official, supported configurations for the officially supported non-private builds they show to the world isn't a good look, and that's just the very start of the problems around ROCm. AMD's OpenCL runtime was never stable or good either and every experience I've heard with it was "we spent so much time fighting AMD-specific runtime bugs and specs jank that what we ended up with was essentially vendor-proprietary anyway".

    On the other other hand, it sounds like AMD know this is a mess and has some big stability/maturity improvements in the pipeline. It seems clear from some of the smoke coming out of the building that they're cooking on more general ROCm support for RDNA cards, and generally working to patch the maturity and stability issues he's talking about. I hate the "wait for drivers/new software release bro it's gonna fix everything" that surrounds AMD products but in this case I'm at least hopeful they seem to understand the problem, even if it's completely absurdly late.

    Some of what he was viewing as "the process happening in secret" was likely people doing rush patches on the latest build to accommodate him, and he comes off as berating them over it. Again, like, that stream just comes off as "mercurial manchild" not coding genius. And everyone knew the driver situation is bad, that's why there's notionally alpha for him to realize here in the first place. He's bumping into moneymakers, and getting mad about it.

  • Disable "SetTensor/CopyTensor" console logging.
    2 projects | /r/ROCm | 6 Jul 2023
    I tried to train another model using InceptionResNetV2 and the same issue happens. Also, this happens even with the model.predict() method when using the GPU. Probably this is an issue related to the AMD Radeon RX 6700 XT or some misconfiguration on my end.

    System information: Arch Linux 6.1.32-1-lts, AMD Radeon RX 6700 XT (gfx1031).

    Opened issues:
      • https://github.com/RadeonOpenCompute/ROCm/issues/2250
      • https://github.com/ROCmSoftwarePlatform/tensorflow-upstream/issues/2125

What are some alternatives?

When comparing intel-extension-for-pytorch and ROCm you can also consider the following projects:

llama-cpp-python - Python bindings for llama.cpp

tensorflow-directml - Fork of TensorFlow accelerated by DirectML

openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

rocm-arch - A collection of Arch Linux PKGBUILDS for the ROCm platform

bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.

oneAPI.jl - Julia support for the oneAPI programming toolkit.

rocm-examples

SHARK - SHARK - High Performance Machine Learning Distribution

stable-diffusion-webui-ipex-arc - A guide to Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui

llama.cpp - LLM inference in C/C++