intel-extension-for-pytorch VS bitsandbytes

Compare intel-extension-for-pytorch vs bitsandbytes and see what their differences are.

intel-extension-for-pytorch

A Python package for extending the official PyTorch that can easily obtain performance on Intel platforms (by intel)

bitsandbytes

Accessible large language models via k-bit quantization for PyTorch. (by TimDettmers)
                intel-extension-for-pytorch    bitsandbytes
Mentions        14                             61
Stars           1,342                          5,389
Growth          9.6%                           -
Activity        9.7                            9.4
Latest commit   3 days ago                     4 days ago
Language        Python                         Python
License         Apache License 2.0             MIT License
The number of mentions indicates the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

intel-extension-for-pytorch

Posts with mentions or reviews of intel-extension-for-pytorch. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-01-20.
  • Efficient LLM inference solution on Intel GPU
    3 projects | news.ycombinator.com | 20 Jan 2024
    OK I found it. Looks like they use SYCL (which for some reason they've rebranded to DPC++): https://github.com/intel/intel-extension-for-pytorch/tree/v2...
  • Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
    13 projects | news.ycombinator.com | 14 Dec 2023
    Just to point out it does, kind of: https://github.com/intel/intel-extension-for-pytorch

    I've asked before if they'll merge it back into PyTorch main and include it in the CI, not sure if they've done that yet.

  • Watch out AMD: Intel Arc A580 could be the next great affordable GPU
    2 projects | news.ycombinator.com | 6 Aug 2023
    Intel already has a working GPGPU stack, using oneAPI/SYCL.

    They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and Tensorflow using their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch which are actively developed and just recently brought up-to-date with upstream releases.

  • How to run Llama 13B with a 6GB graphics card
    12 projects | news.ycombinator.com | 14 May 2023
    https://github.com/intel/intel-extension-for-pytorch :

    > Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.

    https://pytorch.org/blog/celebrate-pytorch-2.0/ :

    > As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.

    > The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP-based thread parallelization.

    DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
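
    To make the quoted xpu-device usage concrete, here is a minimal sketch of the documented pattern, assuming an IPEX build with GPU/XPU support and an Intel discrete GPU (ipex.optimize and the "xpu" device are the documented entry points; the model and shapes are illustrative):

      import torch
      import torch.nn as nn
      import intel_extension_for_pytorch as ipex  # importing registers the "xpu" device

      # Illustrative model; any nn.Module follows the same pattern.
      model = nn.Sequential(nn.Linear(128, 64), nn.ReLU()).eval().to("xpu")
      data = torch.randn(32, 128, device="xpu")

      # ipex.optimize applies the Conv/GEMM fusion and weight-prepacking
      # optimizations described above; dtype is optional (fp32 by default).
      model = ipex.optimize(model, dtype=torch.float16)

      with torch.no_grad():
          out = model(data.half())
      print(out.shape)  # torch.Size([32, 64])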

  • Train Lora's on Arc GPUs?
    2 projects | /r/IntelArc | 14 Apr 2023
    Install Intel Extension for PyTorch using Docker: https://github.com/intel/intel-extension-for-pytorch
  • Does it make sense to buy intel arc A770 16gb or AMD RX 7900 XT for machine learning?
    2 projects | /r/IntelArc | 7 Apr 2023
  • PyTorch Intel HD Graphics 4600 card compatibility?
    1 project | /r/pytorch | 4 Apr 2023
    There is https://github.com/intel/intel-extension-for-pytorch for Intel GPUs, but I would assume this doesn't extend to integrated graphics.
  • Stable Diffusion Web UI for Intel Arc
    7 projects | /r/IntelArc | 24 Feb 2023
    Nonetheless, this issue might be relevant for your case.
  • Does anyone uses Intel Arc A770 GPU for machine learning? [D]
    5 projects | /r/MachineLearning | 30 Nov 2022
  • Will ROCm finally get some love?
    3 projects | /r/Amd | 16 Nov 2022
    I'm not sure where the disdain for ROCm is coming from, but tensorflow-rocm and the rocm pytorch container were fairly easy to set up and use from scratch once I got the correct Linux kernel installed along with the rest of the necessary ROCm components. TBF, Intel Extension for Tensorflow wasn't too bad to set up either (except for the lack of float16 mixed-precision training support, which was definitely a pain point). Intel Extension for Pytorch for Intel GPUs (a.k.a. IPEX-GPU), however, has been a PITA to use with my i5 11400H iGPU, not because the iGPU itself is slow, but because the current i915 driver in the mainline Linux kernel simply doesn't work with IPEX-GPU (every script I've run ends up freezing, even with i915 drivers as recent as kernel version 6). And when I installed the drivers meant for the Arc GPUs, which finally got IPEX-GPU working, I ran into even more issues, such as sh*tty FP64 emulation support that forced some really janky workarounds to keep things from breaking while FP64 emulation was enabled (disabling it was simply not an option for me, long story short).

    And yeah, unlike Intel, both Nvidia and AMD actually support FP64 instructions and float16 mixed-precision training natively on their GPUs, so one doesn't have to worry about "unsupported FP64 instructions" or "unsupported training modes" no matter what software they're running.

bitsandbytes

Posts with mentions or reviews of bitsandbytes. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
  • French AI startup Mistral secures €2B valuation
    2 projects | news.ycombinator.com | 9 Dec 2023
    No. Without the inference code, the best we can have are guesses on its implementation, so the benchmark figures we can get could be quite wrong. It does seem better than Llama2-70B in my tests, which rely on the work done by Dmytro Dzhulgakov[0] and DiscoResearch[1].

    But the point of releasing on bittorrent is to see the effervescence in hobbyist research and early attempts at MoE quantization, which are already ongoing[2]. They are benefitting from the community.

    [0]: https://github.com/dzhulgakov/llama-mistral

    [1]: https://huggingface.co/DiscoResearch/mixtral-7b-8expert

    [2]: https://github.com/TimDettmers/bitsandbytes/tree/sparse_moe

  • Lora training with Kohya issue
    2 projects | /r/StableDiffusion | 6 Dec 2023
    CUDA SETUP: To manually override the PyTorch CUDA version please see: https://github.com/TimDettmers/bitsandbytes/blob/main/how_to_use_nonpytorch_cuda.md
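
    That guide boils down to pointing bitsandbytes at a specific CUDA runtime before it is imported. A minimal sketch (BNB_CUDA_VERSION is the mechanism described in that document; the version and paths here are illustrative):

      import os

      # Select the libbitsandbytes_cuda<version>.so binary explicitly, overriding
      # the CUDA version PyTorch was compiled against (e.g. "122" -> CUDA 12.2).
      os.environ["BNB_CUDA_VERSION"] = "122"

      # The matching CUDA libraries must also be on the loader path; in practice
      # this is exported in the shell *before* starting Python, e.g.:
      #   export LD_LIBRARY_PATH=/usr/local/cuda-12.2/lib64:$LD_LIBRARY_PATH

      import bitsandbytes as bnb  # must come after the override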
  • FLaNK Stack Weekly for 30 Oct 2023
    24 projects | dev.to | 30 Oct 2023
  • A comprehensive guide to running Llama 2 locally
    19 projects | news.ycombinator.com | 25 Jul 2023
    While on the surface, a 192GB Mac Studio seems like a great deal (it's not much more than a 48GB A6000!), there are several reasons why this might not be a good idea:

    * I assume most people have never used llama.cpp Metal w/ large models. It will drop to CPU speeds whenever the context window is full: https://github.com/ggerganov/llama.cpp/issues/1730#issuecomm... - while sure, this might be fixed in the future, it's been an issue since Metal support was added, and is a significant problem if you are actually trying to use it for inference. With 192GB of memory, you could probably run larger models w/o quantization, but I've never seen anyone post benchmarks of their experiences. Note that at that point, the limited memory bandwidth will be a big factor.

    * If you are planning on using Apple Silicon for ML/training, I'd also be wary. There are multi-year long open bugs in PyTorch[1], and most major LLM libs like deepspeed, bitsandbytes, etc don't have Apple Silicon support[2][3].

    You can see similar patterns w/ Stable Diffusion support [4][5] - support lagging by months, lots of problems and poor performance with inference, much less fine-tuning. You can apply this to basically any ML application you want (srt, tts, video, etc.).

    Macs are fine to poke around with, but if you actually plan to do more than run a small LLM and say "neat", especially for a business, recommending a Mac for anyone getting started w/ ML workloads is a bad take. (In general, for anyone getting started, unless you're just burning budget, renting cloud GPU is going to be the best cost/perf, although on-prem/local obviously has other advantages.)

    [1] https://github.com/pytorch/pytorch/issues?q=is%3Aissue+is%3A...

    [2] https://github.com/microsoft/DeepSpeed/issues/1580

    [3] https://github.com/TimDettmers/bitsandbytes/issues/485

    [4] https://github.com/AUTOMATIC1111/stable-diffusion-webui/disc...

    [5] https://forums.macrumors.com/threads/ai-generated-art-stable...

  • 4-bit inference 4.2x faster than 16-bit
    1 project | news.ycombinator.com | 11 Jul 2023
    Release notes: https://github.com/TimDettmers/bitsandbytes/releases/tag/0.4...
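
    For context, the 4-bit path shipped in that release is typically consumed through the Hugging Face transformers integration. A minimal sketch, assuming transformers with bitsandbytes support and a CUDA GPU (the checkpoint name is just an example):

      import torch
      from transformers import AutoModelForCausalLM, BitsAndBytesConfig

      bnb_config = BitsAndBytesConfig(
          load_in_4bit=True,                     # store weights in 4-bit
          bnb_4bit_quant_type="nf4",             # NormalFloat4 data type
          bnb_4bit_compute_dtype=torch.float16,  # compute in fp16
      )

      model = AutoModelForCausalLM.from_pretrained(
          "meta-llama/Llama-2-7b-hf",            # example checkpoint
          quantization_config=bnb_config,
          device_map="auto",
      )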
  • Found duplicate ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0']
    1 project | /r/LocalLLaMA | 29 Jun 2023
    Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
    ================================================================================
    bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
    CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
    ERROR: /usr/bin/python3: undefined symbol: cudaRuntimeGetVersion
    CUDA SETUP: libcudart.so path is None
    CUDA SETUP: Is seems that your cuda installation is not in your path. See https://github.com/TimDettmers/bitsandbytes/issues/85 for more information.
    CUDA SETUP: CUDA version lower than 11 are currently not supported for LLM.int8(). You will be only to use 8-bit optimizers and quantization routines!!
    CUDA SETUP: Highest compute capability among GPUs detected: 7.5
    CUDA SETUP: Detected CUDA version 00
    CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cpu.so...
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
      warn("The installed version of bitsandbytes was compiled without GPU support. "
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: /usr/lib64-nvidia did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths...
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/sys/fs/cgroup/memory.events /var/colab/cgroup/jupyter-children/memory.events')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('http'), PosixPath('//172.28.0.1'), PosixPath('8013')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('//colab.research.google.com/tun/m/cc48301118ce562b961b3c22d803539adc1e0c19/gpu-t4-s-1b6gsytv7z9le --tunnel_background_save_delay=10s --tunnel_periodic_background_save_frequency=30m0s --enable_output_coalescing=true --output_coalescing_required=true'), PosixPath('--logtostderr --listen_host=172.28.0.12 --target_host=172.28.0.12 --tunnel_background_save_url=https')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('/env/python')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: The following directories listed in your path were found to be non-existent: {PosixPath('module'), PosixPath('//ipykernel.pylab.backend_inline')}
      warn(msg)
    /usr/local/lib/python3.10/dist-packages/bitsandbytes/cuda_setup/main.py:149: UserWarning: WARNING: No libcudart.so found! Install CUDA or the cudatoolkit package (anaconda)!
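
    The log above boils down to bitsandbytes loading its CPU-only binary because no libcudart.so was found. A quick sanity check of the environment usually narrows this down before digging into bitsandbytes itself (a sketch, not an official diagnostic):

      import torch

      print(torch.version.cuda)         # CUDA version PyTorch was built with (None = CPU-only build)
      print(torch.cuda.is_available())  # False here would explain the CPU fallback above
      if torch.cuda.is_available():
          print(torch.cuda.get_device_capability())  # compute capability, cf. "7.5" in the log

      # bitsandbytes' own diagnostic, as the banner suggests:
      #   python -m bitsandbytes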
  • Having trouble using the multimodal tools.
    1 project | /r/oobaboogazz | 27 Jun 2023
    RuntimeError: CUDA Setup failed despite GPU being available. Inspect the CUDA SETUP outputs above to fix your environment! If you cannot find any issues and suspect a bug, please open an issue with detals about your environment: https://github.com/TimDettmers/bitsandbytes/issues
  • [TextGen WebUI] Service terminated error? (Screenshots in post)
    1 project | /r/Pygmalion_ai | 27 Jun 2023
  • Considering getting a Jetson AGX Orin.. anyone have experience with it?
    5 projects | /r/LocalLLaMA | 26 Jun 2023
  • How to disable the `bitsandbytes` intro message:
    1 project | /r/LocalLLaMA | 23 Jun 2023
    ===================================BUG REPORT===================================
    Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes and submit this information together with your error trace to: https://github.com/TimDettmers/bitsandbytes/issues
    ================================================================================
    bin /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda121.so
    CUDA_SETUP: WARNING! libcudart.so not found in any environmental path. Searching in backup paths...
    CUDA SETUP: CUDA runtime path found: /usr/local/cuda/lib64/libcudart.so
    CUDA SETUP: Highest compute capability among GPUs detected: 8.9
    CUDA SETUP: Detected CUDA version 121
    CUDA SETUP: Loading binary /usr/local/lib/python3.10/dist-packages/bitsandbytes/libbitsandbytes_cuda121.so...
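
    In the bitsandbytes versions current at the time, this banner was gated on an environment variable; setting it before the import suppresses the message (BITSANDBYTES_NOWELCOME is the variable that thread points at, but verify it against your installed version):

      import os
      os.environ["BITSANDBYTES_NOWELCOME"] = "1"  # must be set before the import below

      import bitsandbytes as bnb  # no intro banner printed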

What are some alternatives?

When comparing intel-extension-for-pytorch and bitsandbytes you can also consider the following projects:

llama-cpp-python - Python bindings for llama.cpp

GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ

openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment

accelerate - 🚀 A simple way to launch, train, and use PyTorch models on almost any device and distributed configuration, automatic mixed precision (including fp8), and easy-to-configure FSDP and DeepSpeed support

FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.

ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]

Dreambooth-Stable-Diffusion-cpu - Implementation of Dreambooth (https://arxiv.org/abs/2208.12242) with Stable Diffusion

rocm-examples

alpaca.cpp - Locally run an Instruction-Tuned Chat-Style LLM

stable-diffusion-webui-ipex-arc - A guide to Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui

llama.cpp - LLM inference in C/C++