Intel-extension-for-pytorch Alternatives
Similar projects and alternatives to intel-extension-for-pytorch
- openai-whisper-cpu: Improving transcription performance of OpenAI Whisper for CPU-based deployment
- FastChat: An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
- ROCm: Discontinued AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
- bitsandbytes: Accessible large language models via k-bit quantization for PyTorch.
- intel-extension-for-tensorflow: Intel® Extension for TensorFlow*
- stable-diffusion-webui-ipex-arc: A guide to Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui
- Cgml: GPU-targeted vendor-agnostic AI library for Windows, and Mistral model implementation.
- sparsegpt: Code for the ICML 2023 paper "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot".
- diffusers: 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch
- stable-diffusion-tensorflow: Stable Diffusion in TensorFlow / Keras
- stable-diffusion-webui: Stable Diffusion web UI
- text-generation-webui: A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
- wonnx: A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
- coriander: Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices
intel-extension-for-pytorch reviews and mentions
- Efficient LLM inference solution on Intel GPU
  OK I found it. Looks like they use SYCL (which for some reason they've rebranded to DPC++): https://github.com/intel/intel-extension-for-pytorch/tree/v2...
- Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
  Just to point out, it does, kind of: https://github.com/intel/intel-extension-for-pytorch
  I've asked before whether they'll merge it back into PyTorch main and include it in the CI; I'm not sure if they've done that yet.
- Watch out AMD: Intel Arc A580 could be the next great affordable GPU
  Intel already has a working GPGPU stack, using oneAPI/SYCL.
  They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and TensorFlow via their custom extensions (https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch), which are actively developed and were recently brought up to date with upstream releases. A quick way to confirm that alignment is sketched below.
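One simple check that the extension build actually matches the installed PyTorch release is to compare the two version strings. This is a minimal sketch, assuming both packages are already installed; it relies only on the packages' own `__version__` attributes.

```python
# Minimal sketch: confirm the IPEX build lines up with the installed PyTorch release.
# Assumes both packages are installed; IPEX releases track the matching torch release.
import torch
import intel_extension_for_pytorch as ipex

print("torch:", torch.__version__)
print("ipex:", ipex.__version__)
```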
- How to run Llama 13B with a 6GB graphics card
  https://github.com/intel/intel-extension-for-pytorch :
  > Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs, as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.
  https://pytorch.org/blog/celebrate-pytorch-2.0/ :
  > As part of the PyTorch 2.0 compilation stack, the TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode. The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of the OpenMP-based thread parallelization.
  DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
  A rough sketch of the CPU-side optimization path described in the first quote follows below.
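As a rough illustration of that CPU path (a sketch assuming intel_extension_for_pytorch is installed on an Intel CPU, not the thread's actual Llama setup), the usual pattern is to pass an eval-mode model through ipex.optimize and run it, optionally under bfloat16 autocast. The tiny model here is a placeholder, not anything from the original discussion.

```python
# Hedged sketch of the CPU optimization path from the README quote above.
# The toy model is a stand-in; real gains depend on AVX-512 VNNI / AMX support.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Sequential(
    torch.nn.Linear(1024, 1024),
    torch.nn.ReLU(),
    torch.nn.Linear(1024, 1024),
).eval()

# ipex.optimize applies weight prepacking and operator fusion for Intel CPUs;
# dtype=torch.bfloat16 is optional and only pays off on bf16-capable hardware.
optimized = ipex.optimize(model, dtype=torch.bfloat16)

with torch.no_grad(), torch.cpu.amp.autocast(dtype=torch.bfloat16):
    out = optimized(torch.randn(8, 1024))
print(out.shape)  # torch.Size([8, 1024])
```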
- Train LoRAs on Arc GPUs?
  Install Intel Extension for PyTorch using Docker: https://github.com/intel/intel-extension-for-pytorch (a quick post-install sanity check is sketched below).
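Assuming the torch.xpu namespace mirrors the CUDA one the way the IPEX documentation describes, a check along these lines should show whether the Arc GPU is visible inside the container once the install finishes; the printed device name is illustrative, not from the thread.

```python
# Hedged sanity check after installing IPEX inside the container (or via pip).
# Importing the extension is what registers the "xpu" backend with PyTorch.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401

if torch.xpu.is_available():
    print("xpu device:", torch.xpu.get_device_name(0))
else:
    print("no xpu device visible; check the GPU driver and oneAPI runtime")
```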
- Does it make sense to buy an Intel Arc A770 16GB or an AMD RX 7900 XT for machine learning?
- Stable Diffusion Web UI for Intel Arc
  Nonetheless, this issue might be relevant for your case.
- Does anyone use an Intel Arc A770 GPU for machine learning? [D]
  Intel publishes extensions for both PyTorch and TensorFlow. I've been working with PyTorch, so I just needed to follow these instructions to get everything set up (a minimal usage sketch follows below).
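For the PyTorch side, that setup usually boils down to importing the extension and moving the model and inputs to the xpu device. This is a minimal sketch assuming an Arc GPU with working drivers; the toy linear layer stands in for a real workload.

```python
# Minimal sketch of running a model on an Arc GPU through IPEX's "xpu" device.
# The tiny linear layer and random input are placeholders for illustration only.
import torch
import intel_extension_for_pytorch as ipex

model = torch.nn.Linear(512, 512).eval().to("xpu")
model = ipex.optimize(model)  # optional kernel/graph optimizations on top

x = torch.randn(4, 512, device="xpu")
with torch.no_grad():
    y = model(x)
print(y.device)  # expected: xpu:0
```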
- Will ROCm finally get some love?
  I'm not sure where the disdain for ROCm comes from: tensorflow-rocm and the ROCm PyTorch container were fairly easy to set up and use from scratch once I had the right Linux kernel installed along with the rest of the ROCm components that TensorFlow and PyTorch need. To be fair, Intel Extension for TensorFlow wasn't too bad to set up either, except for the missing float16 mixed-precision training support, which was a real pain point.
  Intel Extension for PyTorch for Intel GPUs (a.k.a. IPEX-GPU), however, has been a pain to use with my i5 11400H iGPU, not because the iGPU itself is slow, but because the i915 driver in the mainline Linux kernel simply doesn't work with IPEX-GPU: every script I ran ended up freezing, even on i915 drivers as recent as kernel version 6. When I installed the drivers meant for the Arc GPUs, which finally got IPEX-GPU working, I ran into further issues such as poor FP64 emulation support, which forced some janky workarounds to keep things from breaking while emulation was enabled (disabling it was simply not an option for me, long story short). Unlike Intel, both NVIDIA and AMD support FP64 instructions and float16 mixed-precision training natively on their GPUs, so you never have to worry about "unsupported FP64 instructions" or "unsupported training modes" no matter what software you run on them. A crude dtype probe along these lines is sketched below.
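If you hit the same walls, one heavily hedged way to see which dtypes actually execute on a given xpu device before committing to a training configuration is a probe like the following; it uses only plain tensor operations, and fp64 may fail or fall back to slow emulation on Arc and iGPU parts, as described above.

```python
# Crude, hedged probe: check which dtypes actually run on the xpu device.
# fp64 may fail (or rely on slow emulation) on Arc / iGPU parts, as noted above.
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401

for dtype in (torch.float32, torch.bfloat16, torch.float16, torch.float64):
    try:
        a = torch.randn(64, 64, device="xpu").to(dtype)
        (a @ a).sum().item()  # force execution and a copy back to host
        print(f"{dtype}: ok")
    except Exception as err:  # surfaces unsupported-dtype / driver errors
        print(f"{dtype}: failed ({err})")
```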
Stats
intel/intel-extension-for-pytorch is an open source project licensed under the Apache License 2.0, which is an OSI-approved license.
The primary programming language of intel-extension-for-pytorch is Python.
Popular Comparisons
- intel-extension-for-pytorch VS llama-cpp-python
- intel-extension-for-pytorch VS openai-whisper-cpu
- intel-extension-for-pytorch VS FastChat
- intel-extension-for-pytorch VS bitsandbytes
- intel-extension-for-pytorch VS ROCm
- intel-extension-for-pytorch VS rocm-examples
- intel-extension-for-pytorch VS stable-diffusion-webui-ipex-arc
- intel-extension-for-pytorch VS intel-extension-for-tensorflow
- intel-extension-for-pytorch VS sparsegpt
- intel-extension-for-pytorch VS stable-diffusion-tensorflow