intel-extension-for-tensorflow vs intel-extension-for-pytorch
| | intel-extension-for-tensorflow | intel-extension-for-pytorch |
|---|---|---|
| Mentions | 9 | 16 |
| Stars | 303 | 1,365 |
| Growth | -0.3% | 4.9% |
| Activity | 9.6 | 9.7 |
| Last commit | 3 days ago | 1 day ago |
| Language | C++ | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
intel-extension-for-tensorflow
- Watch out AMD: Intel Arc A580 could be the next great affordable GPU
Intel already has a working GPGPU stack, using oneAPI/SYCL.
They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and Tensorflow using their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch which are actively developed and just recently brought up-to-date with upstream releases.
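As a rough illustration of what that downstream TensorFlow support looks like in practice (a sketch only; it assumes the intel-extension-for-tensorflow pip package and Intel GPU drivers are installed, and device naming may vary by release), the plugin is documented to expose Intel GPUs to TensorFlow as "XPU" devices:

```python
import tensorflow as tf  # the extension plugin loads automatically once installed

# Intel GPUs show up as "XPU" pluggable devices when the extension is present.
xpus = tf.config.list_physical_devices("XPU")
print("XPU devices:", xpus)

if xpus:
    # Place a small matmul explicitly on the first Intel GPU.
    with tf.device("/XPU:0"):
        a = tf.random.uniform((1024, 1024))
        b = tf.random.uniform((1024, 1024))
        c = tf.matmul(a, b)
    print(c.device)
```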
- How do you allocate more than 4GB of memory for OpenCL in A770 16GB?
I tried Intel® Extension for PyTorch* v1.13.10+xpu and intel-extension-for-tensorflow
- I'm really happy with the card although the Ti version offers much better performance
Yeah, I recently stumbled on it when I was looking into buying a 16GB A770 and wondering what was possible now. GitHub: Intel Extension for TensorFlow
- Does anyone uses Intel Arc A770 GPU for machine learning? [D]
Intel publishes extensions for PyTorch and TensorFlow. I've been working with PyTorch, so I just needed to follow these instructions to get everything set up.
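For context, the setup that comment describes usually boils down to a few lines once the extension and GPU drivers are installed. This is a minimal sketch, assuming a recent XPU build of Intel Extension for PyTorch; the `torch.xpu.*` helpers and `ipex.optimize` follow the project's documented usage, but exact names and versions may differ:

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex  # registers the "xpu" device with PyTorch

# Check that the Arc GPU is visible through the extension (assumes drivers
# and the oneAPI runtime are already installed per Intel's instructions).
assert torch.xpu.is_available(), "no XPU device found; check driver/oneAPI setup"
print("XPU device count:", torch.xpu.device_count())

# Any eager-mode model works; a tiny MLP keeps the sketch self-contained.
model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10)).eval().to("xpu")
model = ipex.optimize(model)  # apply Intel-specific kernel/graph optimizations

x = torch.randn(32, 256, device="xpu")
with torch.no_grad():
    y = model(x)
print(y.shape, y.device)
```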
- Intel Extension for TensorFlow
- Intel Extension for TensorFlow Released
- SD on intel arc?
Actually, I was just on GitHub trying to submit issues related to my testing of Intel's PyTorch and TensorFlow extensions when I saw this. It seems someone has already ported SD over to the TensorFlow framework, so you can probably start using Intel's extension for TensorFlow with it immediately, and according to this article you can use Intel's extension within WSL under Windows as well. Unfortunately, the person whose issue I linked has been facing serious performance problems, with SD inference on an A770 taking many minutes longer than it should, so you might be better off waiting for version 1.2 or later of Intel's extension for TensorFlow, by which point Intel will hopefully have ironed out most of the major bugs :)
intel-extension-for-pytorch
- Intel Arc A770: Arrays larger than 4GB crashes
I have been playing around in PyTorch with an A770 16GB card and hit this error. The response seems to be https://github.com/intel/intel-extension-for-pytorch/issues/... that allocations larger than 4 GB aren't supported, even though the card has 16 GB. I haven't seen a ton of material on Intel Arc for machine learning, so I wanted to share my experience.
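Purely as an illustration of the limit being discussed (not a definitive reproducer; behavior depends on the driver and extension version), the kind of single allocation the linked issue reports failing is a tensor larger than 4 GB on the `xpu` device:

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the "xpu" device)

# ~2 GiB in one tensor: 512M float32 values. Expected to fit comfortably on a 16 GB A770.
small = torch.empty(512 * 1024 * 1024, dtype=torch.float32, device="xpu")

# ~5 GiB in one tensor: the kind of >4 GB single allocation the linked issue
# reports as unsupported at the time, despite the card having 16 GB total.
try:
    big = torch.empty(5 * 256 * 1024 * 1024, dtype=torch.float32, device="xpu")
    print("allocated", big.numel() * big.element_size() / 2**30, "GiB")
except RuntimeError as e:
    print("allocation failed:", e)
```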
- Efficient LLM inference solution on Intel GPU
OK I found it. Looks like they use SYCL (which for some reason they've rebranded to DPC++): https://github.com/intel/intel-extension-for-pytorch/tree/v2...
- Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
Just to point out it does, kind of: https://github.com/intel/intel-extension-for-pytorch
I've asked before if they'll merge it back into PyTorch main and include it in the CI; I'm not sure if they've done that yet.
- Watch out AMD: Intel Arc A580 could be the next great affordable GPU
Intel already has a working GPGPU stack, using oneAPI/SYCL.
They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and Tensorflow using their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch which are actively developed and just recently brought up-to-date with upstream releases.
- How to run Llama 13B with a 6GB graphics card
https://github.com/intel/intel-extension-for-pytorch :
> Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through the PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.
https://pytorch.org/blog/celebrate-pytorch-2.0/ :
> As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.
> The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP-based thread parallelization.
DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
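To make the CPU-side claims above concrete, here is a minimal sketch of the `ipex.optimize` bfloat16 inference path the README describes, assuming a recent CPU build of the extension on AMX/AVX-512-capable hardware (the model and shapes are placeholders, not part of the quoted material):

```python
import torch
import torch.nn as nn
import intel_extension_for_pytorch as ipex

# A small conv model stands in for a real workload; Conv/GEMM ops are where
# the extension's post-op fusion and weight prepacking apply.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
).eval()

# On CPUs with AMX / AVX-512 VNNI, bfloat16 inference is the documented fast path.
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.autocast("cpu", dtype=torch.bfloat16):
    y = model(x)
print(y.shape, y.dtype)
```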
- Train Lora's on Arc GPUs?
Install Intel Extension for PyTorch using Docker: https://github.com/intel/intel-extension-for-pytorch
- Does it make sense to buy intel arc A770 16gb or AMD RX 7900 XT for machine learning?
- PyTorch Intel HD Graphics 4600 card compatibility?
There is https://github.com/intel/intel-extension-for-pytorch for Intel GPUs, but I would assume this doesn't extend to integrated graphics.
- Stable Diffusion Web UI for Intel Arc
Nonetheless, this issue might be relevant for your case.
- Does anyone uses Intel Arc A770 GPU for machine learning? [D]
What are some alternatives?
stable-diffusion-tensorflow - Stable Diffusion in TensorFlow / Keras
llama-cpp-python - Python bindings for llama.cpp
FluidX3D - The fastest and most memory efficient lattice Boltzmann CFD software, running on all GPUs via OpenCL.
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment
OpenCL-Wrapper - OpenCL is the most powerful programming language ever created. Yet the OpenCL C++ bindings are cumbersome and the code overhead prevents many people from getting started. I created this lightweight OpenCL-Wrapper to greatly simplify OpenCL software development with C++ while keeping functionality and performance.
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
compute-runtime - Intel® Graphics Compute Runtime for oneAPI Level Zero and OpenCL™ Driver
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
diffusers - 🤗 Diffusers: State-of-the-art diffusion models for image and audio generation in PyTorch and FLAX.
rocm-examples