intel-extension-for-pytorch
sparsegpt
| | intel-extension-for-pytorch | sparsegpt |
|---|---|---|
| Mentions | 14 | 16 |
| Stars | 1,342 | 624 |
| Growth | 9.6% | 9.3% |
| Activity | 9.7 | 2.4 |
| Latest commit | 3 days ago | 22 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
intel-extension-for-pytorch
-
Efficient LLM inference solution on Intel GPU
OK I found it. Looks like they use SYCL (which for some reason they've rebranded to DPC++): https://github.com/intel/intel-extension-for-pytorch/tree/v2...
-
Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
Just to point out it does, kind of: https://github.com/intel/intel-extension-for-pytorch
I've asked before if they'll merge it back into PyTorch main and include it in the CI, not sure if they've done that yet.
-
Watch out AMD: Intel Arc A580 could be the next great affordable GPU
Intel already has a working GPGPU stack, using oneAPI/SYCL.
They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and Tensorflow using their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch which are actively developed and just recently brought up-to-date with upstream releases.
-
How to run Llama 13B with a 6GB graphics card
https://github.com/intel/intel-extension-for-pytorch :
> Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.
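As a minimal sketch of the xpu device path described in that README, assuming a supported Intel discrete GPU and an IPEX build with GPU support (the toy model and shapes here are illustrative):
```python
import torch
import intel_extension_for_pytorch as ipex  # registers the "xpu" device

# Illustrative toy model; any nn.Module works the same way.
model = torch.nn.Linear(128, 64).to("xpu").eval()
data = torch.randn(32, 128, device="xpu")

# ipex.optimize applies the hardware-specific optimizations the README
# mentions (operator fusion, weight prepacking, etc.).
model = ipex.optimize(model)

with torch.no_grad():
    out = model(data)
print(out.shape)  # torch.Size([32, 64])
```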
https://pytorch.org/blog/celebrate-pytorch-2.0/ :
> As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.
> The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP-based thread parallelization.
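For reference, the TorchInductor path needs no Intel-specific code; a minimal torch.compile sketch, assuming PyTorch >= 2.0 (the toy model is illustrative):
```python
import torch

model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, padding=1),
    torch.nn.ReLU(),
).eval()

# "inductor" is the default backend for torch.compile; on CPU it picks up
# the Conv/GEMM fusion and weight-prepacking work described above.
compiled = torch.compile(model)

with torch.no_grad():
    y = compiled(torch.randn(1, 3, 64, 64))
print(y.shape)  # torch.Size([1, 16, 64, 64])
```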
DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
-
Train Lora's on Arc GPUs?
Install Intel Extension for PyTorch using Docker: https://github.com/intel/intel-extension-for-pytorch
- Does it make sense to buy intel arc A770 16gb or AMD RX 7900 XT for machine learning?
-
PyTorch Intel HD Graphics 4600 card compatibility?
There is https://github.com/intel/intel-extension-for-pytorch for Intel GPUs, but I would assume this doesn't extend to integrated graphics.
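One way to find out is to query the device from Python; a quick sketch, assuming a recent IPEX build with GPU support (on unsupported integrated GPUs this should simply report no device):
```python
import torch
import intel_extension_for_pytorch  # noqa: F401 -- registers torch.xpu

print(torch.xpu.is_available())   # False if the GPU isn't supported
print(torch.xpu.device_count())
```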
-
Stable Diffusion Web UI for Intel Arc
Nonetheless, this issue might be relevant for your case.
- Does anyone use the Intel Arc A770 GPU for machine learning? [D]
-
Will ROCm finally get some love?
I'm not sure where the disdain for ROCm is coming from. tensorflow-rocm and the ROCm PyTorch container were fairly easy to set up and use from scratch once I had the correct Linux kernel installed, along with the rest of the ROCm components that TensorFlow and PyTorch need. To be fair, Intel Extension for TensorFlow wasn't too bad to set up either, except for the lack of float16 mixed-precision training support, which was a real pain point.
Intel Extension for PyTorch for Intel GPUs (a.k.a. IPEX-GPU), however, has been a PITA to use with my i5 11400H iGPU. Not because the iGPU itself is slow, but because the i915 driver in the mainline Linux kernel simply doesn't work with IPEX-GPU: every script I've run ends up freezing, even with i915 drivers as recent as kernel version 6. When I instead installed the drivers meant for Arc GPUs, IPEX-GPU finally worked, but I hit even more issues, such as poor FP64 emulation support that forced some really janky workarounds to keep things from breaking while emulation was enabled (disabling it simply wasn't an option for me, long story short). Unlike Intel, both Nvidia AND AMD support FP64 instructions and float16 mixed-precision training natively on their GPUs, so you never have to worry about "unsupported FP64 instructions" or "unsupported training modes", no matter what software you run.
sparsegpt
-
(1/2) May 2023
SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot (https://arxiv.org/abs/2301.00774)
- Why Falcon going Apache 2.0 is a BIG deal for all of us.
-
New Open-source LLMs! 🤯 The Falcon has landed! 7B and 40B
There is this: https://github.com/IST-DASLab/sparsegpt
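SparseGPT itself selects weights with a second-order, layer-wise reconstruction procedure; as a deliberately simplified illustration of what one-shot pruning means (not the paper's algorithm), here is plain magnitude pruning in PyTorch:
```python
import torch

def magnitude_prune(weight: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Zero out the smallest-magnitude `sparsity` fraction of weights."""
    k = int(weight.numel() * sparsity)
    if k == 0:
        return weight
    threshold = weight.abs().flatten().kthvalue(k).values
    return weight * (weight.abs() > threshold)

w = torch.randn(512, 512)
w_50 = magnitude_prune(w, 0.5)
print((w_50 == 0).float().mean().item())  # ~0.5
```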
-
Webinar: Running LLMs performantly on CPUs Utilizing Pruning and Quantization
Check the paper here, it's interesting: https://arxiv.org/abs/2301.00774
-
OpenAI chief goes before US Congress to propose licenses for building AI
There's no chance that we've peaked in a bang-for-buck sense - we still haven't adequately investigated sparse networks.
Relevantish: https://arxiv.org/abs/2301.00774
The fact that we can reach those levels of sparseness with pruning also indicates that we're not doing a very good job of generating the initial network conditions.
Being able to come up with trainable initial settings for sparse networks across different topologies is hard, but given that we've had a degree of success with pre-trained networks, pre-training and pre-pruning might also allow for sparse networks with minimally compromised learning capabilities.
If it's possible to pre-train composable network modules, it might also be feasible to define trainable sparse networks with significantly relaxed topological constraints.
-
How to run Llama 13B with a 6GB graphics card
Training uses gradient descent, so you want to have good precision during that process. But once you have the overall structure of the network, https://arxiv.org/abs/2210.17323 (GPTQ) showed that you can cut down the precision quite a bit without losing a lot of accuracy. It seems you can cut down further for larger models. For the 13B Llama-based ones, going below 5 bit per parameter is noticeably worse, but for 30B models you can do 4 bits.
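To make "cutting down the precision" concrete, here is a hedged sketch of naive symmetric round-to-nearest quantization; GPTQ itself is smarter, minimizing layer output error with second-order information, which is why it degrades less at low bit-widths:
```python
import torch

def quantize_rtn(w: torch.Tensor, bits: int) -> torch.Tensor:
    """Per-tensor symmetric round-to-nearest quantization to `bits` bits."""
    qmax = 2 ** (bits - 1) - 1
    scale = w.abs().max() / qmax
    return (w / scale).round().clamp(-qmax, qmax) * scale

w = torch.randn(4096, 4096)
for bits in (8, 5, 4):
    err = (w - quantize_rtn(w, bits)).abs().mean().item()
    print(f"{bits}-bit mean abs error: {err:.4f}")
```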
The same group did another paper https://arxiv.org/abs/2301.00774 which shows that in addition to reducing the precision of each parameter, you can also prune out a bunch of parameters entirely. It's harder to apply this optimization because models are usually loaded into RAM densely, but I hope someone figures out how to do it for popular models.
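A sketch of that storage issue using PyTorch's built-in CSR format; note that with the default int64 indices, CSR only pays off at fairly high sparsity, which is part of why exploiting ~50% unstructured sparsity in RAM is nontrivial:
```python
import torch

w = torch.randn(1024, 1024)
w[w.abs() < 1.65] = 0.0  # keep roughly the largest-magnitude 10%

w_csr = w.to_sparse_csr()
dense_bytes = w.numel() * w.element_size()
sparse_bytes = (
    w_csr.values().numel() * w_csr.values().element_size()
    + w_csr.col_indices().numel() * w_csr.col_indices().element_size()
    + w_csr.crow_indices().numel() * w_csr.crow_indices().element_size()
)
# At ~90% sparsity CSR wins; at ~50% it would be larger than dense,
# so dense loading stays the default for moderately pruned models.
print(f"dense: {dense_bytes} B, csr: {sparse_bytes} B")
```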
- SparseGPT: Language Models Can Be Accurately Pruned in One-Shot
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
StableLM - StableLM: Stability AI Language Models
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment
github-copilot-product-specific-terms
FastChat - An open platform for training, serving, and evaluating large language models. Release repo for Vicuna and Chatbot Arena.
promptfoo - Test your prompts, models, and RAGs. Catch regressions and improve prompt quality. LLM evals for OpenAI, Azure, Anthropic, Gemini, Mistral, Llama, Bedrock, Ollama, and other local & private models with CI/CD integration.
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
chat-ui - Open source codebase powering the HuggingChat app
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
geov - The GeoV model is a large language model designed by Georges Harik and uses Rotary Positional Embeddings with Relative distances (RoPER). We have shared a pre-trained 9B parameter model.
rocm-examples
coriander - Build NVIDIA® CUDA™ code for OpenCL™ 1.2 devices