intel-extension-for-pytorch
FastChat
| | intel-extension-for-pytorch | FastChat |
|---|---|---|
| Mentions | 14 | 82 |
| Stars | 1,342 | 33,877 |
| Growth | 9.6% | 6.0% |
| Activity | 9.7 | 9.6 |
| Latest commit | 3 days ago | 2 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub.
Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
intel-extension-for-pytorch
-
Efficient LLM inference solution on Intel GPU
OK I found it. Looks like they use SYCL (which for some reason they've rebranded to DPC++): https://github.com/intel/intel-extension-for-pytorch/tree/v2...
-
Intel CEO: 'The entire industry is motivated to eliminate the CUDA market'
Just to point out it does, kind of: https://github.com/intel/intel-extension-for-pytorch
I've asked before if they'll merge it back into PyTorch main and include it in the CI; I'm not sure if they've done that yet.
-
Watch out AMD: Intel Arc A580 could be the next great affordable GPU
Intel already has a working GPGPU stack, using oneAPI/SYCL.
They also have arguably pretty good OpenCL support, as well as downstream support for PyTorch and TensorFlow via their custom extensions https://github.com/intel/intel-extension-for-tensorflow and https://github.com/intel/intel-extension-for-pytorch, which are actively developed and were recently brought up to date with upstream releases.
-
How to run Llama 13B with a 6GB graphics card
https://github.com/intel/intel-extension-for-pytorch :
> Intel® Extension for PyTorch* extends PyTorch* with up-to-date features and optimizations for an extra performance boost on Intel hardware. Optimizations take advantage of AVX-512 Vector Neural Network Instructions (AVX512 VNNI) and Intel® Advanced Matrix Extensions (Intel® AMX) on Intel CPUs as well as Intel Xe Matrix Extensions (XMX) AI engines on Intel discrete GPUs. Moreover, through PyTorch* xpu device, Intel® Extension for PyTorch* provides easy GPU acceleration for Intel discrete GPUs with PyTorch*.
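A minimal sketch of that workflow, assuming intel-extension-for-pytorch is installed alongside a matching PyTorch build (the model and input here are placeholders):

```python
import torch
import intel_extension_for_pytorch as ipex  # import registers the "xpu" device

# Placeholder model and input; any eval-mode PyTorch module works the same way.
model = torch.nn.Linear(128, 64).eval()
data = torch.randn(1, 128)

# GPU path: move model and data to an Intel discrete GPU via the xpu device
# (requires the GPU-enabled IPEX build and a supported driver).
if hasattr(torch, "xpu") and torch.xpu.is_available():
    model, data = model.to("xpu"), data.to("xpu")

# On either device, let IPEX apply its operator optimizations.
model = ipex.optimize(model)

with torch.no_grad():
    out = model(data)
```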
https://pytorch.org/blog/celebrate-pytorch-2.0/ :
> As part of the PyTorch 2.0 compilation stack, TorchInductor CPU backend optimization brings notable performance improvements via graph compilation over the PyTorch eager mode.
> The TorchInductor CPU backend is sped up by leveraging the technologies from the Intel® Extension for PyTorch for Conv/GEMM ops with post-op fusion and weight prepacking, and PyTorch ATen CPU kernels for memory-bound ops with explicit vectorization on top of OpenMP-based thread parallelization.
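For reference, opting into that compilation path is a one-liner in PyTorch 2.x; a generic sketch, not specific to Intel hardware:

```python
import torch

model = torch.nn.Sequential(torch.nn.Linear(64, 64), torch.nn.ReLU())

# TorchInductor is the default torch.compile backend in PyTorch 2.x;
# naming it explicitly here just makes the choice visible.
compiled = torch.compile(model, backend="inductor")

out = compiled(torch.randn(8, 64))  # first call triggers graph compilation
```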
DLRS Deep Learning Reference Stack: https://intel.github.io/stacks/dlrs/index.html
-
Train Lora's on Arc GPUs?
Install Intel Extension for PyTorch using Docker: https://github.com/intel/intel-extension-for-pytorch
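Once inside the container, a quick sanity check (a sketch, assuming the GPU-enabled IPEX build) confirms the Arc GPU is visible:

```python
import torch
import intel_extension_for_pytorch as ipex  # noqa: F401  (import registers the xpu device)

print(torch.xpu.is_available())      # True once the container sees the GPU
print(torch.xpu.device_count())      # number of visible Intel GPUs
print(torch.xpu.get_device_name(0))  # the card's name, e.g. an Arc A-series
```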
- Does it make sense to buy intel arc A770 16gb or AMD RX 7900 XT for machine learning?
-
PyTorch Intel HD Graphics 4600 card compatibility?
There is https://github.com/intel/intel-extension-for-pytorch for Intel GPUs, but I would assume this doesn't extend to integrated graphics.
-
Stable Diffusion Web UI for Intel Arc
Nonetheless, this issue might be relevant for your case.
- Does anyone uses Intel Arc A770 GPU for machine learning? [D]
-
Will ROCm finally get some love?
I'm not sure where the disdain for ROCm is coming from. tensorflow-rocm and the ROCm PyTorch container were fairly easy to set up and use from scratch once I had the correct Linux kernel installed, along with the rest of the ROCm components needed to run TensorFlow and PyTorch. To be fair, Intel Extension for TensorFlow wasn't too bad to set up either, apart from the lack of float16 mixed-precision training support, which was definitely a pain point.
Intel Extension for PyTorch for Intel GPUs (a.k.a. IPEX-GPU), however, has been a PITA to use with my i5 11400H iGPU. Not because the iGPU itself is slow, but because the current i915 driver in the mainline Linux kernel simply doesn't work with IPEX-GPU: every script I've run ends up freezing, even with i915 drivers as recent as kernel version 6. When I instead installed the drivers meant for Arc GPUs, which finally got IPEX-GPU working, I ran into even more issues, such as poor FP64 emulation support that forced some really janky workarounds to keep things from breaking while FP64 emulation was enabled (disabling it simply wasn't an option for me, long story short). Unlike Intel, both Nvidia and AMD support FP64 instructions and float16 mixed-precision training natively on their GPUs, so you never have to worry about "unsupported FP64 instructions" or "unsupported training modes" no matter what software you run on them.
FastChat
-
LLMs on your local Computer (Part 1)
FastChat
- FLaNK AI for 11 March 2024
- FLaNK 04 March 2024
- ChatGPT for Teams
- FastChat: An open platform for training and serving large language models
-
LM Studio – Discover, download, and run local LLMs
How does it compare with something like FastChat? https://github.com/lm-sys/FastChat
The feature sets seem to have a decent amount of overlap. One limitation of FastChat, as far as I can tell, is that you're limited to the models FastChat supports (though I think it would be a minor change to make it support arbitrary models?)
-
Video-LLaVA
Looks like the Vicuna repo is Apache 2.0 also[1].
What's the interpretation of copyright law that would prevent the code being Apache 2.0 based on the source of the fine-tuning dataset?
[1] https://github.com/lm-sys/FastChat
-
🔥🚀 Top 10 Open-Source Must-Have Tools for Crafting Your Own Chatbot 🤖💬
Check out how to get started with FastChat, and support FastChat on GitHub ⭐
-
Show HN: ChatAPI – PWA to Use ChatGPT by API Build with Alpine.js
For something a little heavier but much more robust in terms of features/functionality I've been enjoying FastChat: https://github.com/lm-sys/FastChat
It allows you to plug in different backends so that you can use OpenAI-compatible clients with various LLMs, self-hosted or otherwise.
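As a minimal sketch of that setup, assuming FastChat's OpenAI-compatible API server is already running locally (e.g. started with `python3 -m fastchat.serve.openai_api_server` on port 8000, with a model worker serving Vicuna; the model name and port are whatever you configured):

```python
from openai import OpenAI

# The API key is required by the client but ignored by FastChat.
client = OpenAI(base_url="http://localhost:8000/v1", api_key="EMPTY")

resp = client.chat.completions.create(
    model="vicuna-7b-v1.5",  # whichever model the worker is serving
    messages=[{"role": "user", "content": "Hello!"}],
)
print(resp.choices[0].message.content)
```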
- FLaNK Stack Weekly 09 Oct 2023
What are some alternatives?
llama-cpp-python - Python bindings for llama.cpp
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
openai-whisper-cpu - Improving transcription performance of OpenAI Whisper for CPU based deployment
llama.cpp - LLM inference in C/C++
ROCm - AMD ROCm™ Software - GitHub Home [Moved to: https://github.com/ROCm/ROCm]
gpt4all - gpt4all: run open-source LLMs anywhere
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
rocm-examples
LocalAI - 🤖 The free, open source OpenAI alternative. Self-hosted, community-driven and local-first. A drop-in replacement for OpenAI that runs on consumer-grade hardware. No GPU required. Runs gguf, transformers, diffusers and many more model architectures. It can generate text, audio, video, and images, and also has voice cloning capabilities.
stable-diffusion-webui-ipex-arc - A guide to Intel Arc-enabled (maybe) version of @AUTOMATIC1111/stable-diffusion-webui