| | SHARK | GPTQ-for-LLaMa |
|---|---|---|
| Mentions | 84 | 75 |
| Stars | 1,394 | 2,928 |
| Growth | 2.2% | - |
| Activity | 9.4 | 8.6 |
| Latest commit | about 11 hours ago | 10 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
SHARK
- Llama 2 on ONNX runs locally
- [D] Confusion over AMD GPU Ai benchmarking
https://github.com/AUTOMATIC1111/stable-diffusion-webui and https://github.com/nod-ai/SHARK are the repos for the open source tools mentioned. u/CeFurkan has really nice tutorial videos on YouTube for Stable Diffusion. Automatic1111 is the most popular open source Stable Diffusion UI and currently has the biggest open source plug-in ecosystem. Nvidia's compute driver is separate from the normal driver and is called CUDA. AMD's compute driver is called ROCm. Most Windows programs like games use APIs like DirectX, Vulkan, Metal, or WebGPU, not CUDA. Most ML code was originally intended to run on scientific computing systems that were Linux. Today the traditional Windows GPU APIs are trying to get better at GPU ML support. AMD has no official Windows ML code support and is hoping that other developers figure it out for them; AMD did make their ML driver open source, but with no support for consumer graphics cards. Nvidia's ML driver is proprietary, but support is guaranteed across all cards, including consumer ones.
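To make the CUDA/ROCm/DirectML distinction above concrete, here is a minimal PyTorch sketch (an illustration, not from the thread) that probes which compute stack is available; the optional torch-directml package is the usual Windows path for AMD GPUs:

```python
# Minimal sketch: probe which GPU compute stack this PyTorch build can use.
import torch

if torch.cuda.is_available():
    # True on NVIDIA (CUDA) builds and on Linux ROCm builds of PyTorch,
    # which reuse the torch.cuda namespace.
    device = torch.device("cuda")
    print("GPU backend:", torch.version.hip or torch.version.cuda)
else:
    try:
        import torch_directml  # optional Windows fallback for AMD/Intel GPUs
        device = torch_directml.device()
        print("GPU backend: DirectML")
    except ImportError:
        device = torch.device("cpu")
        print("No GPU backend found, using CPU")

x = torch.randn(2, 2, device=device)  # tensors now run on the chosen device
print(x @ x)
```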
- Amd Gpu not utilised
I got it working using SHARK with an AMD RX 480 on Windows 10.
- New to SD - Slow working
Here's the link for SHARK, which is faster (it uses Vulkan) than automatic1111 with DirectML but has fewer functions: https://github.com/nod-ai/SHARK
- 7900 XTX Stable Diffusion Shark Nod Ai performance on Windows 10. Seem to have gotten a bump with the latest prerelease drivers 23.10.01.41
I would recommend trying out Nod AI's Shark (that is the link for the most recent 786.exe release) and seeing how it works for you. From what others have written, it does 512x512 pics at around 3 it/s, which I know isn't mind-blowing, but it's good enough to do a pic in about 30 seconds.
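The back-of-envelope arithmetic connecting those two numbers, assuming a typical step count (the 50 steps below are my assumption, not from the comment):

```python
# At ~3 iterations/second, a 50-step 512x512 generation spends ~17 s sampling;
# model load and VAE decode overhead plausibly brings it to the quoted ~30 s.
steps, it_per_s = 50, 3.0  # assumed step count; measured it/s from the comment
print(f"{steps / it_per_s:.0f} s of sampling per image")
```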
- New here
Problem solved, I got it to work: I simply put the nod.ai SHARK exe in my Stable Diffusion folder and launched it instead of Webui-user -> Release nod.ai SHARK 20230623.786 · nod-ai/SHARK (github.com)
- I built the easiest-to-use desktop application for running Stable Diffusion on your PC - and it's free for all of you
How does it compare with Shark SD (I am not affiliated with it in any way)? (https://github.com/nod-ai/SHARK)
- after changing GPU from RX 470 4gb to RTX 3060 12GB, I decided to make a few cozy houses, and these are a few of them
You should, if you want to run SD on your card: https://github.com/nod-ai/SHARK
- 20 minute load time per image on high end pc?
Forgive me for not reading your whole comment. I suspect your version of the SD web UI doesn't recognize the AMD GPU, so you're using the CPU. AMD GPUs only work with a few web UIs. Try Nod.ai's Shark variant.
- AMD support for Microsoft® DirectML optimization of Stable Diffusion
GPTQ-for-LLaMa
- [P] Early in 2023 I put in a lot of work on a new machine learning project. Now I'm not sure what to do with it.
First I want to make it clear this is not a self-promotion post. I hope many machine learning people come at me with questions or comments about this project. A little background about myself: I did work on the 4-bit quantization of LLaMA using GPTQ (https://github.com/qwopqwop200/GPTQ-for-LLaMa). I've been studying AI in-depth for many years now.
- GPT-4 Details Leaked
Deploying the 60B version is a challenge though and you might need to apply 4-bit quantization with something like https://github.com/PanQiWei/AutoGPTQ or https://github.com/qwopqwop200/GPTQ-for-LLaMa . Then you can improve the inference speed by using https://github.com/turboderp/exllama .
If you prefer to use an "instruct" model à la ChatGPT (i.e. that does not need few-shot learning to output good results) you can use something like this: https://huggingface.co/TheBloke/Wizard-Vicuna-30B-Uncensored...
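As a rough sketch of the AutoGPTQ route suggested above (the model id below is a placeholder, not taken from the comment; any GPTQ-quantized checkpoint on the Hub would work):

```python
# Hedged example: load a 4-bit GPTQ checkpoint with AutoGPTQ and generate.
# "someuser/some-model-GPTQ" is a placeholder id, not from the comment.
from transformers import AutoTokenizer
from auto_gptq import AutoGPTQForCausalLM

model_id = "someuser/some-model-GPTQ"  # placeholder

tokenizer = AutoTokenizer.from_pretrained(model_id, use_fast=True)
model = AutoGPTQForCausalLM.from_quantized(model_id, device="cuda:0")

prompt = "Explain 4-bit quantization in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to("cuda:0")
output = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output[0], skip_special_tokens=True))
```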
- Rambling
I use GPTQ-for-LLaMa, from https://github.com/qwopqwop200/GPTQ-for-LLaMa, and Pygmalion 7B.
- Now that ExLlama is out with reduced VRAM usage, are there any GPTQ models bigger than 7b which can fit onto an 8GB card?
exllama is an optimized implementation of GPTQ-for-LLaMa that lets you run 4-bit quantized language models on the GPU at great speed.
- GGML – AI at the Edge
With a single NVIDIA 3090 and the fastest inference branch of GPTQ-for-LLAMA https://github.com/qwopqwop200/GPTQ-for-LLaMa/tree/fastest-i..., I get a healthy 10-15 tokens per second on the 30B models. IMO GGML is great (And I totally use it) but it's still not as fast as running the models on GPU for now.
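For context on figures like "10-15 tokens per second", a throughput measurement typically looks like the sketch below (assumed, not from the comment; the model id is a stand-in, and the commenter was running a 4-bit 30B model via GPTQ-for-LLaMa rather than an fp16 7B one):

```python
# Generic tokens/second measurement: generate N new tokens, divide by
# wall-clock time. "huggyllama/llama-7b" is a stand-in model id.
import time

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "huggyllama/llama-7b"  # stand-in

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

inputs = tokenizer("The quick brown fox", return_tensors="pt").to(model.device)

start = time.perf_counter()
out = model.generate(**inputs, max_new_tokens=128, do_sample=False)
elapsed = time.perf_counter() - start

new_tokens = out.shape[-1] - inputs["input_ids"].shape[-1]
print(f"{new_tokens / elapsed:.1f} tokens/s")
```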
- New quantization method AWQ outperforms GPTQ in 4-bit and 3-bit with 1.45x speedup and works with multimodal LLMs
And exactly what Triton version are they comparing against? I just tried the latest version of this, and on my 4090/12900K I get 77 tokens per second for Llama 7B-128g. My own GPTQ CUDA implementation gets 151 tokens/second on the same model, same hardware. That makes it 96% faster, whereas AWQ is only 79% faster. For 30B-128g I'm currently only getting a 110% speedup over Triton compared to their 178%, but it still seems a little disingenuous to compare against their own CUDA implementation only, when they're trying to present the quantization method as being faster for inference.
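The speedup arithmetic in that comment, spelled out with its own numbers:

```python
# Relative speedups from the comment (Llama 7B-128g on a 4090/12900K).
triton_tps = 77.0   # latest Triton kernel of this repo
cuda_tps = 151.0    # commenter's own GPTQ CUDA implementation

print(f"CUDA vs Triton: {cuda_tps / triton_tps - 1:.0%} faster")  # ~96%
# AWQ's claimed advantage over the same Triton baseline was 79%,
# hence the objection that the choice of comparison baseline matters.
```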
- Introducing Basaran: self-hosted open-source alternative to the OpenAI text completion API
Thanks for the explanation. I think some repos, like text-generation-webui, used GPTQ-for-LLaMa (I don't know if it's this repo or another one); anyway, most repos that I saw use external things (like GPTQ-for-LLaMa).
- How to use AMD GPU?
cd ../..
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b triton
cd GPTQ-for-LLaMa
pip install -r requirements.txt
mkdir -p ../text-generation-webui/repositories
ln -s ../../GPTQ-for-LLaMa ../text-generation-webui/repositories/GPTQ-for-LLaMa
- Help needed with installing quant_cuda for the WebUI
cd repositories
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa
pip install -r requirements.txt
- The installed version of bitsandbytes was compiled without GPU support
# To use the GPTQ models I need to install GPTQ-for-LLaMa and the monkey patch
mkdir repositories
cd repositories
git clone https://github.com/qwopqwop200/GPTQ-for-LLaMa.git -b triton
cd GPTQ-for-LLaMa
pip install ninja
pip install -r requirements.txt
cd
cd text-generation-webui
# download random model
python download-model.py xxx/yyy
# try to start the gui
python server.py
# It returns this warning but it runs:
bin /home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so
/home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/cextension.py:34: UserWarning: The installed version of bitsandbytes was compiled without GPU support. 8-bit optimizers, 8-bit multiplication, and GPU quantization are unavailable.
warn("The installed version of bitsandbytes was compiled without GPU support. ")
/home/gm/miniconda3/envs/chat/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cpu.so: undefined symbol: cadam32bit_grad_fp32
What are some alternatives?
stable-diffusion-webui - Stable Diffusion web UI
llama.cpp - LLM inference in C/C++
stable-diffusion-webui-amdgpu - Stable Diffusion web UI
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
automatic - SD.Next: Advanced Implementation of Stable Diffusion and other Diffusion-based generative image models
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
xformers - Hackable and optimized Transformers building blocks, supporting a composable construction.
qlora - QLoRA: Efficient Finetuning of Quantized LLMs
AMD-Stable-Diffusion-ONNX-FP16 - Example code and documentation on how to get FP16 models running with ONNX on AMD GPUs [Moved to: https://github.com/Amblyopius/Stable-Diffusion-ONNX-FP16]
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
ComfyUI - The most powerful and modular stable diffusion GUI, api and backend with a graph/nodes interface.
stable-diffusion-webui-docker - Easy Docker setup for Stable Diffusion with user-friendly UI