GPTQ-for-LLaMa vs bitsandbytes-rocm

| | GPTQ-for-LLaMa | bitsandbytes-rocm |
|---|---|---|
| Mentions | 19 | 4 |
| Stars | 129 | 38 |
| Growth | - | - |
| Activity | 7.7 | 8.8 |
| Last Commit | 11 months ago | 12 months ago |
| Language | Python | Python |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-LLaMa
- I have tried various different methods to install, and none work. Can you spoon-feed me how?
git clone https://github.com/oobabooga/GPTQ-for-LLaMa
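If the bare clone isn't enough, the fuller manual sequence looks roughly like the sketch below. It assumes the standard text-generation-webui layout, where GPTQ-for-LLaMa is expected under a repositories/ subfolder; the cuda branch and the setup_cuda.py step are taken from the other answers on this page.

```sh
# Sketch: manual install of the oobabooga GPTQ-for-LLaMa fork into text-generation-webui
cd text-generation-webui              # path assumed; use wherever you cloned the webui
mkdir -p repositories && cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
python setup_cuda.py install          # builds the quant_cuda CUDA extension
```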
- Query output random text
If you're using the model directly from ehartford, that one hasn't been quantized. Try using the GPTQ quantized version here, and use this fork of GPTQ-for-LLaMa. Load in 4-bit with --wbits 4
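For reference, loading a GPTQ checkpoint in 4-bit from the webui's command line comes down to the --wbits flag mentioned above; a sketch, where the model folder name is only a placeholder and --groupsize/--model_type have to match how the checkpoint was quantized:

```sh
# Placeholder model name; match --wbits/--groupsize to the checkpoint's quantization settings
python server.py --model TheBloke_WizardLM-7B-GPTQ --wbits 4 --groupsize 128 --model_type llama
```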
- Help needed with installing quant_cuda for the WebUI
This worked for me on Ubuntu. If you want to use the CUDA branch instead of triton, do the same steps except clone this GPTQ-for-LLaMa fork and run python setup_cuda.py install
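A quick way to confirm the extension actually built is to try importing it from the same Python environment; a small assumed check (quant_cuda is the module name that setup_cuda.py builds):

```sh
python -c "import quant_cuda; print('quant_cuda is installed')"
```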
- AutoGPTQ vs GPTQ-for-llama?
If you don't have triton and you use AutoGPTQ, you're gonna notice a huge slowdown compared to the old GPTQ-for-LLaMA cuda branch. For me, AutoGPTQ gives a whopping 1 token per second compared to the old GPTQ, which gives a decent 9 tokens per second; both times I used the same sized model. (I think the slowdown is due to AutoGPTQ using the newer cuda branch, which is much slower than the old one.)
- Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure
Are you using a later version of GPTQ-for-LLaMa? If so, go to ooba's CUDA fork (https://github.com/oobabooga/GPTQ-for-LLaMa). That's what I made it in and it definitely works with that. And that's what's included in the one-click-installers.
- Any idea Vicuna 13B 4bit model output random content?
This usually happens when using models that conflict with your GPTQ installation. You should be using this fork: https://github.com/oobabooga/GPTQ-for-LLaMa. If you did the manual installation wrong, use the one click installer instead.
- GPT4All: A little helper to get started
cd text-generation-webui  # wherever you have it installed
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
- wizard-vicuna-13B • Hugging Face
- Anyone actually running 30b/65b at reasonably high speed? What's your rig?
The GPTQ-for-LLaMa folder under repositories says it's pointed at https://github.com/oobabooga/GPTQ-for-LLaMa.git. But I've run through the instructions and also applied the monkey patch to train and apply 4-bit LoRA, which may come into play. No idea.
- Trying to run TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g with latest GPTQ-for-LLaMa CUDA branch
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
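After the clone, the checkpoint itself can be fetched with the webui's download script and then started in 4-bit mode; a rough sketch, where the flag values assume the 4bit-128g quantization named in the repo and the local folder name follows the script's usual owner_model naming:

```sh
# Sketch: fetch and launch the quantized Vicuna checkpoint from inside text-generation-webui
python download-model.py TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g
python server.py --model TheBloke_vicuna-13B-1.1-GPTQ-4bit-128g --wbits 4 --groupsize 128 --model_type llama
```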
bitsandbytes-rocm
- Any methods to train using AMD?
Install dependencies like hipblas-devel, hipsparse-devel, hipcub-devel, git, python3.10, make, libstdc++-devel, accelerate, rocm and hip, then:
git clone https://github.com/bmaltais/kohya_ss && cd kohya_ss
python3.10 -m venv venv
source venv/bin/activate
pip3 install torch==1.13.1 torchvision==0.14.1 torchtext==0.14.1 torchaudio==0.13.1 --index-url https://download.pytorch.org/whl/rocm5.2  # problems on 2.0.0 last I tried, but kohya has gotten updates since then
pip3 install --upgrade -r requirements.txt
pip3 uninstall tensorflow && pip3 install tensorflow-rocm
pip uninstall bitsandbytes
git clone https://github.com/broncotc/bitsandbytes-rocm  # bitsandbytes not required if not using adam8?
cd bitsandbytes-rocm && nano Makefile  # replace all 3 instances of 5.3.0 with 5.4.3
make hip
python3 setup.py install
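The Makefile edit above can also be done non-interactively; a one-liner sketch, assuming the version strings appear literally as 5.3.0 and that 5.4.3 matches the locally installed ROCm:

```sh
# Replace the hard-coded ROCm 5.3.0 target with 5.4.3 before running `make hip`
sed -i 's/5\.3\.0/5.4.3/g' Makefile
```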
- How to run Pygmalion on 4.5GB of VRAM with full context size.
There are a lot of ROCm versions of bitsandbytes. For example this one: https://github.com/broncotc/bitsandbytes-rocm The problem is compatibility with most of the requirements. Kobold does a better job than ooba in offering a more streamlined approach for AMD users.
- Is it possible to load a model in 8bit precision with an AMD card? (6700xt)
- Have you got running LoRA training on an AMD GPU?
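On the 8-bit question above: once a ROCm build of bitsandbytes is installed, 8-bit loading goes through the usual transformers path. A minimal smoke-test sketch, where the model name is only a placeholder and the check assumes the build picked up your HIP libraries:

```sh
# Quick check that bitsandbytes-rocm works for 8-bit loading (placeholder model; prints memory footprint in bytes)
python -c "from transformers import AutoModelForCausalLM; m = AutoModelForCausalLM.from_pretrained('facebook/opt-1.3b', device_map='auto', load_in_8bit=True); print(m.get_memory_footprint())"
```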
What are some alternatives?
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
sd-scripts
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
GPTQ-for-LLaMa - 4 bits quantization of LLMs using GPTQ
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
one-click-installers - Simplified installers for oobabooga/text-generation-webui.
LoRA_Easy_Training_Scripts - A UI made in Pyside6 to make training LoRA/LoCon and other LoRA type models in sd-scripts easy
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
LyCORIS - Lora beYond Conventional methods, Other Rank adaptation Implementations for Stable diffusion.