GPTQ-for-LLaMa vs bitsandbytes-windows-webui

| | GPTQ-for-LLaMa | bitsandbytes-windows-webui |
|---|---|---|
| Mentions | 19 | 4 |
| Stars | 129 | 334 |
| Growth | - | - |
| Activity | 7.7 | 8.1 |
| Latest commit | 11 months ago | 6 months ago |
| Language | Python | HTML |
| License | - | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
GPTQ-for-LLaMa
-
I have tried various different methods to install, and none work. Can you spoon-feed me how?
git clone https://github.com/oobabooga/GPTQ-for-LLaMa
-
Query output random text
If you're using the model directly from ehartford, that one hasn't been quantized. Try using the GPTQ quantized version here, and use this fork of GPTQ-for-LLaMa. Load in 4-bit with --wbits 4
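For reference, a minimal launch sketch once the GPTQ-quantized weights are in the webui's models folder; the model folder name and --groupsize 128 below are placeholders, not from the post, and depend on which quantization you downloaded:
# illustrative only: substitute your actual model folder name and group size
python server.py --chat --model YOUR-GPTQ-MODEL-FOLDER --wbits 4 --groupsize 128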
-
Help needed with installing quant_cuda for the WebUI
This worked for me on Ubuntu. If you want to use the CUDA branch instead of triton, do the same steps except clone this GPTQ-for-LLaMa fork and run python setup_cuda.py install
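A condensed sketch of those steps, assuming the usual text-generation-webui layout where GPTQ-for-LLaMa lives under repositories/ (as in the longer recipes further down):
# from the text-generation-webui root (assumed starting point)
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda
cd GPTQ-for-LLaMa
pip install -r requirements.txt
python setup_cuda.py install  # builds the quant_cuda CUDA extension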
-
AutoGPTQ vs GPTQ-for-llama?
If you don't have Triton and you use AutoGPTQ, you're going to notice a huge slowdown compared to the old GPTQ-for-LLaMa CUDA branch. For me, AutoGPTQ gives a whopping 1 token per second, compared to a decent 9 tokens per second with the old GPTQ; both times I used the same sized model. (I think the slowdown is because AutoGPTQ uses the newer CUDA branch, which is much slower than the old one.)
-
Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure
Are you using a later version of GPTQ-for-LLaMa? If so, go to ooba's CUDA fork (https://github.com/oobabooga/GPTQ-for-LLaMa). That's what I made it in and it definitely works with that. And that's what's included in the one-click-installers.
-
Any idea Vicuna 13B 4bit model output random content?
This usually happens when using models that conflict with your GPTQ installation. You should be using this fork: https://github.com/oobabooga/GPTQ-for-LLaMa. If you did the manual installation wrong, use the one click installer instead.
-
GPT4All: A little helper to get started
cd text-generation-webui  # wherever you have it installed
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
-
wizard-vicuna-13B • Hugging Face
-
Anyone actually running 30b/65b at reasonably high speed? What's your rig?
My GPTQ-for-LLaMa folder under repositories says it's pointed at https://github.com/oobabooga/GPTQ-for-LLaMa.git. But I've run through the instructions and also applied the monkey patch to train and apply a 4-bit LoRA, which may come into play. No idea.
-
Trying to run TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g with latest GPTQ-for-LLaMa CUDA branch
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
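After cloning, the usual follow-up (mirroring the longer recipe under the bitsandbytes-windows-webui section below) is to build the CUDA kernel and launch with 4-bit flags; the model folder name here is illustrative, and --groupsize 128 is assumed to match the -128g suffix of that quantization:
cd GPTQ-for-LLaMa
python -m pip install -r requirements.txt
python setup_cuda.py install
cd ../..  # back to the text-generation-webui root
python server.py --chat --model vicuna-13B-1.1-GPTQ-4bit-128g --wbits 4 --groupsize 128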
bitsandbytes-windows-webui
-
Bitsandbytes giving you a cuda error on windows? Don't worry, some guy already compiled it for you.
There are some precompiled ones here: https://github.com/jllllll/bitsandbytes-windows-webui
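If you want to install a precompiled wheel directly (the same 0.38.1 wheel referenced in the snippets below), a minimal sketch inside your webui environment:
python -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl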
- QLoRA: 4-bit finetuning of LLMs is here! With it comes Guanaco, a chatbot on a single GPU, achieving 99% ChatGPT performance on the Vicuna benchmark
-
LORA training runs out of memory on saving
From this: run_cmd("python -m pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl", assert_success=True, environment=True)
-
Error on model load: Torch not compiled with CUDA enabled
set-executionpolicy RemoteSigned -Scope CurrentUser
python -m venv venv
venv\Scripts\Activate.ps1
pip install torchaudio torch==2.0.0+cu118 torchvision ninja --force-reinstall --extra-index-url https://download.pytorch.org/whl/cu118
pip install https://github.com/jllllll/bitsandbytes-windows-webui/raw/main/bitsandbytes-0.38.1-py3-none-any.whl
pip install -r requirements.txt
mkdir repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
cd GPTQ-for-LLaMa
python -m pip install -r requirements.txt
python setup_cuda.py install
cd ..\..  # back to the text-generation-webui root before launching
python server.py --chat --model-menu
What are some alternatives?
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
bitsandbytes - 8-bit CUDA functions for PyTorch
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
bitsandbytes-windows - 8-bit CUDA functions for PyTorch in Windows 10
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
one-click-installers - Simplified installers for oobabooga/text-generation-webui.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks
SillyTavern - LLM Frontend for Power Users.
SillyTavern - LLM Frontend for Power Users. [Moved to: https://github.com/SillyTavern/SillyTavern]
Local-LLM-Comparison-Colab-UI - Compare the performance of different LLM that can be deployed locally on consumer hardware. Run yourself with Colab WebUI.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.