|  | peft | GPTQ-for-LLaMa |
| --- | --- | --- |
| Mentions | 26 | 19 |
| Stars | 13,877 | 129 |
| Growth | 4.1% | - |
| Activity | 9.7 | 7.7 |
| Latest commit | 4 days ago | 11 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | - |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
- Fine Tuning Mistral 7B on Magic the Gathering Draft
There is not a lot of great content out there making this clear, but basically all that matters for basic fine tuning is how much VRAM you have -- since the 3090 / 4090 have 24GB VRAM they're both pretty decent fine tuning chips. I think you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft)
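As a rough sanity check on that claim, here is a back-of-envelope sketch (an illustration, not a precise rule: it counts only the frozen fp16 base weights at 2 bytes per parameter and ignores activations, adapter gradients, and framework overhead):

```python
# Back-of-envelope VRAM estimate for PEFT-style fine-tuning: with the base
# model frozen, the dominant cost is simply holding its weights in memory.

def vram_gb(n_params_billions, bytes_per_param=2):
    """GB needed to hold the frozen base weights alone (fp16 = 2 bytes)."""
    return n_params_billions * 1e9 * bytes_per_param / 1024**3

# A 13B model in fp16 needs roughly 24 GB just for weights, which is why
# ~13B is about the ceiling for a 24GB 3090/4090 without quantization:
print(round(vram_gb(13), 1))  # ~24.2
print(round(vram_gb(7), 1))   # ~13.0
```

Loading the base model in 4-bit (as QLoRA does) shrinks the first term by roughly 4x, which is how larger models fit on the same card.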
- Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
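For what it's worth, the core idea of prompt tuning is small enough to sketch in a few lines: the base model stays frozen, and the only trainable parameters are a matrix of "virtual token" embeddings prepended to every input's embedding sequence. A minimal NumPy sketch with hypothetical sizes (not PEFT's actual implementation):

```python
import numpy as np

# Prompt tuning in miniature: the frozen model never changes; we only learn
# a (num_virtual_tokens, d_model) matrix of soft-prompt embeddings.

d_model, num_virtual_tokens = 512, 20                    # hypothetical sizes
soft_prompt = np.random.randn(num_virtual_tokens, d_model) * 0.02  # trainable

def with_soft_prompt(input_embeds):
    """Prepend the soft prompt to a (seq_len, d_model) embedding matrix."""
    return np.concatenate([soft_prompt, input_embeds], axis=0)

x = np.random.randn(30, d_model)      # embeddings of 30 real input tokens
print(with_soft_prompt(x).shape)      # (50, 512): 20 virtual + 30 real
```

For an encoder-decoder model like Whisper, the open question is where those virtual tokens should be injected (encoder input vs. decoder input), which is exactly what the LoRA notebook does not have to answer.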
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
- Exploding loss when trying to train OpenOrca-Platypus2-13B
- [D] Is there a difference between p-tuning and prefix tuning?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
- How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
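Conceptually, what merging a LoRA adapter does is fold the low-rank update back into the base weight, W' = W + (alpha / r) · B A, after which the adapter matrices can be discarded and inference runs at the base model's full speed. A NumPy sketch of just that arithmetic (an illustration, not PEFT's actual code):

```python
import numpy as np

# Merging a LoRA adapter into the base weight: the low-rank update B @ A,
# scaled by alpha / r, is added into the frozen weight exactly once.

d_out, d_in, r, alpha = 64, 64, 8, 16
W = np.random.randn(d_out, d_in)      # frozen base weight
A = np.random.randn(r, d_in) * 0.01   # LoRA down-projection
B = np.zeros((d_out, r))              # LoRA up-projection (initialized to 0)

W_merged = W + (alpha / r) * (B @ A)

# With B initialized to zero, the merge is a no-op until training updates B,
# which is why a freshly-added adapter leaves the model's outputs unchanged:
assert np.allclose(W_merged, W)
print(W_merged.shape)                 # (64, 64)
```

This is also why a merged model works on CPU with no PEFT dependency at all: after the merge, it is just an ordinary weight matrix.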
- How to merge the two weights into a single weight?
To obtain the original LLaMA model, one may refer to this doc. To merge a LoRA model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
- [D] [LoRA + weight merge every N step] for pre-training?
You could use a callback, like the one shown here (https://github.com/huggingface/peft/issues/286), and call the merge code from it.
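The merge-every-N-steps idea itself is just a counter in the training loop; a minimal sketch with hypothetical merge_adapter / reset_adapter stand-ins (not a real PEFT API):

```python
# Merge-every-N-steps as a plain loop counter. merge_adapter() and
# reset_adapter() are hypothetical stand-ins for whatever your framework
# exposes (e.g. fold B @ A into the base weights, then re-init A and B).

MERGE_EVERY = 100

def train(num_steps, merge_adapter, reset_adapter, train_step):
    for step in range(1, num_steps + 1):
        train_step(step)
        if step % MERGE_EVERY == 0:
            merge_adapter()   # fold the low-rank update into the base
            reset_adapter()   # start a fresh adapter for the next round

merges = []
train(250, lambda: merges.append(1), lambda: None, lambda s: None)
print(len(merges))            # 2: merges fire at steps 100 and 200
```

In a real Trainer this would live in an `on_step_end`-style callback rather than a hand-rolled loop, but the counting logic is the same.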
GPTQ-for-LLaMa
- I have tried various different methods to install, and none work. Can you spoon-feed me how?
git clone https://github.com/oobabooga/GPTQ-for-LLaMa
- Query output random text
If you're using the model directly from ehartford, that one hasn't been quantized. Try using the GPTQ quantized version here, and use this fork of GPTQ-for-LLaMa. Load in 4-bit with --wbits 4
- Help needed with installing quant_cuda for the WebUI
This worked for me on Ubuntu. If you want to use the CUDA branch instead of triton, do the same steps except clone this GPTQ-for-LLaMa fork and run python setup_cuda.py install
- AutoGPTQ vs GPTQ-for-llama?
If you don't have triton and you use AutoGPTQ, you're going to notice a huge slowdown compared to the old GPTQ-for-LLaMa CUDA branch. For me, AutoGPTQ gives a whopping 1 token per second, compared to the old GPTQ, which gives a decent 9 tokens per second; both times I used the same-sized model. (I think the slowdown is because AutoGPTQ uses the newer CUDA branch, which is much slower than the old one.)
- Guanaco 7B, 13B, 33B and 65B models by Tim Dettmers: now for your local LLM pleasure
Are you using a later version of GPTQ-for-LLaMa? If so, go to ooba's CUDA fork (https://github.com/oobabooga/GPTQ-for-LLaMa). That's what I made it in and it definitely works with that. And that's what's included in the one-click-installers.
- Any idea why the Vicuna 13B 4-bit model outputs random content?
This usually happens when using models that conflict with your GPTQ installation. You should be using this fork: https://github.com/oobabooga/GPTQ-for-LLaMa. If you did the manual installation wrong, use the one click installer instead.
- GPT4All: A little helper to get started
cd text-generation-webui  # wherever you have it installed
mkdir -p repositories
cd repositories
git clone https://github.com/oobabooga/GPTQ-for-LLaMa -b cuda GPTQ-for-LLaMa
cd GPTQ-for-LLaMa
python setup_cuda.py install
- wizard-vicuna-13B • Hugging Face
- Anyone actually running 30b/65b at reasonably high speed? What's your rig?
My GPTQ-for-LLaMa folder under repositories says it's pointed at https://github.com/oobabooga/GPTQ-for-LLaMa.git. But I've run through the instructions and also applied the monkey patch to train and apply a 4-bit LoRA, which may come into play. No idea.
- Trying to run TheBloke/vicuna-13B-1.1-GPTQ-4bit-128g with latest GPTQ-for-LLaMa CUDA branch
git clone https://github.com/oobabooga/GPTQ-for-LLaMa.git -b cuda
What are some alternatives?
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
exllama - A more memory-efficient rewrite of the HF transformers implementation of Llama for use with quantized weights.
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
koboldcpp - A simple one-file way to run various GGML and GGUF models with KoboldAI's UI
alpaca-lora - Instruct-tune LLaMA on consumer hardware
langflow - ⛓️ Langflow is a dynamic graph where each node is an executable unit. Its modular and interactive design fosters rapid experimentation and prototyping, pushing hard on the limits of creativity.
dalai - The simplest way to run LLaMA on your local machine
GPTQ-for-LLaMa - 4 bits quantization of LLaMA using GPTQ
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
one-click-installers - Simplified installers for oobabooga/text-generation-webui.
minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.
private-gpt - Interact with your documents using the power of GPT, 100% privately, no data leaks