peft vs minLoRA
| | peft | minLoRA |
|---|---|---|
| Mentions | 26 | 3 |
| Stars | 13,877 | 388 |
| Growth | 4.1% | - |
| Activity | 9.7 | 2.4 |
| Latest commit | 4 days ago | 11 months ago |
| Language | Python | Jupyter Notebook |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
- Fine Tuning Mistral 7B on Magic the Gathering Draft
There is not a lot of great content out there making this clear, but basically all that matters for basic fine-tuning is how much VRAM you have -- since the 3090 / 4090 have 24GB of VRAM, they're both pretty decent fine-tuning chips. I think you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft)
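For context, a minimal sketch of the kind of PEFT setup that comment is describing: wrap a 7B-13B causal LM with a LoRA config so only a small fraction of the weights are trainable, which is what keeps fine-tuning within a 24GB card. The model name and hyperparameters below are illustrative assumptions, not a tested recipe.

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

# The base model is frozen; only the LoRA matrices receive gradients.
model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",  # illustrative; any causal LM that fits in VRAM
    device_map="auto",
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,             # rank of the low-rank update matrices
    lora_alpha=32,
    lora_dropout=0.05,
)

model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of all parameters
```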
- Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
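A hedged sketch of what switching that notebook from LoRA to prompt tuning might look like: PEFT's PromptTuningConfig takes the place of the LoraConfig, and only a handful of virtual prompt tokens are trained. Whether Whisper's encoder-decoder architecture is actually supported by this config is exactly the open question the post raises; this only shows the shape the code would take.

```python
from transformers import WhisperForConditionalGeneration
from peft import PromptTuningConfig, TaskType, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,  # Whisper is an encoder-decoder model
    num_virtual_tokens=20,            # length of the learned soft prompt
)

model = get_peft_model(model, config)
model.print_trainable_parameters()    # only the prompt embeddings are trainable
```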
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
- Exploding loss when trying to train OpenOrca-Platypus2-13B
- [D] Is there a difference between p-tuning and prefix tuning?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
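In PEFT terms the two techniques map to different config classes, which makes the distinction concrete: p-tuning learns virtual tokens through a small reparameterizing prompt encoder at the input layer only, while prefix tuning prepends trainable key/value prefixes to every attention layer. The values below are illustrative.

```python
from peft import PromptEncoderConfig, PrefixTuningConfig, TaskType

# p-tuning: virtual tokens produced by a small prompt encoder, input layer only
p_tuning = PromptEncoderConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
    encoder_hidden_size=128,   # hidden size of the reparameterizing encoder
)

# prefix tuning: trainable prefixes injected into every transformer layer
prefix_tuning = PrefixTuningConfig(
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,     # prefix length added at each layer
)
```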
- How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
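Roughly, that function loads the LoRA adapter on top of the base model and folds the low-rank deltas back into the base weights, so the merged model runs without PEFT at inference time. A minimal sketch, with placeholder paths:

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")   # placeholder path
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")     # placeholder path

merged = model.merge_and_unload()   # returns the base model with LoRA weights merged in
merged.save_pretrained("path/to/merged-model")
```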
- How to merge the two weights into a single weight?
To obtain the original Llama model, one may refer to this doc. To merge a LoRA model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
- [D] [LoRA + weight merge every N step] for pre-training?
You could use a callback, as shown here: https://github.com/huggingface/peft/issues/286, and call code to merge them there.
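A minimal sketch of that callback scaffolding, assuming a hypothetical merge_fn supplied by the caller that folds the LoRA weights into the base model (and, if the training scheme calls for it, re-initializes the adapters) every N steps:

```python
from transformers import TrainerCallback

class PeriodicLoraMergeCallback(TrainerCallback):
    """Invoke a user-supplied merge function every N optimizer steps."""

    def __init__(self, merge_fn, every_n_steps=1000):
        self.merge_fn = merge_fn            # hypothetical: merges LoRA into the base weights
        self.every_n_steps = every_n_steps

    def on_step_end(self, args, state, control, model=None, **kwargs):
        if state.global_step > 0 and state.global_step % self.every_n_steps == 0:
            self.merge_fn(model)
        return control
```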
minLoRA
- [D] Is it possible to train the same LLM instance on different users' data?
This repository seems to be doing it. Basically, you want to take the weights/biases that were trained during the LoRA training process and include them in the compute graph for the larger network, or remove them.
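A hedged, framework-agnostic sketch of that idea: one frozen base layer shared by everyone, with a separate low-rank delta per user that is either included in the forward pass or left out. Class and method names here are hypothetical; this shows the concept, not any specific library's API.

```python
import torch
import torch.nn as nn

class LoraLinear(nn.Module):
    """Frozen base Linear plus a swappable per-user low-rank delta."""

    def __init__(self, base: nn.Linear, rank: int = 8):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False            # shared weights stay frozen
        self.adapters = nn.ModuleDict()        # one (A, B) pair per user
        self.rank = rank
        self.active = None

    def add_user(self, user: str):
        in_f, out_f = self.base.in_features, self.base.out_features
        self.adapters[user] = nn.ParameterDict({
            "A": nn.Parameter(torch.randn(self.rank, in_f) * 0.01),
            "B": nn.Parameter(torch.zeros(out_f, self.rank)),
        })

    def forward(self, x):
        y = self.base(x)
        if self.active is not None:
            a = self.adapters[self.active]
            y = y + x @ a["A"].T @ a["B"].T    # add that user's low-rank update
        return y

layer = LoraLinear(nn.Linear(128, 128))
layer.add_user("alice")
layer.active = "alice"                          # include Alice's delta in the graph
out = layer(torch.randn(4, 128))
layer.active = None                             # or remove it entirely
```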
- [P] minLoRA: An Easy-to-Use PyTorch Library for Applying LoRA to PyTorch Models
Theirs requires you to rewrite the whole model and replace every layer you want to apply LoRA to with its LoRA counterpart, or use monkey-patching. Mine uses PyTorch parametrizations to inject the LoRA logic into existing models. If your model has nn.Linear, you can call add_lora(model) to add LoRA to all the linear layers. And it's not limited to Linear; you can see how I extended it to Embedding and Conv2d in a couple of lines of code. https://github.com/cccntu/minLoRA/blob/main/minlora/model.py
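A small usage sketch following the API described above (function names taken from the minLoRA README; details may differ by version). Because the LoRA logic rides on torch.nn.utils.parametrize, the model definition itself is untouched.

```python
import torch
from minlora import add_lora, get_lora_params, merge_lora

model = torch.nn.Sequential(
    torch.nn.Linear(128, 256),
    torch.nn.ReLU(),
    torch.nn.Linear(256, 10),
)

add_lora(model)  # injects LoRA parametrizations into every nn.Linear

# Train only the LoRA parameters; the original weights stay frozen.
optimizer = torch.optim.AdamW([{"params": list(get_lora_params(model))}], lr=1e-3)

# After training, fold the low-rank updates back into the base weights.
merge_lora(model)
```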
What are some alternatives?
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
GTSRB - Convolutional Neural Network for German Traffic Sign Recognition Benchmark
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
alpaca-lora - Instruct-tune LLaMA on consumer hardware
dalai - The simplest way to run LLaMA on your local machine
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
lamini
simple-llm-finetuner - Simple UI for LLM Model Finetuning
alpaca_lora_4bit
transformers - 🤗 Transformers: State-of-the-art Machine Learning for Pytorch, TensorFlow, and JAX.
SwapCudaVersionWindows - How to swap/switch CUDA versions on Windows
PorousMediaLab - PorousMediaLab - toolbox for batch and 1D reactive transport modelling