|  | peft | lamini |
|---|---|---|
| Mentions | 26 | 9 |
| Stars | 13,877 | 2,414 |
| Growth | 4.1% | 0.7% |
| Activity | 9.7 | 7.3 |
| Latest Commit | 4 days ago | 24 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
- Fine Tuning Mistral 7B on Magic the Gathering Draft
There is not a lot of great content out there making this clear, but basically all that matters for basic fine tuning is how much VRAM you have -- since the 3090 / 4090 have 24GB VRAM they're both pretty decent fine tuning chips. I think you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft)
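For a rough idea of what that looks like in practice, here is a minimal sketch of LoRA fine-tuning with PEFT on a single 24GB card, assuming 4-bit quantization via bitsandbytes (QLoRA-style); the model name, target modules, and hyperparameters are illustrative, not taken from the post.

```python
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

# Illustrative base model; something up to roughly 13B parameters can fit in 24GB
# when loaded in 4-bit with small LoRA adapters on top.
model_name = "mistralai/Mistral-7B-v0.1"

bnb_config = BitsAndBytesConfig(load_in_4bit=True, bnb_4bit_compute_dtype=torch.bfloat16)
model = AutoModelForCausalLM.from_pretrained(
    model_name, quantization_config=bnb_config, device_map="auto"
)
model = prepare_model_for_kbit_training(model)  # freeze base weights, prep for k-bit training

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # illustrative choice of attention projections
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()  # only the LoRA adapters are trainable
```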
- Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
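For context, the LoRA setup in that style of notebook boils down to something like the sketch below (checkpoint size and hyperparameters are illustrative); the open question in the post is whether replacing LoraConfig with one of PEFT's prompt-tuning configs plays well with Whisper's audio encoder inputs.

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

# Illustrative checkpoint size.
model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

# LoRA applied to the attention projections, roughly as in the linked notebook.
config = LoraConfig(r=32, lora_alpha=64, lora_dropout=0.05,
                    target_modules=["q_proj", "v_proj"])
model = get_peft_model(model, config)
model.print_trainable_parameters()  # only the adapter weights are trainable
```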
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
- Exploding loss when trying to train OpenOrca-Platypus2-13B
- [D] Is there a difference between p-tuning and prefix tuning?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
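As a quick practical illustration, PEFT exposes the two methods as separate config classes; the sketch below assumes a causal LM and uses illustrative hyperparameters.

```python
from peft import PromptEncoderConfig, PrefixTuningConfig

# p-tuning: trainable virtual tokens produced by a small prompt encoder (MLP/LSTM),
# prepended only at the input embedding layer.
p_tuning_config = PromptEncoderConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,
    encoder_hidden_size=128,
)

# Prefix tuning: trainable key/value prefixes injected into every attention layer,
# not just the input embeddings.
prefix_config = PrefixTuningConfig(
    task_type="CAUSAL_LM",
    num_virtual_tokens=20,
)
```

Either config can then be passed to get_peft_model together with a base model.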
- How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
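A minimal sketch of how merge_and_unload is typically used, assuming a causal LM base model and a LoRA adapter directory; the paths are placeholders.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("path/to/base-model")   # placeholder path
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")     # placeholder path

# merge_and_unload folds each LoRA delta (scaling * B @ A) into the corresponding
# base weight matrix and returns a plain transformers model with no adapter modules,
# which can then be saved and converted or quantized for CPU inference.
merged = model.merge_and_unload()
merged.save_pretrained("path/to/merged-model")                      # placeholder path
```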
- How to merge the two weights into a single weight?
To obtain the original Llama model, one may refer to this doc. To merge a LoRA model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
- [D] [LoRA + weight merge every N step] for pre-training?
You could use a callback, like the one shown here, https://github.com/huggingface/peft/issues/286 and call code to merge them there.
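A rough sketch of that callback idea, assuming a PEFT LoRA model trained with transformers.Trainer; the interval is illustrative, merge_adapter/unmerge_adapter availability depends on the PEFT version, and the adapter-reset step is left as a placeholder.

```python
from transformers import TrainerCallback


class PeriodicLoraMergeCallback(TrainerCallback):
    """Every N optimizer steps, fold the current LoRA deltas into the base weights."""

    def __init__(self, merge_every_n_steps: int = 1000):
        self.merge_every_n_steps = merge_every_n_steps

    def on_step_end(self, args, state, control, model=None, **kwargs):
        if model is None or state.global_step == 0:
            return
        if state.global_step % self.merge_every_n_steps == 0:
            # Fold the current LoRA update into the frozen base weights.
            model.merge_adapter()
            # A ReLoRA-style scheme would now re-initialize the LoRA matrices
            # (A random, B zero) so training continues from a fresh adapter;
            # unmerge_adapter is used here only as a placeholder for that step.
            model.unmerge_adapter()
```

The callback would be passed to the trainer via Trainer(..., callbacks=[PeriodicLoraMergeCallback(1000)]).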
lamini
- [P] Free and Fast LLM Finetuning
GitHub repo: https://github.com/lamini-ai/lamini
- Free and Fast LLM Finetuning
- [P] Lamini rapidly achieves ChatGPT performance with an LLM Engine
The data pipeline here https://github.com/lamini-ai/lamini uses a seed dataset from self-instruct (Apache 2 license), and edited models from Pythia (Apache 2) and Dolly (Apache 2). We release our code and data under a CC-BY 4.0 license.
- Launch Lamini: The LLM Engine for Rapidly Customizing Models as Good as ChatGPT
Today, you can try out our hosted data generator for training your own LLMs, weights and all, without spinning up any GPUs, in just a few lines of code from the Lamini library. https://github.com/lamini-ai/lamini/
You can play with an open-source LLM, trained on generated data using Lamini. https://huggingface.co/spaces/lamini/instruct-playground
Sign up for early access to the training module that took the generated data and trained it into this LLM, including enterprise features like virtual private cloud (VPC) deployments. https://lamini.ai/contact
- Seeking Language Project to Join
example: https://github.com/lamini-ai/lamini
What are some alternatives?
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
alpaca-lora - Instruct-tune LLaMA on consumer hardware
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
langchain - ⚡ Building applications with LLMs through composability ⚡ [Moved to: https://github.com/langchain-ai/langchain]
flix - The Flix Programming Language
dalai - The simplest way to run LLaMA on your local machine
otterkit - A free and open source Standard COBOL compiler for 64-bit environments
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
lamini-sql - SQL autocomplete data
minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.
Wave - A cool programming language.