setfit vs peft

| | setfit | peft |
| --- | --- | --- |
| Mentions | 13 | 26 |
| Stars | 1,990 | 13,877 |
| Growth | 3.7% | 3.4% |
| Activity | 9.2 | 9.7 |
| Latest commit | 2 days ago | 1 day ago |
| Language | Jupyter Notebook | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
setfit
- FLaNK Stack 05 Feb 2024
- Smarter Summaries with Finetuning GPT-3.5 and Chain of Density
- [Discussion] Convince me that this training set contamination is fine (or not)
It did, sorry for the hasty edits! I removed that part b/c I realized that there isn't a compelling-enough reason for me to believe that text similarity is clearly inappropriate. In fact, you can train the Pr(condition | chat) classifier I suggested above using similarity training! Use SetFit for that. In the end you'll get a classifier and a similarity model.
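A rough sketch of that SetFit route, assuming a toy Pr(condition | chat) dataset; the model name, example texts and hyperparameters below are illustrative rather than from the discussion, and the SetFit API differs slightly between versions:

```python
from datasets import Dataset
from setfit import SetFitModel, SetFitTrainer

# Toy labelled chats; in practice you would use your own few-shot examples.
train_ds = Dataset.from_dict({
    "text": ["chat that clearly describes the condition", "unrelated small talk"],
    "label": [1, 0],
})

model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-mpnet-base-v2")

trainer = SetFitTrainer(
    model=model,
    train_dataset=train_ds,
    num_iterations=20,  # contrastive pairs generated per example for similarity training
)
trainer.train()

# The fine-tuned sentence-transformer body doubles as the similarity model,
# and the fitted head gives the Pr(condition | chat) classifier.
print(model.predict(["patient mentions the condition explicitly"]))
```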
- Ask HN: What's the best framework for text classification (few-shot learning)?
[3] https://github.com/huggingface/setfit
- Is it worth using LLMs like GPT-3 for text classification?
There are also somewhat related approaches like SetFit, which calculate embeddings from pretrained transformer models and then fit a classifier on top of the embeddings. I've yet to try it, but it supposedly works well with very few labelled examples.
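For concreteness, here is roughly what that embeddings-plus-classifier recipe looks like with sentence-transformers and scikit-learn (not SetFit itself); the model name, texts and labels are made-up placeholders:

```python
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

texts  = ["refund my order please", "the app crashes on startup", "love the new update"]
labels = ["billing", "bug", "praise"]

# Embed the texts with a pretrained sentence encoder, then fit a simple classifier on top.
encoder = SentenceTransformer("all-MiniLM-L6-v2")
X = encoder.encode(texts)

clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict(encoder.encode(["it keeps freezing when I open it"])))
```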
- LLMs for Text Classification (7B parameters)
- GPT-3 vs GPT-Neo / GPT-J for startup classification
- Ideas on how to improve classification and scoring using Mean Pooled Sentence Embeddings
You could have a look at setfit.
- SetFit (Sentence Transformer Fine-tuning) - Fewshot Learning without prompts [D]
Found relevant code at https://github.com/huggingface/setfit + all code implementations here
- Most Popular AI Research Sept 2022 - Ranked Based On Total GitHub Stars
Efficient Few-Shot Learning Without Prompts https://github.com/huggingface/setfit https://arxiv.org/abs/2209.11055v1
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
- Fine Tuning Mistral 7B on Magic the Gathering Draft
There is not a lot of great content out there making this clear, but basically all that matters for basic fine tuning is how much VRAM you have -- since the 3090 / 4090 have 24GB VRAM they're both pretty decent fine tuning chips. I think you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft)
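As a rough illustration of what fine-tuning with PEFT usually means in this context, here is a minimal LoRA setup on a causal LM; the model name, rank and target modules are illustrative choices, not taken from the comment:

```python
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, TaskType, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "mistralai/Mistral-7B-v0.1",      # placeholder; any causal LM that fits in VRAM
    torch_dtype=torch.float16,
    device_map="auto",
)

lora_config = LoraConfig(
    task_type=TaskType.CAUSAL_LM,
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections are the usual LoRA targets
)

# Only the small LoRA adapters are trained, which is what keeps the VRAM needs modest.
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```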
- Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
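For reference, the linked notebook's approach amounts to wrapping Whisper with a PEFT LoRA config along these lines; whether a prompt-tuning config can simply be swapped in here is exactly the open question in the post, so treat this as a starting point rather than an answer:

```python
from transformers import WhisperForConditionalGeneration
from peft import LoraConfig, get_peft_model

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-small")

config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],  # attention projections inside Whisper's blocks
    bias="none",
)

# Swapping this config for a prompt-tuning one is the part the post is asking about.
model = get_peft_model(model, config)
model.print_trainable_parameters()
```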
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
- Exploding loss when trying to train OpenOrca-Platypus2-13B
- [D] Is there a difference between p-tuning and prefix tuning?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
- How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
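A short sketch of that merge_and_unload flow: attach the LoRA adapter to the base model, fold the adapter weights into the base weights, and save a plain checkpoint that no longer needs PEFT at inference time (the model ID and paths below are placeholders):

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("base-model-id")        # placeholder ID
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")     # placeholder path

merged = model.merge_and_unload()        # bakes the LoRA deltas into the base weights
merged.save_pretrained("merged-model")   # can now be loaded like an ordinary checkpoint
```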
- How to merge the two weights into a single weight?
To obtain the original llama model, one may refer to this doc. To merge a lora model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
- [D] [LoRA + weight merge every N step] for pre-training?
You could use a callback, like shown here, https://github.com/huggingface/peft/issues/286 and call code to merge them there.
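A very rough skeleton of that callback idea, assuming a Hugging Face Trainer; the merge step itself is left as a placeholder, since how (and whether) to merge and re-initialise LoRA weights mid-training is exactly what the linked issue debates:

```python
from transformers import TrainerCallback

class PeriodicLoraMergeCallback(TrainerCallback):
    """Hook into training every N optimizer steps to run a LoRA merge step."""

    def __init__(self, merge_every_n_steps: int = 1000):
        self.merge_every_n_steps = merge_every_n_steps

    def on_step_end(self, args, state, control, model=None, **kwargs):
        if state.global_step > 0 and state.global_step % self.merge_every_n_steps == 0:
            # Placeholder: fold the LoRA weights into the base model here and
            # re-initialise the adapters, per the discussion in the linked issue.
            pass
        return control
```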
What are some alternatives?
iris - Transformers are Sample-Efficient World Models. ICLR 2023, notable top 5%.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
whisper - Robust Speech Recognition via Large-Scale Weak Supervision
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
VToonify - [SIGGRAPH Asia 2022] VToonify: Controllable High-Resolution Portrait Video Style Transfer
alpaca-lora - Instruct-tune LLaMA on consumer hardware
motion-diffusion-model - The official PyTorch implementation of the paper "Human Motion Diffusion Model"
dalai - The simplest way to run LLaMA on your local machine
git-re-basin - Code release for "Git Re-Basin: Merging Models modulo Permutation Symmetries"
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
storydalle
minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.