| | llama-dfdx | peft |
|---|---|---|
| Mentions | 2 | 26 |
| Stars | 94 | 14,083 |
| Growth | - | 5.5% |
| Activity | 7.3 | 9.7 |
| Latest commit | 10 months ago | 2 days ago |
| Language | Rust | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
llama-dfdx
-
rustformers/llm: Run inference for Large Language Models on CPU, with Rust 🦀🚀🦙
Not a maintainer, but dfdx can run llama with CUDA!
-
A brief history of LLaMA models
There's a Rust deep learning library called dfdx that just set up LLaMA: https://github.com/coreylowman/llama-dfdx
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
-
Fine Tuning Mistral 7B on Magic the Gathering Draft
There is not a lot of great content out there making this clear, but basically all that matters for basic fine-tuning is how much VRAM you have. Since the 3090 and 4090 both have 24 GB of VRAM, they're both pretty decent fine-tuning cards. I think you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft)
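As a rough illustration of why VRAM is the binding constraint, here is a back-of-envelope sketch. The 8-bit base weights and the ~0.1% adapter fraction are illustrative assumptions, not measurements from PEFT:

```python
def gb(n_bytes):
    """Convert a byte count to gibibytes."""
    return n_bytes / 1024**3

n_params = 13e9  # a ~13B-parameter model

# Full fine-tuning: fp16 weights + fp16 grads + two fp32 Adam moments per param
full_gb = gb(n_params * (2 + 2 + 4 + 4))

# LoRA via PEFT: base weights frozen (assumed quantized to 1 byte/param here),
# with gradients and optimizer state only for the tiny adapter
adapter_params = n_params * 0.001  # adapters are typically well under 1% of base params
lora_gb = gb(n_params * 1 + adapter_params * (2 + 2 + 4 + 4))

print(f"full fine-tune: ~{full_gb:.0f} GB, LoRA on a quantized base: ~{lora_gb:.0f} GB")
```

Under these assumptions the full fine-tune estimate lands far above 24 GB, while the LoRA estimate fits comfortably on a 3090/4090.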
-
Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
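For intuition: prompt tuning freezes the entire model and learns only a small set of "virtual token" embeddings that are prepended to the real input embeddings. A minimal numpy sketch of that idea (the sizes are made up, and this is not PEFT's implementation):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab, d, num_virtual = 100, 16, 8

embed = rng.standard_normal((vocab, d))                    # frozen embedding table
soft_prompt = rng.standard_normal((num_virtual, d)) * 0.1  # the only trainable params

input_ids = np.array([5, 42, 7])
token_embs = embed[input_ids]

# Prompt tuning: prepend the learned virtual-token embeddings to the input
model_input = np.concatenate([soft_prompt, token_embs], axis=0)
assert model_input.shape == (num_virtual + len(input_ids), d)
```

The same mechanism should apply to an encoder-decoder model like Whisper in principle, since only the input embedding sequence is modified.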
-
Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
-
Exploding loss when trying to train OpenOrca-Platypus2-13B
-
[D] Is there a difference between p-tuning and prefix tuning ?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
-
How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
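What merging does mathematically: a LoRA layer computes `Wx + (alpha/r)·BAx`, so folding `(alpha/r)·BA` into `W` yields one dense weight with identical outputs and no adapter matmuls at inference time. A numpy sketch of the idea (not PEFT's actual code):

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, alpha = 8, 2, 4  # hidden size, LoRA rank, LoRA scaling alpha

W = rng.standard_normal((d, d))  # frozen base weight
A = rng.standard_normal((r, d))  # LoRA down-projection
B = rng.standard_normal((d, r))  # LoRA up-projection (zero-init in real LoRA;
                                 # random here to stand in for a trained adapter)
x = rng.standard_normal(d)

# During LoRA inference: base path plus low-rank update, scaled by alpha/r
y_lora = W @ x + (alpha / r) * (B @ (A @ x))

# After merging: a single dense weight, same output
W_merged = W + (alpha / r) * (B @ A)
y_merged = W_merged @ x

assert np.allclose(y_lora, y_merged)
```

This is why a merged model runs on CPU like any plain checkpoint: after the fold there is nothing LoRA-specific left in the weights.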
-
How to merge the two weights into a single weight?
To obtain the original LLaMA model, one may refer to this doc. To merge a LoRA model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
-
[D] [LoRA + weight merge every N step] for pre-training?
you could use a callback, as shown here: https://github.com/huggingface/peft/issues/286, and call code to merge them there.
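The merge-every-N-steps idea, sketched in plain numpy: train only the adapter, and every N steps fold it into the base weights and reinitialize it. The gradient update is a random stand-in for an optimizer step; in the linked issue this logic would live inside a Trainer callback:

```python
import numpy as np

rng = np.random.default_rng(1)
d, r, N = 4, 2, 100

W = rng.standard_normal((d, d))           # base weights (frozen between merges)
A = rng.standard_normal((r, d)) * 0.01    # LoRA down-projection
B = np.zeros((d, r))                      # LoRA up-projection (zero-init)

for step in range(1, 301):
    # stand-in for a real optimizer step that updates only the adapter
    B += 0.001 * rng.standard_normal((d, r))
    if step % N == 0:
        # fold the adapter into the base weights (scaling omitted for brevity),
        # then reset the adapter and keep training
        W += B @ A
        A = rng.standard_normal((r, d)) * 0.01
        B = np.zeros((d, r))

assert np.allclose(B, 0)  # the adapter was just reset on the final merge step
```

The periodic merge keeps the low-rank constraint from limiting how far the accumulated update can drift from the initial weights over a long pre-training run.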
What are some alternatives?
llm - An ecosystem of Rust libraries for working with large language models
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
LLaMA_MPS - Run LLaMA inference on Apple Silicon GPUs.
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
dalai - The simplest way to run LLaMA on your local machine
alpaca-lora - Instruct-tune LLaMA on consumer hardware
wonnx - A WebGPU-accelerated ONNX inference run-time written 100% in Rust, ready for native and the web
text-generation-webui - A Gradio web UI for Large Language Models. Supports transformers, GPTQ, AWQ, EXL2, llama.cpp (GGUF), Llama models.
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
web-llm - Bringing large-language models and chat to web browsers. Everything runs inside the browser with no server support.
minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.