accelerate vs peft

| | accelerate | peft |
|---|---|---|
| Mentions | 18 | 26 |
| Stars | 6,996 | 13,877 |
| Growth | 2.9% | 3.4% |
| Activity | 9.7 | 9.7 |
| Latest commit | 1 day ago | 1 day ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
accelerate
- Can we discuss MLOps, Deployment, Optimizations, and Speed?
accelerate is a best-in-class lib for deploying models, especially across multi-gpu and multi-node.
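For readers new to the library, a minimal sketch of the core Accelerate pattern (a toy model and synthetic data, not the quoted poster's setup) shows why the same script scales from one device to many:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator()

# Toy model and synthetic data; in practice these are your real objects.
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randn(256, 16), torch.randn(256, 1)), batch_size=32)

# prepare() moves everything to the right device(s) and handles distributed wrapping.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```

Run as-is on one machine, or launch the unchanged script across GPUs/nodes with `accelerate launch`.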
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- What are the current fastest multi-gpu inference frameworks?
So I rented a cloud server today to try out some of the recent LLMs like Falcon and Vicuna. I started with Hugging Face's generate API using accelerate. It got about 2 instances/s with 8 A100 40GB GPUs, which I think is a bit slow. I was using batch size = 1 since I do not know how to do multi-batch inference using the .generate API. I did torch.compile + bf16 already. Do we have an even faster multi-gpu inference framework? I have 8 GPUs, so I was thinking about a MUCH faster speed like ~10 or 20 instances per second (or is that possible at all? I am pretty new to this field).
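As a partial answer to the multi-batch question above, batching with `.generate()` mostly comes down to left-padding the prompts and passing the attention mask; a minimal sketch (a small stand-in model and illustrative prompts):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "gpt2"  # small stand-in; the same pattern applies to larger models
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # gpt2 has no pad token by default
tokenizer.padding_side = "left"             # decoder-only models should be left-padded for generation

model = AutoModelForCausalLM.from_pretrained(model_name)

prompts = ["The capital of France is", "The tallest mountain on Earth is"]
batch = tokenizer(prompts, return_tensors="pt", padding=True)

outputs = model.generate(
    input_ids=batch["input_ids"],
    attention_mask=batch["attention_mask"],
    max_new_tokens=32,
    pad_token_id=tokenizer.eos_token_id,
)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```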
- Looking at lefnire's suggestion of splitting huggingface batches per gradient_accumulation_steps
Looking through https://github.com/huggingface/accelerate/tree/main/src/accelerate/utils/ I think it might be feasible, but will require some modifications to:
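For reference, newer Accelerate releases expose gradient accumulation directly, which may already cover part of what those modifications aim for; a rough sketch of that built-in API (toy model and data as placeholders):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from accelerate import Accelerator

accelerator = Accelerator(gradient_accumulation_steps=4)

model = torch.nn.Linear(8, 1)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
dataloader = DataLoader(TensorDataset(torch.randn(128, 8), torch.randn(128, 1)), batch_size=8)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    # Gradients are synchronized and the optimizer steps only every 4th batch.
    with accelerator.accumulate(model):
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```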
- Have to abandon my (almost) finished LLaMA-API-Inference server. If anybody finds it useful and wants to continue, the repo is yours. :)
As /u/RabbitHole32 already mentioned, the speed increase stems from a patch which modifies how a certain large tensor is distributed between the GPUs. The patch was created by /u/emvw7yf. Here you can find the respective GitHub issue: https://github.com/huggingface/accelerate/issues/1394
- Help please! SD installation broken
pip install git+https://github.com/huggingface/accelerate
- Batch Controlnet
pip install controlnet_aux
pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
- [D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM
Try to use both GPUs with this one: https://github.com/huggingface/accelerate https://huggingface.co/docs/accelerate/usage_guides/big_modeling https://huggingface.co/blog/accelerate-large-models Maybe it will help (the last link is clearer IMHO).
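As a rough illustration of what the linked big-model guides describe, `device_map="auto"` shards a checkpoint across whatever GPUs (and CPU RAM) are available; the model name below is only an example:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "facebook/opt-6.7b"  # example checkpoint, not a recommendation
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype=torch.float16,
    device_map="auto",   # Accelerate places layers on GPU 0, then GPU 1, then CPU as needed
)

inputs = tokenizer("The two GPUs in this machine", return_tensors="pt").to(model.device)
print(tokenizer.decode(model.generate(**inputs, max_new_tokens=20)[0], skip_special_tokens=True))
```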
- Fine Tuning Stable Diffusion with Dreambooth from Within My Python Code
I read through this page on accelerate, but it's not clear to me how arguments such as instance_prompt get passed in.
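For what it's worth, in the diffusers Dreambooth script `instance_prompt` is an ordinary command-line flag parsed by the script's argparse, so one way to drive it from Python code is simply to spawn `accelerate launch`; a hypothetical sketch (paths and prompt are placeholders):

```python
import subprocess

# Each flag below corresponds to an argparse argument of train_dreambooth.py.
cmd = [
    "accelerate", "launch", "train_dreambooth.py",
    "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
    "--instance_data_dir", "./my_training_images",
    "--instance_prompt", "a photo of sks dog",
    "--output_dir", "./dreambooth-output",
]
subprocess.run(cmd, check=True)
```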
- What does ACCELERATE do in AUTOMATIC1111?
To activate it you have to uncomment line 44 in webui-user.sh, or add set ACCELERATE="True" to webui-user.bat. It seems to use huggingface/accelerate (related to Microsoft DeepSpeed and the ZeRO paper).
peft
- LoftQ: LoRA-fine-tuning-aware Quantization
- Fine Tuning Mistral 7B on Magic the Gathering Draft
There is not a lot of great content out there making this clear, but basically all that matters for basic fine tuning is how much VRAM you have -- since the 3090 / 4090 have 24GB VRAM they're both pretty decent fine tuning chips. I think you could probably fine-tune a model up to ~13B parameters on one of them with PEFT (https://github.com/huggingface/peft)
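A minimal sketch of what PEFT-based LoRA setup looks like (a small stand-in base model; `target_modules` depends on the architecture):

```python
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

base_model = AutoModelForCausalLM.from_pretrained("gpt2")  # stand-in for a 7B-13B model

lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["c_attn"],   # for LLaMA-style models this would be e.g. ["q_proj", "v_proj"]
    task_type="CAUSAL_LM",
)
model = get_peft_model(base_model, lora_config)
model.print_trainable_parameters()  # typically well under 1% of the full model
```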
- Whisper prompt tuning
Hi everyone. Recently I've been looking into the PEFT library (https://github.com/huggingface/peft) and I was wondering if it would be possible to do prompt tuning with OpenAI's Whisper model. They have an example notebook for tuning Whisper with LoRA (https://colab.research.google.com/drive/1vhF8yueFqha3Y3CpTHN6q9EVcII9EYzs?usp=sharing) but I'm not sure how to go about changing it to use prompt tuning instead.
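For orientation, this is roughly what PEFT prompt tuning looks like on a generic seq2seq model (T5 as a stand-in); whether the same recipe carries over cleanly to Whisper is exactly the open question above:

```python
from transformers import AutoModelForSeq2SeqLM
from peft import PromptTuningConfig, PromptTuningInit, TaskType, get_peft_model

base_model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

config = PromptTuningConfig(
    task_type=TaskType.SEQ_2_SEQ_LM,
    prompt_tuning_init=PromptTuningInit.TEXT,
    prompt_tuning_init_text="Transcribe the following audio:",
    num_virtual_tokens=20,
    tokenizer_name_or_path="t5-small",
)
model = get_peft_model(base_model, config)
model.print_trainable_parameters()  # only the virtual prompt embeddings are trainable
```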
- Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
- PEFT 0.5 supports fine-tuning GPTQ models
- Exploding loss when trying to train OpenOrca-Platypus2-13B
- [D] Is there a difference between p-tuning and prefix tuning?
I discussed part of this here: https://github.com/huggingface/peft/issues/123
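In PEFT terms the two methods map to different configs; a side-by-side sketch (model choice is a placeholder): prefix tuning learns key/value prefixes applied at every attention layer, while p-tuning learns input-level virtual tokens produced by a small prompt encoder.

```python
from transformers import AutoModelForCausalLM
from peft import PrefixTuningConfig, PromptEncoderConfig, TaskType, get_peft_model

prefix_cfg = PrefixTuningConfig(      # prefix tuning: trainable per-layer key/value prefixes
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
)
ptuning_cfg = PromptEncoderConfig(    # p-tuning: virtual tokens via a small prompt encoder
    task_type=TaskType.CAUSAL_LM,
    num_virtual_tokens=20,
    encoder_hidden_size=128,
)

model = get_peft_model(AutoModelForCausalLM.from_pretrained("gpt2"), prefix_cfg)
model.print_trainable_parameters()
```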
- How does using QLoRAs when running Llama on CPU work?
It seems like the merge_and_unload function in this PEFT script might be what they are referring to: https://github.com/huggingface/peft/blob/main/src/peft/tuners/lora.py
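A minimal sketch of how `merge_and_unload` is typically used (model and adapter paths are placeholders): it folds the LoRA deltas into the base weights so the result is a plain transformers model that no longer needs peft at inference time.

```python
from transformers import AutoModelForCausalLM
from peft import PeftModel

base = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")   # placeholder base model
model = PeftModel.from_pretrained(base, "path/to/lora-adapter")            # placeholder adapter path

merged = model.merge_and_unload()          # weights become W + BA; adapter layers are removed
merged.save_pretrained("path/to/merged-model")
```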
- How to merge the two weights into a single weight?
To obtain the original llama model, one may refer to this doc. To merge a lora model with a base model, one may refer to PEFT or use the merge script provided by LMFlow.
- [D] [LoRA + weight merge every N step] for pre-training?
You could use a callback, like the one shown here: https://github.com/huggingface/peft/issues/286, and call the code to merge them from there.
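A hypothetical skeleton of that callback approach (the class name and `merge_every` parameter are made up here; the merge body is left as a stub because the right procedure depends on the training setup):

```python
from transformers import TrainerCallback


class PeriodicLoraMergeCallback(TrainerCallback):
    """Fires every `merge_every` optimizer steps so merge logic can run there."""

    def __init__(self, merge_every=1000):
        self.merge_every = merge_every

    def on_step_end(self, args, state, control, model=None, **kwargs):
        if state.global_step > 0 and state.global_step % self.merge_every == 0:
            # Place the merge logic here, e.g. peft's model.merge_adapter() /
            # model.unmerge_adapter() plus re-initialising the LoRA matrices.
            pass
        return control
```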
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
lora - Using Low-rank adaptation to quickly fine-tune diffusion models.
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
LoRA - Code for loralib, an implementation of "LoRA: Low-Rank Adaptation of Large Language Models"
FlexGen - Running large language models like OPT-175B/GPT-3 on a single GPU. Focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen]
alpaca-lora - Instruct-tune LLaMA on consumer hardware
horovod - Distributed training framework for TensorFlow, Keras, PyTorch, and Apache MXNet.
dalai - The simplest way to run LLaMA on your local machine
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
mlc-llm - Enable everyone to develop, optimize and deploy AI models natively on everyone's devices.
unsloth - Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
minLoRA - minLoRA: a minimal PyTorch library that allows you to apply LoRA to any PyTorch model.