| | accelerate | horovod |
|---|---|---|
| Mentions | 18 | 8 |
| Stars | 6,996 | 13,952 |
| Growth (stars, MoM) | 2.9% | 0.4% |
| Activity | 9.7 | 5.2 |
| Latest commit | 1 day ago | about 1 month ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
accelerate
-
Can we discuss MLOps, Deployment, Optimizations, and Speed?
accelerate is a best-in-class library for deploying models, especially across multi-GPU and multi-node setups.
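For context, a minimal sketch of what that looks like in practice. The model and data below are stand-ins, not from the post; launched with `accelerate launch train.py`, the same script scales from one GPU to multiple nodes:

```python
import torch
from accelerate import Accelerator

accelerator = Accelerator()  # reads the multi-GPU / multi-node setup from `accelerate config`

# Stand-in model and data; any PyTorch training loop follows the same pattern.
model = torch.nn.Linear(128, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
dataset = torch.utils.data.TensorDataset(torch.randn(1024, 128), torch.randint(0, 2, (1024,)))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=32)

# prepare() moves everything to the right devices and wraps them for distributed training.
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

model.train()
for inputs, targets in dataloader:
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(inputs), targets)
    accelerator.backward(loss)  # replaces loss.backward()
    optimizer.step()
```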
-
Code Llama - The Hugging Face Edition
In the coming days, we'll work on sharing scripts to train models, optimizations for on-device inference, even nicer demos (and for more powerful models), and more. Feel free to like our GitHub repos (transformers, peft, accelerate). Enjoy!
-
What are the current fastest multi-gpu inference frameworks?
So I rented a cloud server today to try out some of the recent LLMs like Falcon and Vicuna. I started with Hugging Face's generate API using accelerate. It got about 2 instances/s with 8 A100 40GB GPUs, which I think is a bit slow. I was using batch size = 1 since I don't know how to do multi-batch inference using the .generate API. I did torch.compile + bf16 already. Do we have an even faster multi-GPU inference framework? I have 8 GPUs, so I was thinking about MUCH faster speeds, like ~10 or 20 instances per second (or is that possible at all? I am pretty new to this field).
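Multi-batch inference with .generate mostly comes down to padding a list of prompts. A hedged sketch; the model id and prompts are placeholders, and any causal LM follows the same pattern:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "tiiuae/falcon-7b"  # placeholder; any causal LM id works
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token   # decoder-only models often lack a pad token
tokenizer.padding_side = "left"             # left-pad so generation continues from real tokens

model = AutoModelForCausalLM.from_pretrained(
    model_name, torch_dtype=torch.bfloat16, device_map="auto"  # shard across available GPUs
)

prompts = ["Explain KV caching.", "What is tensor parallelism?"]
inputs = tokenizer(prompts, return_tensors="pt", padding=True).to(model.device)

with torch.inference_mode():
    out = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.batch_decode(out, skip_special_tokens=True))
```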
-
Looking at lefnire's suggestion of splitting huggingface batches per gradient_accumulation_steps
Looking through https://github.com/huggingface/accelerate/tree/main/src/accelerate/utils/ I think it might be feasible, but will require some modifications to:
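Note that accelerate exposes gradient accumulation directly via Accelerator(gradient_accumulation_steps=...); a hedged sketch of that API, with stand-in model and data:

```python
import torch
from accelerate import Accelerator

# Stand-in model and data; the point is the accumulate() context manager.
model = torch.nn.Linear(16, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
dataset = torch.utils.data.TensorDataset(torch.randn(256, 16), torch.randn(256, 1))
dataloader = torch.utils.data.DataLoader(dataset, batch_size=8)

accelerator = Accelerator(gradient_accumulation_steps=4)
model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)

for inputs, targets in dataloader:
    with accelerator.accumulate(model):  # gradients are synced/stepped only every 4th batch
        loss = torch.nn.functional.mse_loss(model(inputs), targets)
        accelerator.backward(loss)
        optimizer.step()
        optimizer.zero_grad()
```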
-
Have to abandon my (almost) finished LLaMA-API-Inference server. If anybody finds it useful and wants to continue, the repo is yours. :)
As /u/RabbitHole32 already mentioned, the speed increase stems from a patch which modifies how a certain large tensor is distributed between the GPUs. The patch was created by /u/emvw7yf. You can find the respective GitHub issue here: https://github.com/huggingface/accelerate/issues/1394
-
Help please! SD installation broken
pip install git+https://github.com/huggingface/accelerate
-
Batch Controlnet
pip install controlnet_aux
pip install diffusers transformers git+https://github.com/huggingface/accelerate.git
-
[D] Large Language Models feasible to run on 32GB RAM / 8 GB VRAM / 24GB VRAM
Try to use both GPUs with this one: https://github.com/huggingface/accelerate https://huggingface.co/docs/accelerate/usage_guides/big_modeling https://huggingface.co/blog/accelerate-large-models Maybe it will help (the last link is clearer IMHO).
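The linked big-model guide boils down to a two-step pattern: build the model skeleton with empty weights, then dispatch checkpoint shards across the available GPUs. A minimal sketch, assuming a placeholder model id, checkpoint path, and transformer-block class name:

```python
from accelerate import init_empty_weights, load_checkpoint_and_dispatch
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("some/model-id")   # placeholder model id
with init_empty_weights():                             # allocates no real memory
    model = AutoModelForCausalLM.from_config(config)

model = load_checkpoint_and_dispatch(
    model,
    checkpoint="path/to/checkpoint",                   # placeholder local path
    device_map="auto",                                 # split layers across both GPUs
    no_split_module_classes=["DecoderLayer"],          # placeholder: keep blocks on one device
)
```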
-
Fine Tuning Stable Diffusion with Dreambooth from Within My Python Code
I read through this page on accelerate, but it's not clear to me how arguments such as instance_prompt get passed in.
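One hedged way to drive it from Python: arguments like instance_prompt are ordinary CLI flags of diffusers' train_dreambooth.py, so the script can be launched via subprocess. All paths and values below are placeholders:

```python
import subprocess

# Launch the Dreambooth training script under accelerate from Python code;
# each flag maps directly to an argument the script's argparse defines.
subprocess.run(
    [
        "accelerate", "launch", "train_dreambooth.py",
        "--pretrained_model_name_or_path", "runwayml/stable-diffusion-v1-5",
        "--instance_data_dir", "./my_images",
        "--instance_prompt", "a photo of sks dog",
        "--output_dir", "./dreambooth_out",
    ],
    check=True,  # raise if training exits with an error
)
```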
-
What does ACCELERATE do in AUTOMATIC1111?
To activate it you have to uncomment line 44 of webui-user.sh and add set ACCELERATE="True" to webui-user.bat. It seems to use huggingface/accelerate (Microsoft DeepSpeed, ZeRO paper).
horovod
-
Discussion Thread
Broke: using Horovod
-
[D] What is the recommended approach to training NN on big data set?
And in case scaling is really important to you, may I suggest you look into Horovod?
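For reference, the core Horovod pattern is small. A hedged sketch with a stand-in model and data, launched with e.g. `horovodrun -np 4 python train.py`:

```python
import torch
import horovod.torch as hvd

hvd.init()
torch.cuda.set_device(hvd.local_rank())          # one GPU per process

model = torch.nn.Linear(32, 1).cuda()            # stand-in model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())  # scale LR by worker count

# Wrap the optimizer so gradients are averaged across workers on each step,
# and start every worker from rank 0's weights.
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())
hvd.broadcast_parameters(model.state_dict(), root_rank=0)

for _ in range(100):
    inputs = torch.randn(64, 32).cuda()          # in practice, each worker reads its own shard
    targets = torch.randn(64, 1).cuda()
    optimizer.zero_grad()
    loss = torch.nn.functional.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
```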
-
Anyone know of any papers or models for segmenting satellite images of a city into things like roads, buildings, parks, etc?
Training is not the same as inference (doing the segmentation), so that scale is probably off by a lot, one or two orders of magnitude depending on the specifics of what hardware you're running on, and your training and eval dataset would be several orders of magnitude smaller. FAANGs would parallelize that training as well (I don't remember if UNet is inherently parallelizable for training) via their internal equivalent of Horovod, so they'll do a GPU-month's worth of training in less than a day.
-
Embedding Python
[[email protected]] match_arg (utils/args/args.c:163): unrecognized argument quiet
[[email protected]] HYDU_parse_array (utils/args/args.c:178): argument matching returned error
[[email protected]] parse_args (ui/mpich/utils.c:1639): error parsing input array
[[email protected]] HYD_uii_mpx_get_parameters (ui/mpich/utils.c:1691): unable to parse user arguments
[[email protected]] main (ui/mpich/mpiexec.c:127): error parsing parameters
I believe this is due to mpich being installed: https://github.com/horovod/horovod/issues/1637
-
[D] PyTorch Distributed Training Libraries: What are the current options?
Check out Horovod - https://github.com/horovod/horovod
-
[D] GPU buying recommendation
If you just want to run TensorFlow or PyTorch for a Jupyter notebook, setting up the environment shouldn't be difficult. I know that AWS has a marketplace of preconfigured images. However, you can go as advanced as setting up a cluster of GPU-equipped nodes to run Horovod (https://github.com/horovod/horovod) for distributed machine learning. Yes, there's a learning curve, but you cannot acquire this skill set any other way.
-
SKLean, TensorFlow, etc vs Spark ML?
I'm the maintainer for an open source project called Horovod that allows you to distribute deep learning training (e.g., TensorFlow) on platforms like Spark.
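Horovod's Spark integration exposes this as horovod.spark.run. A minimal hedged sketch, assuming an active Spark session and a placeholder training function:

```python
from pyspark.sql import SparkSession
import horovod.spark

spark = SparkSession.builder.getOrCreate()

def train():
    # Each Spark executor runs this function as one Horovod worker.
    import horovod.torch as hvd
    hvd.init()
    # ... an ordinary Horovod training loop would go here (placeholder) ...
    return hvd.rank()

# Launches `train` as 4 parallel tasks on the Spark cluster and
# returns each worker's result, ordered by rank.
print(horovod.spark.run(train, num_proc=4))
```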
-
Cluster machine learning
You'll want to use Horovod to run Keras in a distributed system, then use Slurm to manage the cluster and run the job.
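A hedged sketch of that Horovod + Keras combination, with a stand-in model; under Slurm you would typically launch it with srun or `horovodrun -np <N> python train.py`:

```python
import tensorflow as tf
import horovod.tensorflow.keras as hvd

hvd.init()
gpus = tf.config.list_physical_devices("GPU")
if gpus:  # pin one GPU per process
    tf.config.set_visible_devices(gpus[hvd.local_rank()], "GPU")

# Stand-in model; the Horovod-specific parts are the optimizer wrapper
# and the broadcast callback below.
model = tf.keras.Sequential([tf.keras.layers.Dense(1, input_shape=(32,))])
opt = hvd.DistributedOptimizer(tf.keras.optimizers.SGD(0.01 * hvd.size()))
model.compile(optimizer=opt, loss="mse")

x, y = tf.random.normal((1024, 32)), tf.random.normal((1024, 1))
model.fit(
    x, y, batch_size=64, epochs=2,
    # Sync initial weights from rank 0 to all workers.
    callbacks=[hvd.callbacks.BroadcastGlobalVariablesCallback(0)],
    verbose=1 if hvd.rank() == 0 else 0,  # only rank 0 prints progress
)
```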
What are some alternatives?
DeepSpeed - DeepSpeed is a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
petastorm - Petastorm library enables single machine or distributed training and evaluation of deep learning models from datasets in Apache Parquet format. It supports ML frameworks such as Tensorflow, Pytorch, and PySpark and can be used from pure Python code.
bitsandbytes - Accessible large language models via k-bit quantization for PyTorch.
DeepDanbooru - AI based multi-label girl image classification system, implemented by using TensorFlow.
FlexGen - Running large language models like OPT-175B/GPT-3 on a single GPU. Focusing on high-throughput generation. [Moved to: https://github.com/FMInference/FlexGen]
mpi4jax - Zero-copy MPI communication of JAX arrays, for turbo-charged HPC applications in Python ⚡
ChatGLM-6B - ChatGLM-6B: An Open Bilingual Dialogue Language Model | 开源双语对话语言模型
NudeNet - Neural Nets for Nudity Detection and Censoring
unsloth - Finetune Llama 3, Mistral & Gemma LLMs 2-5x faster with 80% less memory
onepanel - The open source, end-to-end computer vision platform. Label, build, train, tune, deploy and automate in a unified platform that runs on any cloud and on-premises.
stable-diffusion-webui - Stable Diffusion web UI
thinc - 🔮 A refreshing functional take on deep learning, compatible with your favorite libraries