DeepSpeed
fairscale
| | DeepSpeed | fairscale |
|---|---|---|
| Mentions | 42 | 6 |
| Stars | 25,390 | 2,333 |
| Growth | 7.9% | 3.4% |
| Activity | 9.6 | 4.9 |
| Latest commit | 4 days ago | 18 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | GNU General Public License v3.0 or later |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
DeepSpeed
- April 2023
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
- Using --deepspeed requires lots of manual tweaking
Filed a discussion item on the DeepSpeed project: https://github.com/microsoft/DeepSpeed/discussions/3531
Solution: I don't know; this is where I am stuck. https://github.com/microsoft/DeepSpeed/issues/1037 suggests that I just need to 'apt install libaio-dev', but I've done that and it doesn't help.
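Not a fix, but a quick standard-library check can at least tell whether the libaio shared library that DeepSpeed's async_io extension needs is visible at runtime; the rebuild flag mentioned in the comment is taken from DeepSpeed's op pre-build documentation and should be verified there rather than trusted from this sketch.

```python
# Minimal standard-library check: is the libaio shared library that DeepSpeed's
# async_io extension links against visible to the dynamic loader at all?
# (libaio-dev supplies the build headers; the runtime .so comes from libaio.)
import ctypes.util

aio = ctypes.util.find_library("aio")
if aio is None:
    print("libaio not found on the loader path; the async_io op cannot load")
else:
    # If the library is present but --deepspeed still fails, the extension likely
    # needs rebuilding; DeepSpeed documents prebuild flags (e.g. DS_BUILD_AIO=1),
    # though the exact flag here is an assumption to check against its docs.
    print(f"libaio found at {aio}; async_io probably needs a rebuild against it")
```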
- Whether ML computation engineering expertise will be valuable is the question.
There could be some spectrum of this expertise. For instance: https://github.com/NVIDIA/FasterTransformer, https://github.com/microsoft/DeepSpeed
- FLiPN-FLaNK Stack Weekly for 17 April 2023
- DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
- DeepSpeed-Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-Like Models
- 12-Apr-2023 AI Summary
DeepSpeed Chat: Easy, Fast and Affordable RLHF Training of ChatGPT-like Models at All Scales (https://github.com/microsoft/DeepSpeed/tree/master/blogs/deepspeed-chat)
- Microsoft DeepSpeed
- Apple: Transformer architecture optimized for Apple Silicon
I'm following this closely, together with other efforts like GPTQ quantization and Microsoft's DeepSpeed, all of which are bringing down the hardware requirements of these advanced AI models.
fairscale
- [R] TorchScale: Transformers at Scale - Microsoft 2022 Shuming Ma et al - Improves modeling generality and capability, as well as training stability and efficiency.
I skimmed through the README and paper. What does this library have that hasn't been included in xformers or fairscale?
- [D] DeepSpeed vs PyTorch native API
Things are slowly moving into PyTorch upstream, such as the ZeRO redundancy optimizer, but in my experience the team behind DeepSpeed just moves faster. There is also fairscale from the FAIR team, which seems to be a staging ground for experimental optimizations before they move into PyTorch. If you use Lightning, it's easy enough to try out these various libraries (docs here).
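For context, the piece that has landed upstream is torch.distributed.optim.ZeroRedundancyOptimizer. A minimal sketch of trying it follows; the single-node layout, one GPU per rank, torchrun launch, and layer sizes are all assumptions for illustration.

```python
# Sketch of ZeRO stage-1 style optimizer state sharding with PyTorch's upstream
# ZeroRedundancyOptimizer. Assumes a single node with one GPU per process,
# launched via `torchrun --nproc_per_node=<num_gpus> zero_demo.py`.
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")
local_rank = dist.get_rank()          # single-node assumption: rank == local rank
torch.cuda.set_device(local_rank)

model = DDP(torch.nn.Linear(2048, 2048).cuda(local_rank), device_ids=[local_rank])

# Each rank stores and updates only its shard of the Adam state, then the updated
# parameters are broadcast, instead of every rank holding the full optimizer state.
optimizer = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=torch.optim.Adam,
    lr=1e-3,
)

inputs = torch.randn(32, 2048, device=local_rank)
loss = model(inputs).sum()
loss.backward()
optimizer.step()
```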
- How to Train Large Models on Many GPUs?
DeepSpeed [1] is an amazing tool for enabling different kinds of parallelism and optimization on your model. I would definitely not recommend reimplementing everything yourself.
Probably FairScale [2] too, but I've never tried it myself.
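A hedged sketch of the "don't reimplement it yourself" route with DeepSpeed: the config values below are illustrative placeholders rather than tuned settings, and the script is assumed to be launched with the deepspeed launcher so the distributed environment is already set up.

```python
# Minimal sketch of handing a model to DeepSpeed's ZeRO optimizations.
import torch
import deepspeed

ds_config = {
    "train_micro_batch_size_per_gpu": 8,
    "optimizer": {"type": "AdamW", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,                               # shard optimizer state and gradients
        "offload_optimizer": {"device": "cpu"},   # optional CPU offload of optimizer state
    },
}

model = torch.nn.Linear(4096, 4096)

# deepspeed.initialize wraps the model in an engine that owns backward() and step().
engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)

batch = torch.randn(8, 4096, device=engine.device, dtype=torch.float16)
loss = engine(batch).float().sum()
engine.backward(loss)
engine.step()
```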
- [P] PyTorch Lightning Multi-GPU Training Visualization using minGPT, from 250 Million to 4+ Billion Parameters
It was helpful for me to see how DeepSpeed/FairScale stack up against vanilla PyTorch distributed training, specifically when trying to reach larger parameter counts, visualizing the trade-off with throughput. A lot of the learnings ended up in the Lightning documentation under the advanced GPU docs!
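A toy sketch of what such a comparison harness looks like in Lightning: the module, random data, and Trainer settings are invented placeholders, and only the strategy strings come from Lightning's documented options (the FairScale-backed "ddp_sharded" strategy existed in older Lightning releases).

```python
# Illustrative-only comparison harness: swapping the `strategy` string is how
# DeepSpeed, plain DDP, and the FairScale-backed sharded DDP get exercised
# against each other under otherwise identical settings.
import torch
import pytorch_lightning as pl

class TinyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(512, 512)

    def training_step(self, batch, batch_idx):
        return self.layer(batch).sum()

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

trainer = pl.Trainer(
    accelerator="gpu",
    devices=4,
    precision=16,                    # 16-bit; newer releases spell this "16-mixed"
    strategy="deepspeed_stage_2",    # compare with "ddp" (or "ddp_sharded" where available)
)

# Random data stands in for a real dataset purely to keep the sketch runnable.
data = torch.utils.data.DataLoader(torch.randn(256, 512), batch_size=32)
trainer.fit(TinyModule(), data)
```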
- [D] Training 10x Larger Models and Accelerating Training with ZeRO-Offloading
Facebook's FAIR has this optimizer state sharding (ZeRO), scaled & optimized by AdaScaleSGD: https://github.com/facebookresearch/fairscale#optimizer-state-sharding-zero
I created a feature request on the FairScale project so that we can track progress on the integration: Support ZeRO-Offload · Issue #337 · facebookresearch/fairscale (github.com)
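For reference, the optimizer state sharding linked above is used roughly as follows; this is a sketch of the pattern shown in the fairscale README, with the single-node torchrun launch, NCCL backend, and layer sizes all assumed for illustration.

```python
# Sketch of FairScale's ZeRO-style optimizer state sharding (OSS) plus the
# matching ShardedDataParallel wrapper. Assumes one GPU per rank on a single
# node, launched with torchrun so a process group can be initialized.
import torch
import torch.distributed as dist
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

model = torch.nn.Linear(2048, 2048).cuda(rank)

# OSS shards the wrapped optimizer's state across ranks instead of replicating
# it; the base optimizer class and its kwargs are passed straight through.
optimizer = OSS(params=model.parameters(), optim=torch.optim.SGD, lr=0.01, momentum=0.9)

# ShardedDDP reduces each gradient only to the rank that owns that parameter's
# optimizer shard, rather than all-reducing every gradient to every rank.
model = ShardedDDP(model, optimizer)

loss = model(torch.randn(32, 2048, device=rank)).sum()
loss.backward()
optimizer.step()
```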
What are some alternatives?
ColossalAI - Making large AI models cheaper, faster and more accessible
TensorRT - NVIDIA® TensorRT™, an SDK for high-performance deep learning inference, includes a deep learning inference optimizer and runtime that delivers low latency and high throughput for inference applications.
Megatron-LM - Ongoing research training transformer models at scale
fairseq - Facebook AI Research Sequence-to-Sequence Toolkit written in Python.
mesh-transformer-jax - Model parallel transformers in JAX and Haiku
llama - Inference code for LLaMA models
gpt-neox - An implementation of model parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
server - The Triton Inference Server provides an optimized cloud and edge inferencing solution.
Megatron-DeepSpeed - Ongoing research training transformer language models at scale, including: BERT & GPT-2
text-generation-webui - A gradio web UI for running Large Language Models like LLaMA, llama.cpp, GPT-J, Pythia, OPT, and GALACTICA.
Finetune_LLMs - Repo for fine-tuning GPTJ and other GPT models
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration