Fairscale Alternatives
Similar projects and alternatives to fairscale
- DeepSpeed: a deep learning optimization library that makes distributed training and inference easy, efficient, and effective.
- Megatron-DeepSpeed: ongoing research on training transformer language models at scale, including BERT & GPT-2.
- pytorch-lightning: build high-performance AI models with PyTorch Lightning (organized PyTorch) and deploy them with Lightning Apps (organized Python for end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
- gpt-neox: an implementation of model-parallel autoregressive transformers on GPUs, based on the DeepSpeed library.
- xformers: hackable and optimized Transformers building blocks, supporting a composable construction.
fairscale reviews and mentions
- [R] TorchScale: Transformers at Scale - Microsoft 2022, Shuming Ma et al. - Improves modeling generality and capability, as well as training stability and efficiency.
I skimmed through the README and paper. What does this library have that hasn't been included in xformers or fairscale?
- [D] DeepSpeed vs PyTorch native API
Things are slowly moving into PyTorch upstream, such as the ZeRO redundancy optimizer, but in my experience the team behind DeepSpeed simply moves faster. There is also fairscale from the FAIR team, which seems to be a staging ground for experimental optimizations before they move into PyTorch. If you use Lightning, it's easy enough to try out these various libraries (docs here).
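For reference, a minimal sketch of the upstreamed ZeRO redundancy optimizer mentioned in that comment (torch.distributed.optim.ZeroRedundancyOptimizer). The model, dimensions, and hyperparameters are placeholders, and the script assumes a torchrun launch with NCCL-capable GPUs.

```python
# Sketch: PyTorch's upstreamed ZeRO optimizer state sharding.
# Launch with: torchrun --nproc_per_node=<num_gpus> this_script.py
import torch
import torch.distributed as dist
from torch.distributed.optim import ZeroRedundancyOptimizer
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

model = torch.nn.Linear(2048, 2048).cuda()      # placeholder model
model = DDP(model, device_ids=[rank])

# Each rank keeps only a shard of the Adam state instead of a full replica.
optimizer = ZeroRedundancyOptimizer(
    model.parameters(),
    optimizer_class=torch.optim.Adam,
    lr=1e-3,
)

x = torch.randn(32, 2048, device="cuda")
loss = model(x).sum()
loss.backward()
optimizer.step()
```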
- How to Train Large Models on Many GPUs?
DeepSpeed [1] is an amazing tool for enabling the different kinds of parallelism and optimizations on your model. I would definitely not recommend reimplementing everything yourself.
Probably FairScale [2] too, but I have never tried it myself.
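As a rough illustration of the FairScale route [2], the sketch below wraps a placeholder model in fairscale's FullyShardedDataParallel; exact arguments can vary between fairscale versions, and the launch details are assumed to be handled by torchrun.

```python
# Sketch: sharded training with fairscale's FullyShardedDataParallel.
import torch
import torch.distributed as dist
from fairscale.nn import FullyShardedDataParallel as FSDP

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

model = torch.nn.Sequential(         # placeholder model
    torch.nn.Linear(1024, 4096),
    torch.nn.GELU(),
    torch.nn.Linear(4096, 1024),
).cuda()

# Parameters, gradients, and optimizer state are sharded across ranks.
model = FSDP(model)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

x = torch.randn(8, 1024, device="cuda")
loss = model(x).sum()
loss.backward()
optimizer.step()
```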
- [P] PyTorch Lightning Multi-GPU Training Visualization using minGPT, from 250 Million to 4+ Billion Parameters
It was helpful for me to see how DeepSpeed/FairScale stack up against vanilla PyTorch distributed training, specifically when trying to reach larger parameter counts and visualizing the trade-off with throughput. A lot of the learnings ended up in the Lightning documentation under the advanced GPU docs!
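To show what "trying out" these backends looks like in practice, here is a rough Lightning sketch: the same LightningModule is trained under different sharding backends by changing only the Trainer's strategy string. BoringModel is a stand-in module, and the exact strategy names depend on the installed Lightning version.

```python
# Sketch: switching sharding backends in PyTorch Lightning via the strategy flag.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset

class BoringModel(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)

data = DataLoader(TensorDataset(torch.randn(256, 32), torch.randn(256, 1)), batch_size=32)

# Swap "ddp" for "deepspeed_stage_2" or "fsdp" to try DeepSpeed or
# FairScale-style sharding without touching the model code
# (available strategy names vary across Lightning releases).
trainer = pl.Trainer(accelerator="gpu", devices=2, strategy="ddp", max_epochs=1)
trainer.fit(BoringModel(), data)
```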
- [D] Training 10x Larger Models and Accelerating Training with ZeRO-Offloading
Facebook's FAIR has optimizer state sharding (ZeRO), scaled and optimized with AdaScale SGD: https://github.com/facebookresearch/fairscale#optimizer-state-sharding-zero
I created a feature request on the FairScale project so that we can track progress on the integration: Support ZeRO-Offload · Issue #337 · facebookresearch/fairscale (github.com)
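For context, a minimal sketch of the optimizer state sharding (OSS) feature linked above, paired with fairscale's ShardedDataParallel. The model and hyperparameters are placeholders, the launch is assumed to be via torchrun, and the AdaScale combination mentioned in the README is omitted here for brevity.

```python
# Sketch: fairscale optimizer state sharding (ZeRO-style) with OSS + ShardedDDP.
import torch
import torch.distributed as dist
from fairscale.optim.oss import OSS
from fairscale.nn.data_parallel import ShardedDataParallel as ShardedDDP

dist.init_process_group("nccl")
rank = dist.get_rank()
torch.cuda.set_device(rank)

model = torch.nn.Linear(4096, 4096).cuda()   # placeholder model

# OSS shards the SGD state across ranks; ShardedDDP reduces each gradient
# to the rank that owns the corresponding optimizer shard.
optimizer = OSS(params=model.parameters(), optim=torch.optim.SGD, lr=0.1, momentum=0.9)
model = ShardedDDP(model, optimizer)

x = torch.randn(16, 4096, device="cuda")
loss = model(x).sum()
loss.backward()
optimizer.step()
```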
Stats
facebookresearch/fairscale is an open source project licensed under the BSD 3-Clause License, which is an OSI-approved license.
The primary programming language of fairscale is Python.