fairscale
PyTorch extensions for high performance and large scale training. (by facebookresearch)
pytorch-lightning
Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes. (by Lightning-AI)
| | fairscale | pytorch-lightning |
|---|---|---|
| Mentions | 6 | 9 |
| Stars | 2,907 | 26,952 |
| Growth | 2.4% | 1.3% |
| Activity | 4.5 | 9.9 |
| Last commit | 5 days ago | 4 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fairscale
Posts with mentions or reviews of fairscale.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-11-27.
- [R] TorchScale: Transformers at Scale - Microsoft 2022, Shuming Ma et al. Improves modeling generality and capability, as well as training stability and efficiency.
I skimmed through the README and paper. What does this library have that hasn't been included in xformers or fairscale?
- [D] DeepSpeed vs PyTorch native API
Things are slowly moving into PyTorch upstream, such as the ZeRO redundancy optimizer, but in my experience the team behind DeepSpeed just moves faster. There is also fairscale from the FAIR team, which seems to be a staging ground for experimental optimizations before they move into PyTorch. If you use Lightning, it's easy enough to try out these various libraries (docs here).
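To make the "easy enough to try out" point concrete, here is a minimal, hypothetical sketch (the tiny module, dataset, and device count are placeholders, not from the post) of how a recent Lightning Trainer switches between plain DDP and DeepSpeed ZeRO by changing only the strategy argument:

```python
# Minimal sketch: swapping distributed strategies in a Lightning Trainer.
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset
import pytorch_lightning as pl


class TinyModule(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(32, 1)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return nn.functional.mse_loss(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


data = DataLoader(TensorDataset(torch.randn(64, 32), torch.randn(64, 1)), batch_size=8)

# Baseline: plain DDP across 4 GPUs.
trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="ddp", max_epochs=1)

# Trying DeepSpeed ZeRO stage 2 is a one-line change to the strategy string
# (requires the deepspeed package to be installed):
# trainer = pl.Trainer(accelerator="gpu", devices=4, strategy="deepspeed_stage_2", max_epochs=1)

trainer.fit(TinyModule(), data)
```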
- How to Train Large Models on Many GPUs?
DeepSpeed [1] is an amazing tool for enabling the different kinds of parallelism and optimizations on your model. I would definitely not recommend reimplementing everything yourself. Probably FairScale [2] works too, but I've never tried it myself. A rough sketch of DeepSpeed's entry point follows the links below.
[1]: https://github.com/microsoft/DeepSpeed
[2]: https://github.com/facebookresearch/fairscale
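The sketch below is illustrative rather than taken from [1]: it assumes a plain `nn.Module` and a minimal ZeRO stage-2 config to show how DeepSpeed wraps a model.

```python
# Rough sketch of DeepSpeed's entry point; the model and config values are illustrative.
import torch.nn as nn
import deepspeed

model = nn.Sequential(nn.Linear(1024, 1024), nn.ReLU(), nn.Linear(1024, 10))

ds_config = {
    "train_batch_size": 32,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    # ZeRO stage 2 shards optimizer state and gradients across data-parallel ranks.
    "zero_optimization": {"stage": 2},
}

# initialize() wraps the model, builds the optimizer from the config, and returns
# an engine whose backward()/step() calls handle the sharding details.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```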
- [P] PyTorch Lightning Multi-GPU Training Visualization using minGPT, from 250 Million to 4+ Billion Parameters
It was helpful for me to see how DeepSpeed/FairScale stack up against vanilla PyTorch distributed training, specifically when trying to reach larger parameter counts, visualizing the trade-off with throughput. A lot of the learnings ended up in the Lightning documentation under the advanced GPU docs!
- [D] Training 10x Larger Models and Accelerating Training with ZeRO-Offloading
I created a feature request on the FairScale project so that we can track the progress on the integration: Support ZeRO-Offload · Issue #337 · facebookresearch/fairscale (github.com)
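For reference, ZeRO-Offload is exposed through DeepSpeed's config schema today; a hedged sketch of the relevant keys (the values are illustrative, not from the linked issue) looks like this:

```python
# Illustrative DeepSpeed config fragment enabling ZeRO-Offload: optimizer state
# and the optimizer step move to CPU, freeing GPU memory for larger models.
ds_offload_config = {
    "train_batch_size": 16,
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
}
```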
pytorch-lightning
Posts with mentions or reviews of pytorch-lightning.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2024-04-29.
- SB-1047 will stifle open-source AI and decrease safety
It's very easy to get started, right in your terminal, no fees! No credit card at all.
And there are cloud providers like https://replicate.com/ and https://lightning.ai/ that will let you use your LLM via an API key, just like you did with OpenAI, if you need that.
You don't need OpenAI - nobody does.
- Lightning AI Studios – A persistent GPU cloud environment
- How do I get started with artificial intelligence?
https://see.stanford.edu/Course/CS229 https://lightning.ai/ https://www.youtube.com/watch?v=00s9ireCnCw&t=57s https://towardsdatascience.com/
- Best practice for saving logits/activation values of model in PyTorch Lightning
I've been wondering what the recommended method is for saving logits/activations in PyTorch Lightning. I've looked at Callbacks, Loggers and ModelHooks, but none of the use cases seem to cover this kind of activity (even if I were to create my own custom variants of each utility). The ModelCheckpoint Callback makes me feel like custom Callbacks would be the way to go, but I'm not quite sure. This closed GitHub issue does address my issue to some extent.
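One possible approach (not an official Lightning recipe) is exactly the custom-Callback route the post leans toward: buffer the logits returned from `validation_step` and write them once per epoch. The hook signatures below match recent Lightning 2.x releases and may differ in 1.x; the output directory and the assumption that `validation_step` returns a dict with a `"logits"` tensor are illustrative.

```python
# Custom Callback that collects validation logits and saves them per epoch.
import os
import torch
import pytorch_lightning as pl


class SaveLogitsCallback(pl.Callback):
    def __init__(self, out_dir="saved_logits"):
        self.out_dir = out_dir
        self._buffer = []

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
        # Assumes validation_step returns a dict that includes a "logits" tensor.
        self._buffer.append(outputs["logits"].detach().cpu())

    def on_validation_epoch_end(self, trainer, pl_module):
        os.makedirs(self.out_dir, exist_ok=True)
        path = os.path.join(self.out_dir, f"epoch_{trainer.current_epoch}.pt")
        torch.save(torch.cat(self._buffer), path)
        self._buffer.clear()


# Usage: trainer = pl.Trainer(callbacks=[SaveLogitsCallback()])
```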
- New to ML, which is easier to learn - Tensorflow or PyTorch?
- PyTorch Lightning – DL framework to train, deploy, and ship AI fast
- We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning, and Hugging Face. Thanks for open-sourcing!
- An elegant and strong PyTorch Trainer
For lightweight use, pytorch-lightning is too heavy, and its source code is very difficult for beginners to read, at least for me.
- [D] Mixed Precision Training: Difference between BF16 and FP16
For the A100 GPU, theoretical performance is the same for FP16/BF16, and both use the same number of bits, so memory usage should be the same. However, since BF16 is quite newly added to PyTorch, performance still seems to depend on the underlying operators used (PyTorch Lightning debugging in progress here).
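For context, both modes are selected the same way in a recent Lightning Trainer; the precision strings below follow Lightning 2.x naming (earlier releases used `precision=16` and `precision="bf16"`), and the single-GPU setup is just an example.

```python
# Minimal sketch of selecting FP16 vs BF16 mixed precision in a Lightning Trainer.
import pytorch_lightning as pl

# FP16 mixed precision (loss scaling is needed because of the narrow exponent range).
trainer_fp16 = pl.Trainer(accelerator="gpu", devices=1, precision="16-mixed")

# BF16 mixed precision (same bit width, wider exponent, no loss scaling required).
trainer_bf16 = pl.Trainer(accelerator="gpu", devices=1, precision="bf16-mixed")
```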