| | composer | pytorch-lightning |
| --- | --- | --- |
| Mentions | 19 | 9 |
| Stars | 5,002 | 26,952 |
| Growth | 1.8% | 1.3% |
| Activity | 9.8 | 9.9 |
| Last commit | 1 day ago | 2 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
composer
- Composer – A PyTorch Library for Efficient Neural Network Training
- Train neural networks up to 7x faster
- How to Train Large Models on Many GPUs?
Mosaic's open source library is excellent: Composer https://github.com/mosaicml/composer.
* It gives you PyTorch DDP for free. Makes FSDP about as easy as can be, and provides best in class performance monitoring tools. https://docs.mosaicml.com/en/v0.12.1/notes/distributed_train...
Here's a nice intro to using Huggingface models: https://docs.mosaicml.com/en/v0.12.1/examples/finetune_huggi...
I'm just a huge fan of their developer experience. It's up there with Transformers and Datasets as the nicest tools to use.
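To make that concrete, here is a rough sketch of what fine-tuning a Hugging Face model with Composer looks like, launched through the `composer` CLI for multi-GPU DDP. `my_tokenized_dataset` is a placeholder, and the commented-out `fsdp_config` keyword reflects my understanding of the v0.12+ FSDP opt-in, so check the linked docs for the exact arguments in your version.

```python
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from composer import Trainer
from composer.models import HuggingFaceModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
hf_model = AutoModelForSequenceClassification.from_pretrained("bert-base-uncased", num_labels=2)

# Wrap the transformers model so Composer knows how to get the loss and logits.
model = HuggingFaceModel(hf_model, tokenizer=tokenizer)

# Placeholder: any map-style dataset of tokenized examples with a "labels" key.
train_dataloader = DataLoader(my_tokenized_dataset, batch_size=16, shuffle=True)

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="1ep",  # train for one epoch
    optimizers=torch.optim.AdamW(hf_model.parameters(), lr=5e-5),
    device="gpu",
    # fsdp_config={"sharding_strategy": "FULL_SHARD"},  # assumed FSDP opt-in (v0.12+)
)
trainer.fit()
```

Saved as `train.py`, running `composer -n 8 train.py` launches the same script under DDP across 8 GPUs without any process-group boilerplate in user code.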
- [D] Am I stupid for avoiding high level frameworks?
You may consider using Composer Composer by MosaicML.
- [P] Farewell, CUDA OOM: Automatic Gradient Accumulation
Which is why I'm excited to announce that we (MosaicML) just released an automatic way to avoid these errors. Namely, we just added automatic gradient accumulation to Composer, our open source library for faster + easier neural net training.
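For reference, a minimal sketch of what this looks like from the user's side. The keyword was `grad_accum="auto"` when this was announced; newer Composer releases express the same idea as `device_train_microbatch_size="auto"`, so treat the exact argument name as version-dependent.

```python
import torchvision
from torch.utils.data import DataLoader
from torchvision import datasets, transforms
from composer import Trainer
from composer.models import ComposerClassifier

model = ComposerClassifier(torchvision.models.resnet18(num_classes=10))
train_dataloader = DataLoader(
    datasets.CIFAR10("data", train=True, download=True, transform=transforms.ToTensor()),
    batch_size=2048,  # deliberately large; Composer splits it into microbatches as needed
)

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="2ep",
    device="gpu",
    grad_accum="auto",  # on CUDA OOM, retry the batch with more gradient-accumulation steps
)
trainer.fit()
```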
- I highly and genuinely recommend Fast.ai course to beginners
I would love to know your thoughts on PyTorch Lightning vs. other, even more lightweight libraries, if you have the time. PL strikes me as being less idiosyncratic than FastAI, but I'm still not sure whether it would be better in engineering work to go even more lightweight (when I'm not just writing the code myself) -- something that offers up just optimizations and a trainer, a la MosaicML's [Composer](https://github.com/mosaicml/composer) or Chris Hughes's [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) .
- 10x faster matrix and vector operations
This master's thesis sort of does it, but it doesn't have any fine-tuning yet so it completely wrecks the accuracy: https://github.com/joennlae/halutmatmul.
If someone worked on contributing this to Composer [1] I'd be down to help out. I can't justify building it all on my own right now since we're 100% focused on training speedup, but I could definitely meet and talk through it, help code tricky parts, review PRs, etc.
[1] https://github.com/mosaicml/composer
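For anyone curious what "contributing this to Composer" would involve: speedup methods live in Composer as `Algorithm` objects that the Trainer calls at well-defined events, typically doing module surgery on the model. A hypothetical skeleton (the `replace_linears_with_halut` helper does not exist and is only here to show the shape):

```python
from composer.core import Algorithm, Event


class HalutMatmul(Algorithm):
    """Hypothetical algorithm that swaps nn.Linear layers for approximate-matmul versions."""

    def match(self, event, state):
        # Run once, after the model is initialized and before training starts.
        return event == Event.INIT

    def apply(self, event, state, logger):
        replace_linears_with_halut(state.model)  # hypothetical module-surgery helper


# Enabled like any other speedup method:
# trainer = Trainer(model=model, train_dataloader=dl, algorithms=[HalutMatmul()], ...)
```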
- [D] Is anyone working on interesting ML libraries and looking for contributors?
We're always looking for contributors for Composer. tl;dr it speeds up neural net training by a lot (e.g., 7x faster ResNet-50).
- [R] Blazingly Fast Computer Vision Training with the Mosaic ResNet and Composer
Looking at this: https://github.com/mosaicml/composer
- [D] Where do we currently stand in lottery ticket hypothesis research?
pytorch-lightning
- SB-1047 will stifle open-source AI and decrease safety
It's very easy to get started, right in your Terminal, no fees! No credit card at all.
And there are cloud providers like https://replicate.com/ and https://lightning.ai/ that will let you use your LLM via an API key just like you did with OpenAI if you need that.
You don't need OpenAI - nobody does.
- Lightning AI Studios – A persistent GPU cloud environment
- How to get started with artificial intelligence?
https://see.stanford.edu/Course/CS229 https://lightning.ai/ https://www.youtube.com/watch?v=00s9ireCnCw&t=57s https://towardsdatascience.com/
- Best practice for saving logits/activation values of model in PyTorch Lightning
I've been wondering what the recommended method is for saving logits/activations with PyTorch Lightning. I've looked at Callbacks, Loggers and ModelHooks, but none of their documented use cases seem to cover this kind of activity (even if I were to create my own custom variants of each utility). The way the ModelCheckpoint callback is built makes me feel like a custom Callback would be the way to go, but I'm not quite sure. This closed GitHub issue addresses my problem to some extent.
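One answer seen in that discussion (a sketch, not an official recommendation): a small custom Callback that collects whatever `validation_step` returns and writes it out at the end of the epoch. This assumes `validation_step` returns a dict with a `"logits"` tensor, and the exact hook signature differs slightly between Lightning 1.x and 2.x.

```python
import torch
from pytorch_lightning.callbacks import Callback


class SaveLogits(Callback):
    def __init__(self, path="val_logits.pt"):
        self.path = path
        self._logits = []

    def on_validation_batch_end(self, trainer, pl_module, outputs, batch, batch_idx, dataloader_idx=0):
        # `outputs` is whatever LightningModule.validation_step returned for this batch.
        self._logits.append(outputs["logits"].detach().cpu())

    def on_validation_epoch_end(self, trainer, pl_module):
        torch.save(torch.cat(self._logits), self.path)
        self._logits.clear()


# trainer = pl.Trainer(callbacks=[SaveLogits("val_logits.pt")])
```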
- New to ML, which is easier to learn - Tensorflow or PyTorch?
- PyTorch Lightning – DL framework to train, deploy, and ship AI fast
- We just released a complete open-source solution for accelerating Stable Diffusion pretraining and fine-tuning!
Our codebase for the diffusion models builds heavily on OpenAI's ADM codebase, lucidrains, Stable Diffusion, Lightning and Hugging Face. Thanks for open-sourcing!
- An elegant and strong PyTorch Trainer
For lightweight use, pytorch-lightning is too heavy, and its source code is very difficult for beginners to read, at least for me.
- [D] Mixed Precision Training: Difference between BF16 and FP16
For the A100 GPU, theoretical throughput is the same for FP16 and BF16, and both use the same number of bits, so memory usage should be the same. However, since BF16 support is quite new in PyTorch, performance still seems to depend on the underlying operators used (PyTorch Lightning debugging in progress here).
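For context, BF16 keeps FP32's exponent range at the cost of mantissa precision, which is why it typically does not need the loss scaling FP16 requires. The switch between the two modes is a one-line change in both plain PyTorch and Lightning; the Lightning `precision` string below is the 2.x spelling ("bf16"/"16" in 1.x), so treat it as version-dependent.

```python
import torch
import pytorch_lightning as pl

# Plain PyTorch: the dtype is chosen per autocast region.
with torch.autocast(device_type="cuda", dtype=torch.bfloat16):
    ...  # forward pass; use torch.float16 here for FP16 (paired with a GradScaler)

# PyTorch Lightning: the same choice is a Trainer flag.
trainer = pl.Trainer(accelerator="gpu", devices=1, precision="bf16-mixed")
```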
What are some alternatives?
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
lnd - Lightning Network Daemon ⚡️
ffcv - FFCV: Fast Forward Computer Vision (and other ML workloads!)
Eclair - A scala implementation of the Lightning Network.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
mmdetection - OpenMMLab Detection Toolbox and Benchmark
cifar10-fast
umbrel - A beautiful home server OS for self-hosting with an app store. Buy a pre-built Umbrel Home with umbrelOS, or install on a Raspberry Pi 4, Pi 5, any Ubuntu/Debian system, or a VPS.
pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers
fastai - The fastai deep learning library
open_lth - A repository in preparation for open-sourcing lottery ticket hypothesis code.
Keras - Deep Learning for humans