| | composer | pytorch-accelerated |
|---|---|---|
| Mentions | 19 | 1 |
| Stars | 5,002 | 159 |
| Growth | 1.8% | - |
| Activity | 9.8 | 3.7 |
| Last commit | 1 day ago | 3 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
composer
- Composer – A PyTorch Library for Efficient Neural Network Training
- Train neural networks up to 7x faster
-
How to Train Large Models on Many GPUs?
Mosaic's open source library is excellent: Composer https://github.com/mosaicml/composer.
* It gives you PyTorch DDP for free, makes FSDP about as easy as it can be, and provides best-in-class performance-monitoring tools. https://docs.mosaicml.com/en/v0.12.1/notes/distributed_train...
Here's a nice intro to using Huggingface models: https://docs.mosaicml.com/en/v0.12.1/examples/finetune_huggi...
I'm just a huge fan of their developer experience. It's up there with Transformers and Datasets as the nicest tools to use.
-
[D] Am I stupid for avoiding high level frameworks?
You may consider using Composer by MosaicML.
-
[P] Farewell, CUDA OOM: Automatic Gradient Accumulation
Which is why I'm excited to announce that we (MosaicML) just released an automatic way to avoid these errors. Namely, we just added automatic gradient accumulation to Composer, our open source library for faster + easier neural net training.
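To make the idea concrete, here is a minimal pure-Python sketch of what "automatic gradient accumulation" means: if a batch doesn't fit in memory, double the accumulation factor (i.e., halve the microbatch size) and retry, so the final update is the same as one big-batch step. This is illustrative only, not Composer's actual implementation or API; `fits_in_memory` and `apply_grads` are hypothetical stand-ins for a real memory check and optimizer step.

```python
def train_step(batch, grad_accum, fits_in_memory, apply_grads):
    """One optimizer step; doubles grad_accum until microbatches fit."""
    while True:
        micro_size = max(1, len(batch) // grad_accum)
        if fits_in_memory(micro_size):
            break
        if micro_size == 1:
            raise MemoryError("even a single sample does not fit")
        grad_accum *= 2  # retry with smaller microbatches
    # accumulate "gradients" over microbatches, then apply once;
    # sum(micro) / len(batch) is a stand-in for a real backward() pass
    grads = 0.0
    for i in range(0, len(batch), micro_size):
        micro = batch[i:i + micro_size]
        grads += sum(micro) / len(batch)
    apply_grads(grads)
    return grad_accum  # remember the setting that worked
```

With a simulated memory limit of 4 samples and a batch of 16, this settles on an accumulation factor of 4 while producing the same averaged result a single large batch would, which is the property that makes OOM recovery transparent to the user.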
-
I highly and genuinely recommend Fast.ai course to beginners
I would love to know your thoughts on PyTorch Lightning vs. other, even more lightweight libraries, if you have the time. PL strikes me as being less idiosyncratic than FastAI, but I'm still not sure whether it would be better in engineering work to go even more lightweight (when I'm not just writing the code myself) -- something that offers up just optimizations and a trainer, a la MosaicML's [Composer](https://github.com/mosaicml/composer) or Chris Hughes's [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) .
-
10x faster matrix and vector operations
This master's thesis sort of does it, but it doesn't have any fine-tuning yet so it completely wrecks the accuracy: https://github.com/joennlae/halutmatmul.
If someone worked on contributing this to Composer [1] I'd be down to help out. I can't justify building it all on my own right now since we're 100% focused on training speedup, but I could definitely meet and talk through it, help code tricky parts, review PRs, etc.
[1] https://github.com/mosaicml/composer
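For context, a toy sketch of the lookup-table matmul idea behind halutmatmul: quantize fixed-size chunks of A's rows to a small codebook, precompute each centroid's dot products with the matching rows of B, then replace inner products with table lookups and sums. This is a simplified illustration, not the actual MADDNESS algorithm (which learns its hash functions); `centroids` and `sub` are hypothetical parameters introduced here.

```python
def nearest(v, centroids):
    # index of the centroid closest to subvector v (squared Euclidean)
    return min(range(len(centroids)),
               key=lambda c: sum((a - b) ** 2 for a, b in zip(v, centroids[c])))

def lut_matmul(A, B, centroids, sub):
    """Approximate A @ B by quantizing length-`sub` chunks of A's rows."""
    k, m = len(A[0]), len(B[0])
    nsub = k // sub
    # table[s][c][j] = centroids[c] . B[s*sub:(s+1)*sub, j], precomputed once
    table = [[[sum(centroids[c][d] * B[s * sub + d][j] for d in range(sub))
               for j in range(m)]
              for c in range(len(centroids))]
             for s in range(nsub)]
    out = []
    for row in A:
        codes = [nearest(row[s * sub:(s + 1) * sub], centroids)
                 for s in range(nsub)]
        # each output entry is now nsub table lookups instead of k mul-adds
        out.append([sum(table[s][codes[s]][j] for s in range(nsub))
                    for j in range(m)])
    return out
```

When the codebook happens to contain the exact subvectors, the result is exact; in general the speedup comes from trading k multiply-adds per output for nsub lookups, at some accuracy cost, which is why the fine-tuning the parent comment mentions matters.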
-
[D] Is anyone working on interesting ML libraries and looking for contributors?
We're always looking for contributors for Composer. tl;dr it speeds up neural net training by a lot (e.g., 7x faster ResNet-50).
-
[R] Blazingly Fast Computer Vision Training with the Mosaic ResNet and Composer
Looking at this: https://github.com/mosaicml/composer
-
[D] Where do we currently stand in lottery ticket hypothesis research?
pytorch-accelerated
-
I highly and genuinely recommend Fast.ai course to beginners
I would love to know your thoughts on PyTorch Lightning vs. other, even more lightweight libraries, if you have the time. PL strikes me as being less idiosyncratic than FastAI, but I'm still not sure whether it would be better in engineering work to go even more lightweight (when I'm not just writing the code myself) -- something that offers up just optimizations and a trainer, a la MosaicML's [Composer](https://github.com/mosaicml/composer) or Chris Hughes's [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) .
What are some alternatives?
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
PPO-PyTorch - Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch
ffcv - FFCV: Fast Forward Computer Vision (and other ML workloads!)
avalanche - Avalanche: an End-to-End Library for Continual Learning based on PyTorch.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
nos - Module to Automatically maximize the utilization of GPU resources in a Kubernetes cluster through real-time dynamic partitioning and elastic quotas - Effortless optimization at its finest!
cifar10-fast
Activeloop Hub - Data Lake for Deep Learning. Build, manage, query, version, & visualize datasets. Stream data real-time to PyTorch/TensorFlow. https://activeloop.ai [Moved to: https://github.com/activeloopai/deeplake]
Machine-Learning-Collection - A resource for learning about Machine learning & Deep Learning