composer vs pytorch-lightning

| | composer | pytorch-lightning |
|---|---|---|
| Mentions | 19 | 19 |
| Stars | 4,991 | 19,188 |
| Growth | 1.6% | - |
| Activity | 9.8 | 9.9 |
| Latest commit | 6 days ago | almost 2 years ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
composer
- Composer – A PyTorch Library for Efficient Neural Network Training
- Train neural networks up to 7x faster
- How to Train Large Models on Many GPUs?
Mosaic's open-source library is excellent: Composer https://github.com/mosaicml/composer.
It gives you PyTorch DDP for free, makes FSDP about as easy as can be, and provides best-in-class performance monitoring tools. https://docs.mosaicml.com/en/v0.12.1/notes/distributed_train...
Here's a nice intro to using Huggingface models: https://docs.mosaicml.com/en/v0.12.1/examples/finetune_huggi...
I'm just a huge fan of their developer experience. It's up there with Transformers and Datasets as the nicest tools to use.
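To make that concrete, here is a minimal, hedged sketch that combines the two docs pages above: wrapping a HuggingFace model with Composer and turning on FSDP sharding. `my_tokenized_dataset` is a hypothetical placeholder, and the `fsdp_config` argument follows the v0.12-era docs linked in the comment, so check current releases before relying on it.

```python
# Hedged sketch (v0.12-era Composer API): fine-tune a HuggingFace model with
# Composer's Trainer, sharded via PyTorch FSDP.
import torch
from torch.utils.data import DataLoader
from transformers import AutoModelForSequenceClassification, AutoTokenizer
from composer import Trainer
from composer.models import HuggingFaceModel

hf_model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# HuggingFaceModel adapts the HF model to Composer's ComposerModel interface.
model = HuggingFaceModel(hf_model, tokenizer=tokenizer, use_logits=True)

train_dataloader = DataLoader(my_tokenized_dataset, batch_size=16)  # placeholder dataset

trainer = Trainer(
    model=model,
    train_dataloader=train_dataloader,
    max_duration="3ep",
    optimizers=torch.optim.AdamW(model.parameters(), lr=3e-5),
    # An fsdp_config dict enables PyTorch FSDP sharding; without it, Composer
    # falls back to plain DDP when launched across multiple processes.
    fsdp_config={"sharding_strategy": "FULL_SHARD"},
)
trainer.fit()
```

Multi-GPU runs go through Composer's launcher, e.g. `composer -n 8 train.py`, which is where the "DDP for free" part comes in.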
- [D] Am I stupid for avoiding high level frameworks?
You may consider using Composer by MosaicML.
- [P] Farewell, CUDA OOM: Automatic Gradient Accumulation
Which is why I'm excited to announce that we (MosaicML) just released an automatic way to avoid these errors. Namely, we just added automatic gradient accumulation to Composer, our open source library for faster + easier neural net training.
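As a sketch of what that announcement means in code: `model` and `train_dataloader` below are placeholders for any ComposerModel and DataLoader, and the `grad_accum="auto"` argument matches the announcement-era API (later Composer versions renamed it `device_train_microbatch_size="auto"`).

```python
# Hedged sketch: grad_accum="auto" tells Composer to catch CUDA OOMs and
# retry the batch with a higher accumulation factor instead of crashing.
from composer import Trainer

trainer = Trainer(
    model=model,                        # placeholder: any ComposerModel
    train_dataloader=train_dataloader,  # placeholder DataLoader
    max_duration="10ep",
    grad_accum="auto",                  # on OOM: increase accumulation, retry the step
)
trainer.fit()
```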
- I highly and genuinely recommend Fast.ai course to beginners
I would love to know your thoughts on PyTorch Lightning vs. other, even more lightweight libraries, if you have the time. PL strikes me as being less idiosyncratic than FastAI, but I'm still not sure whether it would be better in engineering work to go even more lightweight (when I'm not just writing the code myself) -- something that offers up just optimizations and a trainer, a la MosaicML's [Composer](https://github.com/mosaicml/composer) or Chris Hughes's [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated) .
- 10x faster matrix and vector operations
This master's thesis sort of does it, but it doesn't have any fine-tuning yet so it completely wrecks the accuracy: https://github.com/joennlae/halutmatmul.
If someone worked on contributing this to Composer [1] I'd be down to help out. I can't justify building it all on my own right now since we're 100% focused on training speedup, but I could definitely meet and talk through it, help code tricky parts, review PRs, etc.
[1] https://github.com/mosaicml/composer
- [D] Is anyone working on interesting ML libraries and looking for contributors?
We're always looking for contributors for Composer. tl;dr it speeds up neural net training by a lot (e.g., 7x faster ResNet-50).
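The speedups come from stacking Composer's built-in speedup methods ("algorithms") into the Trainer. A hedged sketch follows; the algorithm names below exist in Composer, but the exact recipe behind the ResNet-50 number is documented separately, and `model`/`train_dataloader` are placeholders.

```python
# Hedged sketch: speedup methods are passed to Composer's Trainer as a list.
from composer import Trainer
from composer.algorithms import (
    BlurPool,              # anti-aliased pooling/strided convolutions
    ChannelsLast,          # NHWC memory format for faster convolutions
    LabelSmoothing,        # soften one-hot targets
    ProgressiveResizing,   # train on smaller images early in training
)

trainer = Trainer(
    model=model,                        # placeholder ComposerModel
    train_dataloader=train_dataloader,  # placeholder DataLoader
    max_duration="90ep",
    algorithms=[
        BlurPool(),
        ChannelsLast(),
        LabelSmoothing(smoothing=0.1),
        ProgressiveResizing(),
    ],
)
trainer.fit()
```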
- [R] Blazingly Fast Computer Vision Training with the Mosaic ResNet and Composer
Looking at this: https://github.com/mosaicml/composer
- [D] Where do we currently stand in lottery ticket hypothesis research?
pytorch-lightning
- Problem with pytorch lightning and optuna with multiple callbacks
    def on_validation_end(self, trainer: Trainer, pl_module: LightningModule) -> None:
        # Trainer calls `on_validation_end` for sanity check. Therefore, it is
        # necessary to avoid calling `trial.report` multiple times at epoch 0.
        # For more details, see
        # https://github.com/PyTorchLightning/pytorch-lightning/issues/1391.
        if trainer.sanity_checking:
            return
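That method comes from optuna's PyTorch Lightning pruning callback. For context, here is a simplified, hedged reconstruction of the class around it; the real implementation lives in optuna's integration module and handles more edge cases.

```python
# Hedged sketch of a pruning callback in the style of optuna's
# PyTorchLightningPruningCallback (simplified stand-in, not the real class).
import optuna
from pytorch_lightning import LightningModule, Trainer
from pytorch_lightning.callbacks import Callback

class PruningCallback(Callback):
    def __init__(self, trial: optuna.Trial, monitor: str) -> None:
        self.trial = trial
        self.monitor = monitor

    def on_validation_end(self, trainer: Trainer, pl_module: LightningModule) -> None:
        if trainer.sanity_checking:  # skip the sanity-check pass at "epoch 0"
            return
        score = trainer.callback_metrics.get(self.monitor)
        if score is None:
            return
        # Report the metric to optuna and stop the trial if it looks hopeless.
        self.trial.report(float(score), step=trainer.current_epoch)
        if self.trial.should_prune():
            raise optuna.TrialPruned()
```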
- Please comment on my planned research project structure
Under the hood, the ModelWrapper object will create an ML model based on the config (so far, an XGBoost model and a PyTorch Lightning model). Each of those will have a wrapper that conducts training and evaluation (since, from my understanding of Lightning, Trainers are required to live outside the model class). For lack of a better name, I call these wrappers Fitters. For uniformity, I thought about adding a common interface, IFitter, which all model wrappers inherit, as outlined below.
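A minimal sketch of that interface; the names IFitter and ModelWrapper come from the post, while `LightningFitter` and its method signatures are hypothetical illustrations.

```python
# Hedged sketch of the IFitter design described in the post.
from abc import ABC, abstractmethod
from typing import Any

class IFitter(ABC):
    """Common train/evaluate interface so ModelWrapper stays model-agnostic."""

    @abstractmethod
    def fit(self, train_data: Any, val_data: Any) -> None: ...

    @abstractmethod
    def evaluate(self, test_data: Any) -> dict: ...

class LightningFitter(IFitter):
    """Owns the pl.Trainer, which must live outside the LightningModule."""

    def __init__(self, model, trainer_kwargs: dict):
        import pytorch_lightning as pl
        self.model = model
        self.trainer = pl.Trainer(**trainer_kwargs)

    def fit(self, train_data, val_data):
        self.trainer.fit(self.model, train_data, val_data)

    def evaluate(self, test_data):
        results = self.trainer.test(self.model, test_data)
        return results[0] if results else {}
```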
- Watch out for the (PyTorch) Lightning
Join their Slack to ask the community questions and check out the GitHub here.
- [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms
PyTorch Lightning benchmarks against plain PyTorch on every PR (the benchmarks make sure that it is not slower).
- [D] What Repetitive Tasks Related to Machine Learning do You Hate Doing?
There is already a ton of momentum around automating ML workflows. I would suggest contributing to a preexisting project, for instance PyTorch Lightning or fast.ai.
- PyTorch Lightning
- [D] Are you using PyTorch or TensorFlow going into 2022?
Is the problem the sheer number of options, or the fact that they are all together in one place? Would it be better if they were organized into the different trainer entrypoints (fit, validate, ...)? If that is the case, there was an RFC proposing this which you might find interesting, feel free to drop by and comment on the issue: https://github.com/PyTorchLightning/pytorch-lightning/issues/10444
- [D] Colab TPU low performance
I wanted to make a quick performance comparison between the GPU (Tesla K80) and TPU (v2-8) available in Google Colab with PyTorch. To do so quickly, I used an MNIST example from pytorch-lightning that trains a simple CNN.
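For reference, a hedged sketch of that comparison setup: `LitMNIST` stands in for the simple CNN LightningModule from the pytorch-lightning MNIST example, and the `gpus`/`tpu_cores` Trainer flags match PL versions of that era (newer versions use `accelerator`/`devices` instead).

```python
# Hedged sketch: same LightningModule, trained once per accelerator.
import pytorch_lightning as pl

# GPU run (Colab Tesla K80)
trainer_gpu = pl.Trainer(gpus=1, max_epochs=3)
trainer_gpu.fit(LitMNIST())  # placeholder: the example's CNN LightningModule

# TPU run (Colab v2-8)
trainer_tpu = pl.Trainer(tpu_cores=8, max_epochs=3)
trainer_tpu.fit(LitMNIST())
```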
- [D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
We've noticed GPU 0 on our 3-GPU system is sometimes idle (which would explain performance differences). However, it's unclear to us why that may be. It's similar to this issue.
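Not a fix for the idle-GPU mystery, but the usual knobs for CPU-bound augmentation and loading live on the DataLoader itself; the values below are illustrative, not tuned, and `dataset` is a placeholder.

```python
# Common mitigations for CPU-bound input pipelines with plain torch.utils.data.
from torch.utils.data import DataLoader

loader = DataLoader(
    dataset,                  # placeholder: Dataset with augmentations in __getitem__
    batch_size=256,
    num_workers=8,            # parallelize decoding/augmentation across CPU processes
    pin_memory=True,          # page-locked buffers for faster host-to-GPU copies
    persistent_workers=True,  # avoid re-forking workers every epoch
    prefetch_factor=4,        # batches each worker preloads ahead of the GPU
)
```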
- [P] An introduction to PyKale https://github.com/pykale/pykale, a PyTorch library that provides a unified pipeline-based API for knowledge-aware multimodal learning and transfer learning on graphs, images, texts, and videos to accelerate interdisciplinary research. Welcome feedback/contribution!
If you want a good example for reference, take a look at PyTorch Lightning's readme (https://github.com/PyTorchLightning/pytorch-lightning). It answers the 3 questions of "what is this", "why should I care", and "how do I use it" almost instantly.
What are some alternatives?
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
mmdetection - OpenMMLab Detection Toolbox and Benchmark
ffcv - FFCV: Fast Forward Computer Vision (and other ML workloads!)
pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
cifar10-fast
fastai - The fastai deep learning library
pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers
sparktorch - Train and run Pytorch models on Apache Spark.
open_lth - A repository in preparation for open-sourcing lottery ticket hypothesis code.
Keras - Deep Learning for humans