nvidia-gpu-scheduler vs pytorch-lightning

| | nvidia-gpu-scheduler | pytorch-lightning |
|---|---|---|
| Mentions | 1 | 19 |
| Stars | 7 | 19,188 |
| Growth | - | - |
| Activity | 0.0 | 9.9 |
| Last commit | over 1 year ago | almost 2 years ago |
| Language | Python | Python |
| License | MIT License | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
nvidia-gpu-scheduler
- [D] How to be more productive while doing Deep Learning experiments?
Sure. No, a simple bash script is not enough. In my case, we have several machines shared in the department, some with GPUs, some without. What I have is a Python script that takes a list of jobs and then schedules them on the first available machine (according to memory/CPU/GPU availability). Unfortunately, what I have is really entangled with our computing platform (Docker-based with a shared filesystem) and not really easy to release as a standalone project (that's why I said "know your infrastructure"). The most similar thing that I could find online is this project. I believe there are also some HPC tools that could be useful (e.g. Slurm), but that's way too much for what we need.
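A minimal sketch of the kind of scheduler described above, assuming hypothetical helpers (`get_available_machines`, `submit`) standing in for the poster's Docker-based platform:

```python
import time
from dataclasses import dataclass


@dataclass
class Job:
    command: str
    min_gpu_mem_gb: float = 0.0  # 0 means the job can run on a CPU-only machine


def get_available_machines():
    """Hypothetical: query the shared machines for free memory/CPU/GPU."""
    raise NotImplementedError


def submit(machine, job):
    """Hypothetical: launch the job's command on the given machine."""
    raise NotImplementedError


def schedule(jobs):
    # Greedily place each job on the first machine that satisfies its
    # resource requirements, polling until one frees up.
    for job in jobs:
        placed = False
        while not placed:
            for machine in get_available_machines():
                # free_gpu_mem_gb is a hypothetical attribute of the machine record
                if machine.free_gpu_mem_gb >= job.min_gpu_mem_gb:
                    submit(machine, job)
                    placed = True
                    break
            if not placed:
                time.sleep(30)  # nothing fits right now; wait and poll again
```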
pytorch-lightning
- Problem with pytorch lightning and optuna with multiple callbacks
```python
def on_validation_end(self, trainer: Trainer, pl_module: LightningModule) -> None:
    # Trainer calls `on_validation_end` for sanity check. Therefore, it is necessary to avoid
    # calling `trial.report` multiple times at epoch 0. For more details, see
    # https://github.com/PyTorchLightning/pytorch-lightning/issues/1391.
    if trainer.sanity_checking:
        return
```
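For context, this snippet appears to come from the pruning callback in Optuna's PyTorch Lightning integration. A hedged sketch of how such a callback might be wired up alongside a second callback (`MyLightningModule` and the `val_loss` metric name are assumptions):

```python
import optuna
import pytorch_lightning as pl
from optuna.integration import PyTorchLightningPruningCallback


def objective(trial):
    model = MyLightningModule()  # hypothetical LightningModule that logs "val_loss"
    trainer = pl.Trainer(
        max_epochs=10,
        callbacks=[
            PyTorchLightningPruningCallback(trial, monitor="val_loss"),
            pl.callbacks.EarlyStopping(monitor="val_loss"),  # a second callback
        ],
    )
    trainer.fit(model)
    return trainer.callback_metrics["val_loss"].item()


study = optuna.create_study(direction="minimize", pruner=optuna.pruners.MedianPruner())
study.optimize(objective, n_trials=20)
```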
- Please comment on my planned research project structure
Under the hood, the ModelWrapper object will create a ML model based on the config (so far, an XGBoost model and a PyTorch Lightning model). Each of those will have a wrapper that conducts training and evaluation (since, from my understanding of Lightning, Trainers are required to live outside the model class). For lack of a better name, I call these wrappers Fitters. For uniformity, I thought about adding a common interface IFitter, which is inherited by all model wrappers as outlined below.
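A minimal sketch of what that interface could look like, using the poster's names (`IFitter`, Fitters) plus hypothetical `XGBoostFitter`/`LightningFitter` classes:

```python
from abc import ABC, abstractmethod


class IFitter(ABC):
    """Common interface for wrappers that own training and evaluation."""

    @abstractmethod
    def fit(self, train_data, val_data):
        ...

    @abstractmethod
    def evaluate(self, test_data):
        ...


class XGBoostFitter(IFitter):
    def __init__(self, config):
        import xgboost as xgb
        self.model = xgb.XGBClassifier(**config)

    def fit(self, train_data, val_data):
        X, y = train_data
        self.model.fit(X, y)

    def evaluate(self, test_data):
        X, y = test_data
        return self.model.score(X, y)


class LightningFitter(IFitter):
    def __init__(self, config):
        import pytorch_lightning as pl
        self.model = config["module"]  # a LightningModule instance
        # The Trainer lives in the Fitter, outside the model class
        self.trainer = pl.Trainer(**config.get("trainer_kwargs", {}))

    def fit(self, train_data, val_data):
        self.trainer.fit(self.model, train_data, val_data)

    def evaluate(self, test_data):
        return self.trainer.test(self.model, test_data)
```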
- Watch out for the (PyTorch) Lightning
Join their Slack to ask the community questions and check out the project on GitHub.
- [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms
PyTorch Lightning benchmarks against plain PyTorch on every PR (benchmarks to make sure that it is not slower).
- [D] What Repetitive Tasks Related to Machine Learning do You Hate Doing?
There is already a ton of momentum around automating ML workflows. I would suggest contributing to a preexisting project such as PyTorch Lightning or fast.ai.
- PyTorch Lightning
- [D] Are you using PyTorch or TensorFlow going into 2022?
Is the problem the sheer number of options, or the fact that they are all together in one place? Would it be better if they were organized into the different trainer entrypoints (fit, validate, ...)? If that is the case, there was an RFC proposing this which you might find interesting; feel free to drop by and comment on the issue: https://github.com/PyTorchLightning/pytorch-lightning/issues/10444
- [D] Colab TPU low performance
I wanted to make a quick performance comparison between the GPU (Tesla K80) and TPU (v2-8) available in Google Colab with PyTorch. To do so quickly, I used an MNIST example from pytorch-lightning that trains a simple CNN.
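A hedged sketch of how such a timing comparison can be set up with pytorch-lightning; the MNIST module itself is elided, and the `accelerator`/`devices` Trainer arguments are an assumption relative to the Lightning version the poster used:

```python
import time

import pytorch_lightning as pl


def time_fit(accelerator, devices, model, datamodule):
    # Train the same model on a given accelerator and report wall-clock time.
    trainer = pl.Trainer(max_epochs=1, accelerator=accelerator, devices=devices)
    start = time.time()
    trainer.fit(model, datamodule=datamodule)
    return time.time() - start


# Hypothetical MNIST LightningModule/DataModule, e.g. adapted from the
# official examples. Use a fresh model per run so weights don't carry over:
# gpu_seconds = time_fit("gpu", 1, MNISTModel(), MNISTDataModule())
# tpu_seconds = time_fit("tpu", 8, MNISTModel(), MNISTDataModule())
```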
- [D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
We've noticed GPU 0 on our 3-GPU system is sometimes idle (which would explain the performance differences). However, it's unclear to us why that may be. Similar to this issue.
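One common way to attack this kind of bottleneck is to keep the GPUs fed by parallelizing the input pipeline. A minimal sketch of the usual DataLoader knobs (the specific values are assumptions to tune per machine):

```python
import os

from torch.utils.data import DataLoader


def make_loader(dataset, batch_size):
    return DataLoader(
        dataset,
        batch_size=batch_size,
        shuffle=True,
        num_workers=max(1, os.cpu_count() // 2),  # run augmentations/decoding in parallel
        pin_memory=True,                          # faster host-to-GPU transfers
        persistent_workers=True,                  # avoid respawning workers each epoch
        prefetch_factor=4,                        # batches prefetched per worker
    )
```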
- [P] An introduction to PyKale https://github.com/pykale/pykale, a PyTorch library that provides a unified pipeline-based API for knowledge-aware multimodal learning and transfer learning on graphs, images, texts, and videos to accelerate interdisciplinary research. Welcome feedback/contribution!
If you want a good example for reference, take a look at PyTorch Lightning's readme (https://github.com/PyTorchLightning/pytorch-lightning). It answers the three questions of "what is this", "why should I care", and "how do I use it" almost instantly.
What are some alternatives?
detectron2 - Detectron2 is a platform for object detection, segmentation and other visual recognition tasks.
mmdetection - OpenMMLab Detection Toolbox and Benchmark
fastapi-cloud-tasks - GCP's Cloud Tasks + Cloud Scheduler + FastAPI = Partial replacement for celery.
pytorch-grad-cam - Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
stable-diffusion-nvidia-docker - GPU-ready Dockerfile to run Stability.AI stable-diffusion model v2 with a simple web interface. Includes multi-GPUs support.
tmux - tmux source code
fastai - The fastai deep learning library
Sacred - Sacred is a tool to help you configure, organize, log and reproduce experiments developed at IDSIA.
composer - Supercharge Your Model Training
aim - Aim 💫 — An easy-to-use & supercharged open-source experiment tracker.
sparktorch - Train and run Pytorch models on Apache Spark.