FairScale: PyTorch extensions for high-performance and large-scale training.
I created a feature request on the FairScale project to track progress on the integration: Support ZeRO-Offload · Issue #337 · facebookresearch/fairscale
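For context, ZeRO-Offload moves optimizer state (and optionally parameters) from GPU to CPU memory so that larger models fit on a single GPU. In DeepSpeed it is enabled through the JSON config file; the following is a minimal sketch following DeepSpeed's documented `zero_optimization` schema (the batch size value here is purely illustrative):

```json
{
  "train_batch_size": 8,
  "zero_optimization": {
    "stage": 2,
    "offload_optimizer": {
      "device": "cpu",
      "pin_memory": true
    }
  }
}
```

The feature request above asks for an equivalent offloading capability in FairScale's sharded optimizer implementations.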
PyTorch Lightning: The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
I also requested the corresponding support in PyTorch Lightning in this issue: Add deepspeed support · Issue #817 · PyTorchLightning/pytorch-lightning
[D] Colab TPU low performance
2 projects | reddit.com/r/MachineLearning | 18 Nov 2021
[D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
2 projects | reddit.com/r/MachineLearning | 10 Nov 2021
2 projects | reddit.com/r/pytorch | 24 Apr 2021
DDP with model parallelism with multi host multi GPU system
1 project | reddit.com/r/pytorch | 7 Feb 2021
PyTorch Lightning Flash appears to be copying fastai (without any credit) [D]
2 projects | reddit.com/r/MachineLearning | 5 Feb 2021