The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
We've noticed that GPU 0 on our 3-GPU system is sometimes idle (which would explain the performance differences). However, it's unclear to us why that may be. Similar to this issue
Fast numerical array expression evaluator for Python, NumPy, PyTables, pandas, bcolz and more
Are you doing any costly chained NumPy operations in your preprocessing? E.g. max(abs(large_ary)) produces multiple intermediate copies of your data; https://github.com/pydata/numexpr can greatly reduce the time spent on such operations
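A minimal sketch of the pattern that comment describes, assuming numexpr is installed (array names are illustrative): plain NumPy allocates a full-size temporary for each intermediate in a chained expression, while numexpr compiles the expression and evaluates it in chunked passes.

```python
import numpy as np
import numexpr as ne

a = np.random.rand(1_000_000)
b = np.random.rand(1_000_000)

# Plain NumPy materializes a temporary array for each intermediate
# (a**2, b**2, their sum) before sqrt runs -- several full-size copies.
plain = np.sqrt(a**2 + b**2)

# numexpr evaluates the whole expression in cache-friendly chunks,
# avoiding the full-size temporaries.
fused = ne.evaluate("sqrt(a**2 + b**2)")
```

For small arrays the difference is negligible; the savings show up once the temporaries stop fitting in cache.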
[D] Colab TPU low performance
2 projects | reddit.com/r/MachineLearning | 18 Nov 2021
2 projects | reddit.com/r/pytorch | 24 Apr 2021
DDP with model parallelism with multi host multi GPU system
1 project | reddit.com/r/pytorch | 7 Feb 2021
PyTorch Lightning Flash appears to be copying fastai (without any credit) [D]
2 projects | reddit.com/r/MachineLearning | 5 Feb 2021
[D] Training 10x Larger Models and Accelerating Training with ZeRO-Offloading
3 projects | reddit.com/r/MachineLearning | 25 Jan 2021