The lightweight PyTorch wrapper for high-performance AI research. Scale your models, not the boilerplate.
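To make the tagline concrete, here is a minimal sketch of the pattern the library sells (illustrative, not taken from the project docs; assumes pytorch_lightning is installed). The model logic lives in a LightningModule, while the training loop, device placement, and logging are handled by the framework; LitClassifier and the layer sizes are made up for this example:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F
    import pytorch_lightning as pl

    class LitClassifier(pl.LightningModule):
        """Tiny MNIST-style classifier; Lightning owns the loop and devices."""

        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(28 * 28, 128), nn.ReLU(), nn.Linear(128, 10)
            )

        def forward(self, x):
            # Flatten images to vectors before the linear stack.
            return self.net(x.view(x.size(0), -1))

        def training_step(self, batch, batch_idx):
            x, y = batch
            loss = F.cross_entropy(self(x), y)
            self.log("train_loss", loss)  # logged by Lightning, no boilerplate
            return loss

        def configure_optimizers(self):
            return torch.optim.Adam(self.parameters(), lr=1e-3)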
According to the README it's patent pending, but I learned that from this HN thread. The funny thing is I didn't even remember there was a snafu about patents; I only looked it up because of a vague recollection of the PL founder getting into a tussle over some other trivial topic (apparently it was how well PyTorch works on TPUs).
Examples and scripts using Blocks
Almost every high-level API is like that. I used Blocks (based on Theano) 6 years ago: the same 3 lines of code. https://github.com/mila-iqia/blocks-examples
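The "3 lines of code" the comment refers to is the fit pattern shared by Blocks, Keras, Lightning, and similar frameworks. In Lightning terms, reusing the hypothetical LitClassifier sketched above and assuming train_loader is an ordinary torch DataLoader:

    model = LitClassifier()             # the hypothetical module from the sketch above
    trainer = pl.Trainer(max_epochs=3)  # the framework owns the loop
    trainer.fit(model, train_loader)    # train_loader: any torch DataLoader (assumed)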
[D] Colab TPU low performance
2 projects | reddit.com/r/MachineLearning | 18 Nov 2021
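For context on the TPU thread: Lightning runs on Colab TPUs through torch_xla, selected purely via Trainer flags. A minimal sketch, assuming a Lightning version with the accelerator/devices API (older 1.x releases spelled this tpu_cores=8):

    # Use all 8 cores of a Colab TPU; requires torch_xla to be installed.
    trainer = pl.Trainer(accelerator="tpu", devices=8)
    trainer.fit(model, train_loader)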
[D] How to avoid CPU bottlenecking in PyTorch - training slowed by augmentations and data loading?
2 projects | reddit.com/r/MachineLearning | 10 Nov 2021
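The standard first fix for the bottleneck described in that thread is to push augmentation and decoding into DataLoader worker processes. A sketch using stock torch.utils.data parameters; train_dataset and the numbers are illustrative:

    from torch.utils.data import DataLoader

    train_loader = DataLoader(
        train_dataset,            # assumed: a map-style Dataset doing augmentations in __getitem__
        batch_size=64,
        num_workers=4,            # run augmentation/decoding in parallel CPU processes
        pin_memory=True,          # page-locked buffers speed up host-to-GPU copies
        persistent_workers=True,  # keep workers alive across epochs (torch >= 1.7)
    )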
2 projects | reddit.com/r/pytorch | 24 Apr 2021
DDP with model parallelism on a multi-host, multi-GPU system
1 project | reddit.com/r/pytorch | 7 Feb 2021
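On the multi-host question: plain DDP in Lightning is a matter of Trainer flags rather than hand-written torch.distributed setup; combining it with model parallelism typically means switching to a sharded strategy such as FSDP or DeepSpeed instead. A minimal sketch for the DDP part, assuming two 8-GPU hosts and a recent pytorch_lightning:

    trainer = pl.Trainer(
        accelerator="gpu",
        devices=8,        # GPUs per node
        num_nodes=2,      # participating hosts
        strategy="ddp",   # one process per GPU, gradients synced via DistributedDataParallel
    )
    trainer.fit(model, train_loader)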
[D] Training 10x Larger Models and Accelerating Training with ZeRO-Offloading
3 projects | reddit.com/r/MachineLearning | 25 Jan 2021
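ZeRO-Offload is reachable from Lightning through its DeepSpeed integration. A sketch, assuming deepspeed is installed and a Lightning 1.x release that registers the strategy alias below:

    # ZeRO stage 2: shard optimizer state and gradients and offload them to CPU RAM,
    # trading PCIe traffic for a much larger trainable model per GPU.
    trainer = pl.Trainer(
        accelerator="gpu",
        devices=4,
        precision=16,
        strategy="deepspeed_stage_2_offload",
    )
    trainer.fit(model, train_loader)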