|  | ffcv | open_lth |
|---|---|---|
| Mentions | 8 | 2 |
| Stars | 2,747 | 618 |
| Growth | 0.8% | 0.0% |
| Activity | 3.5 | 0.0 |
| Latest commit | 13 days ago | over 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed; recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects that we are tracking.
Posts where ffcv has been mentioned:
- Question: TIFF image dataset - size in RAM.
- [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms
PyTorch Lightning is also very slow compared to Composer. You don't have to believe us: our friends who wrote the FFCV library benchmarked us against PTL (see the lower-left plot in the first cluster of graphs), and you can see the difference for yourself. For the same accuracy, the FFCV folks found that Composer is about 5x faster than PTL on ResNet-50 on ImageNet.
- FFCV: Fast Forward Computer Vision
- Does anyone know where I can find research papers for preprocessing large image datasets?
maybe something like this? https://github.com/libffcv/ffcv
- Ffcv: Train models at a fraction of the cost with accelerated data loading
- Show HN: FFCV – Accelerated machine learning via fast data loading
- [P] FFCV: Accelerated Model Training via Fast Data Loading
Hi! You can join the slack directly from the link on the homepage! (ffcv.io)
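For readers who haven't used it, FFCV's speedup comes from converting a dataset into its own .beton format once, then streaming it through a compiled loading pipeline. A minimal sketch following the quickstart in FFCV's documentation (field, decoder, and transform names are from the library's docs; the toy dataset and file path are placeholders, not anything from the posts above):

```python
import numpy as np
from ffcv.writer import DatasetWriter
from ffcv.fields import RGBImageField, IntField
from ffcv.fields.decoders import SimpleRGBImageDecoder, IntDecoder
from ffcv.loader import Loader, OrderOption
from ffcv.transforms import ToTensor

class ToyDataset:
    """Placeholder map-style dataset of (HxWx3 uint8 image, int label) pairs."""
    def __len__(self):
        return 100
    def __getitem__(self, i):
        img = np.random.randint(0, 256, (32, 32, 3), dtype=np.uint8)
        return img, i % 10

# One-time conversion: serialize the dataset into FFCV's .beton format.
writer = DatasetWriter('/tmp/toy.beton', {
    'image': RGBImageField(max_resolution=256),
    'label': IntField(),
})
writer.from_indexed_dataset(ToyDataset())

# Fast loading: each field gets a decode-and-transform pipeline.
loader = Loader('/tmp/toy.beton',
                batch_size=32,
                num_workers=4,
                order=OrderOption.RANDOM,
                pipelines={
                    'image': [SimpleRGBImageDecoder(), ToTensor()],
                    'label': [IntDecoder(), ToTensor()],
                })

for images, labels in loader:
    pass  # training step goes here
```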
Posts where open_lth has been mentioned:
- [D] Where do we currently stand at in lottery ticket hypothesis research?
Here https://github.com/facebookresearch/open_lth
- [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms
The way I see it, what we're working on is really a completely new layer in the stack: speeding up the algorithm itself by changing the math. We've still taken great pains to make sure everything else in Composer runs as efficiently as it can, but - as long as you're running the same set of mathematical operations in the same order - there isn't much room to distinguish one trainer from another, and I'd guess that there isn't much of a raw speed difference between Composer and PTL in that sense. For that reason, we aren't very focused on inter-trainer speed comparisons - 10% or 20% here or there is a rounding error on the 4x or more that you can expect in the long run by changing the math. (I will say, though, that the engineers at MosaicML are really good at what they do, and Composer is performance-tuned - it absolutely wipes the floor with the OpenLTH trainer I tried to write for my PhD, even without the algorithmic speedups.)
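open_lth itself is a framework for lottery ticket experiments: train a network, prune the smallest-magnitude weights, rewind the survivors to their initial values, and retrain. A schematic PyTorch sketch of one round of that loop (function and variable names are illustrative, not open_lth's actual API, and it prunes each weight tensor independently rather than globally):

```python
import copy
import torch
import torch.nn as nn

def prune_and_rewind(model: nn.Module, init_state: dict, prune_fraction: float = 0.2):
    """One round of iterative magnitude pruning with weight rewinding:
    zero the smallest-magnitude entries of each weight tensor, then
    reset the surviving weights to their values at initialization."""
    masks = {}
    for name, param in model.named_parameters():
        if 'weight' not in name:
            continue  # prune weight tensors only; leave biases dense
        flat = param.detach().abs().flatten()
        k = int(prune_fraction * flat.numel())
        if k == 0:
            masks[name] = torch.ones_like(param)
            continue
        threshold = flat.kthvalue(k).values  # k-th smallest magnitude
        masks[name] = (param.detach().abs() > threshold).float()
    with torch.no_grad():
        for name, param in model.named_parameters():
            if name in masks:
                # Rewind: surviving weights return to their initial values
                param.copy_(init_state[name] * masks[name])
    return masks

# Usage: capture the initialization, train, then prune and rewind.
model = nn.Sequential(nn.Linear(784, 300), nn.ReLU(), nn.Linear(300, 10))
init_state = copy.deepcopy(model.state_dict())
# ... train model here ...
masks = prune_and_rewind(model, init_state, prune_fraction=0.2)
```

Repeating this train/prune/rewind loop over several rounds, and applying the masks during retraining, is the iterative procedure behind the lottery ticket experiments the post above asks about.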
What are some alternatives?
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
composer - Supercharge Your Model Training
best-of-ml-python - 🏆 A ranked list of awesome machine learning Python libraries. Updated weekly.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
apex - A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch
array_storage_benchmark - Compare some methods of array storage in Python (numpy)
ffcv-imagenet - Train ImageNet *fast* in 500 lines of code with FFCV
pillow-simd - The friendly PIL fork