composer VS open_lth

Compare composer vs open_lth and see what their differences are.

open_lth

A repository in preparation for open-sourcing lottery ticket hypothesis code. (by facebookresearch)

                composer            open_lth
Mentions        19                  2
Stars           5,002               618
Growth          1.8%                0.0%
Activity        9.8                 0.0
Last commit     1 day ago           over 1 year ago
Language        Python              Python
License         Apache License 2.0  MIT License
Mentions - the total number of mentions we've tracked, plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub.
Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits are weighted more heavily than older ones. For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

composer

Posts with mentions or reviews of composer. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-02-11.

open_lth

Posts with mentions or reviews of open_lth. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-04.
  • [D] Where do we currently stand at in lottery ticket hypothesis research?
    2 projects | /r/MachineLearning | 4 Jun 2022
    Here https://github.com/facebookresearch/open_lth
  • [P] Composer: a new PyTorch library to train models ~2-4x faster with better algorithms
    7 projects | /r/MachineLearning | 16 Mar 2022
    The way I see it, what we're working on is really a completely new layer in the stack: speeding up the algorithm itself by changing the math. We've still taken great pains to make sure everything else in Composer runs as efficiently as it can, but - as long as you're running the same set of mathematical operations in the same order - there isn't much room to distinguish one trainer from another, and I'd guess that there isn't much of a raw speed difference between Composer and PTL in that sense. For that reason, we aren't very focused on inter-trainer speed comparisons - 10% or 20% here or there is a rounding error on the 4x or more that you can expect in the long run by changing the math. (I will say, though, that the engineers at MosaicML are really good at what they do, and Composer is performance tuned - it absolutely wipes the floor with the OpenLTH trainer I tried to write for my PhD, even without the algorithmic speedups.)
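To make the quoted point concrete, here is a minimal sketch of what "changing the math" looks like in Composer: speedup algorithms are passed to the Trainer, which rewrites parts of the training loop and model rather than merely tuning the plumbing. The CIFAR-10/ResNet-18 setup is an illustrative choice, not from the original post, and exact class names and signatures may differ across Composer versions.

```python
# Minimal sketch of Composer's algorithmic-speedup approach.
# Assumes the `mosaicml` package plus torchvision; API details may
# vary by version.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms, models

from composer import Trainer
from composer.algorithms import BlurPool, ChannelsLast, LabelSmoothing
from composer.models import ComposerClassifier

train_dataloader = DataLoader(
    datasets.CIFAR10("data", train=True, download=True,
                     transform=transforms.ToTensor()),
    batch_size=128,
)

trainer = Trainer(
    model=ComposerClassifier(models.resnet18(num_classes=10), num_classes=10),
    train_dataloader=train_dataloader,
    max_duration="1ep",
    # Each algorithm changes the mathematics of training (e.g. BlurPool
    # swaps strided convs for anti-aliased pooling); the same Trainer
    # runs identically with an empty list.
    algorithms=[BlurPool(), ChannelsLast(), LabelSmoothing(smoothing=0.1)],
)
trainer.fit()
```

The design point the comment is making: the trainer's own loop is ordinary, and the speedups live in the `algorithms` list, so they compose with whatever model and data you already have.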

What are some alternatives?

When comparing composer and open_lth, you can also consider the following projects:

pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs and TPUs with zero code changes. [Repository moved to: https://github.com/Lightning-AI/lightning]

Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration

ffcv - FFCV: Fast Forward Computer Vision (and other ML workloads!)

apex - A PyTorch Extension: Tools for easy mixed precision and distributed training in Pytorch

cifar10-fast

pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers

pytorch-accelerated - A lightweight library designed to accelerate the process of training PyTorch models by providing a minimal, but extensible training loop which is flexible enough to handle the majority of use cases, and capable of utilizing different hardware options with no code changes required. Docs: https://pytorch-accelerated.readthedocs.io/en/latest/