| | composer | hamilton |
|---|---|---|
| Mentions | 19 | 26 |
| Stars | 5,002 | 878 |
| Growth | 1.8% | - |
| Activity | 9.8 | 8.1 |
| Latest commit | 1 day ago | about 1 year ago |
| Language | Python | Python |
| License | Apache License 2.0 | BSD 3-clause Clear License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
composer
- Composer – A PyTorch Library for Efficient Neural Network Training
- Train neural networks up to 7x faster
-
How to Train Large Models on Many GPUs?
Mosaic's open source library is excellent: Composer https://github.com/mosaicml/composer.
* It gives you PyTorch DDP for free. Makes FSDP about as easy as can be, and provides best-in-class performance monitoring tools. https://docs.mosaicml.com/en/v0.12.1/notes/distributed_train...
Here's a nice intro to using Huggingface models: https://docs.mosaicml.com/en/v0.12.1/examples/finetune_huggi...
I'm just a huge fan of their developer experience. It's up there with Transformers and Datasets as the nicest tools to use.
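As a rough illustration of that distributed story, a minimal sketch (the toy model and the FSDP config keys are assumptions, not copied from the linked docs): the `composer` CLI launcher spawns one process per GPU, the Trainer wires up DDP by itself, and passing an `fsdp_config` switches to sharded training.

```python
# train.py -- launch with e.g. `composer -n 8 train.py`; the launcher
# starts one process per GPU and the Trainer handles DDP on its own.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from composer import Trainer
from composer.models import ComposerClassifier

net = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
model = ComposerClassifier(net)  # wrap a plain torch module

train_loader = DataLoader(
    TensorDataset(torch.randn(512, 1, 28, 28),
                  torch.randint(0, 10, (512,))),
    batch_size=64,
)

trainer = Trainer(
    model=model,
    train_dataloader=train_loader,
    max_duration='1ep',
    # Assumed FSDP knob; omit this line for plain DDP.
    fsdp_config={'sharding_strategy': 'FULL_SHARD'},
)
trainer.fit()
```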
-
[D] Am I stupid for avoiding high level frameworks?
You may consider using Composer by MosaicML.
-
[P] Farewell, CUDA OOM: Automatic Gradient Accumulation
Which is why I'm excited to announce that we (MosaicML) just released an automatic way to avoid these errors. Namely, we just added automatic gradient accumulation to Composer, our open source library for faster + easier neural net training.
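For flavor, a minimal sketch of the user-facing side (the toy model and data are stand-ins; `grad_accum='auto'` is the flag from the announcement and requires a CUDA device): on an OOM, the trainer raises the accumulation factor and retries the same batch instead of crashing.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from composer import Trainer
from composer.models import ComposerClassifier

model = ComposerClassifier(
    nn.Sequential(nn.Flatten(), nn.Linear(32 * 32 * 3, 10)))
train_loader = DataLoader(
    TensorDataset(torch.randn(4096, 3, 32, 32),
                  torch.randint(0, 10, (4096,))),
    batch_size=2048,  # deliberately huge; 'auto' splits into microbatches
)

trainer = Trainer(
    model=model,
    train_dataloader=train_loader,
    max_duration='1ep',
    device='gpu',       # automatic accumulation only applies on GPU
    grad_accum='auto',  # catch CUDA OOM -> bump accumulation -> retry
)
trainer.fit()
```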
-
I highly and genuinely recommend Fast.ai course to beginners
I would love to know your thoughts on PyTorch Lightning vs. other, even more lightweight libraries, if you have the time. PL strikes me as being less idiosyncratic than FastAI, but I'm still not sure whether it would be better in engineering work to go even more lightweight (when I'm not just writing the code myself) -- something that offers up just optimizations and a trainer, a la MosaicML's [Composer](https://github.com/mosaicml/composer) or Chris Hughes's [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated).
-
10x faster matrix and vector operations
This master's thesis sort of does it, but it doesn't have any fine-tuning yet so it completely wrecks the accuracy: https://github.com/joennlae/halutmatmul.
If someone worked on contributing this to Composer [1] I'd be down to help out. I can't justify building it all on my own right now since we're 100% focused on training speedup, but I could definitely meet and talk through it, help code tricky parts, review PRs, etc.
[1] https://github.com/mosaicml/composer
-
[D] Is anyone working on interesting ML libraries and looking for contributors?
We're always looking for contributors for Composer. tl;dr it speeds up neural net training by a lot (e.g., 7x faster ResNet-50).
-
[R] Blazingly Fast Computer Vision Training with the Mosaic ResNet and Composer
Looking at this: https://github.com/mosaicml/composer
- [D] Where do we currently stand in lottery ticket hypothesis research?
hamilton
-
Write production-grade pandas (and other libraries!) with Hamilton
And find the repository here: https://github.com/dagworks-inc/hamilton/
-
Useful libraries for data engineering in various programming languages
Python - https://github.com/stitchfix/hamilton (author here). It's great if you want your code to always be unit testable and documentation-friendly, and you want to be able to visualize execution. Blog post on using it with Pandas: https://link.medium.com/XhyYD9BAntb.
-
Cognitive Loads in Programming
Yes! As one of the creators of https://github.com/stitchfix/hamilton, this was one of the aims: simplifying the cognitive burden for those developing and managing data transforms over the course of years, in particular for ones they didn't write!
For example, in Hamilton we force people to write "declarative functions", which are then stitched together to create a dataflow.
E.g., take a function like the one sketched below -- my guess is that you can read and understand/guess what it does very easily.
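A minimal sketch of such a declarative function, modeled on the spend/signups example from Hamilton's docs (the module name and layout are assumptions): the function's name becomes a node/column in the dataflow, and its parameter names declare which upstream columns it depends on.

```python
# transforms.py -- each function is a node in the dataflow. Hamilton
# wires spend_per_signup's `spend` parameter to the `spend` column,
# whether that comes from an input or from another function.
import pandas as pd

def avg_3wk_spend(spend: pd.Series) -> pd.Series:
    """Rolling 3-week average of marketing spend."""
    return spend.rolling(3).mean()

def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
    """Marketing spend per signup."""
    return spend / signups
```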
-
Prefect vs other things question
For (1) there are quite a few options - Prefect is one, Metaflow is another, plus Airflow, Dagster, even https://github.com/stitchfix/hamilton (core contributor here), etc.
-
Field Lineage
If you want to do more Python, https://github.com/stitchfix/hamilton allows you to model dependencies at a columnar (field) level.
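As a sketch of what that field-level modeling buys you, the Driver can render the column-level dependency graph (assuming a `transforms` module like the one sketched above, graphviz installed, and this stitchfix-era signature of `visualize_execution`):

```python
import pandas as pd
from hamilton import driver
import transforms  # hypothetical module of declarative functions

dr = driver.Driver({}, transforms)
# Render everything needed to compute the requested field, i.e. its
# column-level lineage, to a graphviz file.
dr.visualize_execution(
    ['spend_per_signup'],
    './spend_dag.dot',
    {'format': 'png'},
    inputs={'spend': pd.Series([1.0]), 'signups': pd.Series([1.0])},
)
```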
- Show HN
-
[D] Is anyone working on interesting ML libraries and looking for contributors?
Take a look at https://github.com/stitchfix/hamilton - we're after contributors who can help us grow the project, e.g. making documentation great, dogfooding features, and suggesting/contributing usability improvements.
-
Useful Python decorators for Data Scientists
For a real-world example of their power, we built an entire framework (https://github.com/stitchfix/hamilton) at Stitch Fix, where a lot of cool magic is provided via decorators - see https://hamilton-docs.gitbook.io/docs/reference/api-reference/available-decorators and these two source files (https://github.com/stitchfix/hamilton/blob/main/hamilton/function_modifiers_base.py, https://github.com/stitchfix/hamilton/blob/main/hamilton/function_modifiers.py). Note we do some non-trivial stuff via them.
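For a taste of that decorator magic, a hedged sketch using `@config.when` from `hamilton.function_modifiers` (the names and rates are made up): the double-underscore suffix is stripped, and which implementation becomes the `tax` node depends on the config dict passed to the Driver.

```python
import pandas as pd
from hamilton.function_modifiers import config

@config.when(region='US')
def tax__US(income: pd.Series) -> pd.Series:
    return income * 0.30  # placeholder rate

@config.when(region='UK')
def tax__UK(income: pd.Series) -> pd.Series:
    return income * 0.25  # placeholder rate
```

Building the Driver with `driver.Driver({'region': 'US'}, my_module)` would then select the US variant.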
-
unit tests
For data processing/transform code, I would recommend looking at https://github.com/stitchfix/hamilton, especially if you're trying to test pandas code. Short getting started here - https://towardsdatascience.com/how-to-use-hamilton-with-pandas-in-5-minutes-89f63e5af8f5 (disclaimer: I'm one of the authors).
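Since Hamilton transforms are plain Python functions, a unit test is just a direct call plus a pandas assertion. A sketch (the function and values are made up):

```python
import pandas as pd
import pandas.testing as pdt

def spend_per_signup(spend: pd.Series, signups: pd.Series) -> pd.Series:
    """A Hamilton-style transform: a pure function of its inputs."""
    return spend / signups

def test_spend_per_signup():
    actual = spend_per_signup(pd.Series([10.0, 20.0]), pd.Series([2.0, 4.0]))
    pdt.assert_series_equal(actual, pd.Series([5.0, 5.0]))
```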
-
Dealing with hundreds of customer/computed columns
The Python package Hamilton, from Stitch Fix (https://hamilton-docs.gitbook.io/docs/), can help manage transformations on pandas dataframes. The DAG of transformations is managed separately in a file, so it can be versioned in case the transformations change. Memory use is reduced because only the API-call tables and the mapping-parameter table have to be in memory; the calculated columns can be produced as needed. Just like dbt, transformations are kept separate from the source tables - but while dbt is SQL-based, Hamilton can be used on any Python object, not just dataframes.
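A sketch of the "produced as needed" part: the Driver walks the DAG and computes only the columns you request plus their dependencies (module and column names are the hypothetical ones from the sketches above):

```python
import pandas as pd
from hamilton import driver
import transforms  # hypothetical module of declarative functions

dr = driver.Driver({}, transforms)
df = dr.execute(
    ['spend_per_signup'],  # only this column and its dependencies run
    inputs={
        'spend': pd.Series([10.0, 20.0, 40.0]),
        'signups': pd.Series([1.0, 2.0, 4.0]),
    },
)
print(df)
```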
What are some alternatives?
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
prosto - Prosto is a data processing toolkit radically changing how data is processed by heavily relying on functions and operations with functions - an alternative to map-reduce and join-groupby
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
versatile-data-kit - One framework to develop, deploy and operate data workflows with Python and SQL.
ffcv - FFCV: Fast Forward Computer Vision (and other ML workloads!)
plumbing - Prismatic's Clojure(Script) utility belt
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
OpenLineage - An Open Standard for lineage metadata collection
cifar10-fast
polars - Dataframes powered by a multithreaded, vectorized query engine, written in Rust
pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers
codetour - VS Code extension that allows you to record and play back guided tours of codebases, directly within the editor.