bolt vs composer

| | bolt | composer |
|---|---|---|
| Mentions | 6 | 19 |
| Stars | 2,463 | 4,991 |
| Growth | - | 3.0% |
| Activity | 0.0 | 9.8 |
| Last commit | over 1 year ago | 3 days ago |
| Language | C++ | Python |
| License | Mozilla Public License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bolt
-
Show HN: Want something better than k-means? Try BanditPAM
> frown on that sort of dataset
That example was definitely contrived and designed to strongly illustrate the point. I'll counter slightly that non-peaky topologies aren't uncommon, but they're unlikely to look like anything that would push KMedoids into a pathological state rather than just a slightly worse one ("worse" assuming that KMeans is the right choice for a given problem).
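To make the peaky-vs-non-peaky point concrete, here's a hedged sketch on a ring-shaped dataset, where the cluster mean falls off the data manifold but a medoid must be an actual point. It assumes scikit-learn and scikit-learn-extra are installed; BanditPAM's own `banditpam` package could be swapped in for the KMedoids step:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn_extra.cluster import KMedoids  # pip install scikit-learn-extra

# A "non-peaky" dataset: points on a noisy ring.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 1000)
X = np.c_[np.cos(theta), np.sin(theta)] + rng.normal(scale=0.05, size=(1000, 2))

km = KMeans(n_clusters=1, n_init=10, random_state=0).fit(X)
kmed = KMedoids(n_clusters=1, random_state=0).fit(X)

# The mean sits near the origin, far from every point; the medoid lies on the ring.
print("k-means center  :", km.cluster_centers_[0])
print("k-medoids center:", kmed.cluster_centers_[0])
```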
> worth pointing out .. data reference
Totally agreed. I hope my answer didn't come across as too negative. It's good work, and everyone else was talking about the positives, so I just didn't want to waste too much time echoing that again while getting the other points across.
> bolt reference
https://github.com/dblalock/bolt
They say as much in their paper, but theirs isn't the first vector quantization library by any stretch. Their contributions are, roughly:
1. If you're careful about selecting the right binning strategy, you can cancel out a meaningful amount of discretization error.
2. If you do that, you can afford to choose parameters that fit everything nicely into AVX2 machine words, turning hundreds of branching instructions into 1-4 instructions.
3. Doing some real-world tests to show that (1) and (2) matter.
Last I checked, their code wasn't very effective for the places I wanted to apply it, but the paper is pretty solid. I'd replace it with a faster KMeans approximation that's less likely to crash on big data (maybe even initialized with KMedoids :) ), and if the thing you're quantizing is trainable with some sort of gradient update step, then you should do a few optimization passes in the discretized form as well.
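To make the vector quantization idea concrete, here's a minimal numpy sketch of the lookup-table trick Bolt builds on. This is classic product quantization, not Bolt's actual binning strategy or AVX2 kernels, and the random-sampling codebooks are a stand-in for a real KMeans fit:

```python
import numpy as np

# Product-quantization sketch: split vectors into M subspaces, learn K
# centroids per subspace, encode each vector as M small codes, and replace
# dot products with sums of table lookups.
rng = np.random.default_rng(0)
D, M, K = 64, 8, 16          # vector dim, subspaces, centroids per subspace
sub = D // M

X = rng.normal(size=(10_000, D))  # database vectors
q = rng.normal(size=D)            # query

# "Train" codebooks by random sampling (stand-in for a real KMeans fit).
codebooks = np.stack([X[rng.choice(len(X), K), m * sub:(m + 1) * sub] for m in range(M)])

# Encode: nearest centroid per subspace; each code fits in a byte when K <= 256.
codes = np.stack([
    np.argmin(((X[:, m * sub:(m + 1) * sub, None] - codebooks[m].T[None]) ** 2).sum(1), axis=1)
    for m in range(M)
], axis=1).astype(np.uint8)

# Query time: one small lookup table per subspace, after which each dot
# product is M table lookups and adds instead of D multiply-adds.
luts = np.stack([codebooks[m] @ q[m * sub:(m + 1) * sub] for m in range(M)])  # (M, K)
approx = luts[np.arange(M), codes].sum(axis=1)

exact = X @ q
print("mean |error|:", np.abs(approx - exact).mean())
```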
- Bolt: Faster matrix and vector operations that run on compressed data
- 10x faster matrix and vector operations
-
[R] Multiplying Matrices Without Multiplying
Code: https://github.com/dblalock/bolt
composer
- Composer – A PyTorch Library for Efficient Neural Network Training
- Train neural networks up to 7x faster
-
How to Train Large Models on Many GPUs?
Mosaic's open source library is excellent: Composer https://github.com/mosaicml/composer.
* It gives you PyTorch DDP for free, makes FSDP about as easy as it can be, and provides best-in-class performance monitoring tools. https://docs.mosaicml.com/en/v0.12.1/notes/distributed_train...
Here's a nice intro to using Huggingface models: https://docs.mosaicml.com/en/v0.12.1/examples/finetune_huggi...
I'm just a huge fan of their developer experience. It's up there with Transformers and Datasets as the nicest tools to use.
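For a sense of the shape of the API, here is a hedged sketch of a minimal v0.12-era training script: ComposerModel's forward/loss interface and the Trainer kwargs shown are from Composer's docs, while the tiny model and synthetic data are stand-ins, and the commented-out fsdp_config is illustrative since its exact schema varies by version. Launched via Composer's CLI (e.g. `composer -n 8 train.py`), DDP is set up for you:

```python
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, TensorDataset

from composer import Trainer
from composer.models import ComposerModel

class TinyClassifier(ComposerModel):
    """Minimal ComposerModel: implement forward(batch) and loss(outputs, batch)."""

    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))

    def forward(self, batch):
        x, _ = batch
        return self.net(x)

    def loss(self, outputs, batch):
        _, y = batch
        return F.cross_entropy(outputs, y)

# Synthetic stand-in for a real dataset.
loader = DataLoader(
    TensorDataset(torch.randn(512, 1, 28, 28), torch.randint(0, 10, (512,))),
    batch_size=64,
)

trainer = Trainer(
    model=TinyClassifier(),
    train_dataloader=loader,
    max_duration="2ep",  # Composer's Time syntax: two epochs
    # fsdp_config={"sharding_strategy": "FULL_SHARD"},  # illustrative; schema is version-dependent
)
trainer.fit()
```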
-
[D] Am I stupid for avoiding high level frameworks?
You may consider using Composer by MosaicML.
-
[P] Farewell, CUDA OOM: Automatic Gradient Accumulation
Which is why I'm excited to announce that we (MosaicML) just released an automatic way to avoid these errors. Namely, we just added automatic gradient accumulation to Composer, our open source library for faster + easier neural net training.
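A hedged sketch of what enabling it looked like around this release (newer Composer versions renamed the flag to device_train_microbatch_size="auto"); `model` and `train_loader` are placeholders for any ComposerModel and DataLoader, e.g. as in the training sketch earlier on this page:

```python
from composer import Trainer

trainer = Trainer(
    model=model,                    # placeholder: any ComposerModel
    train_dataloader=train_loader,  # placeholder: any DataLoader
    max_duration="1ep",
    grad_accum="auto",  # on CUDA OOM, retry the batch with more accumulation steps (GPU only)
)
trainer.fit()
```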
-
I highly and genuinely recommend Fast.ai course to beginners
I would love to know your thoughts on PyTorch Lightning vs. other, even more lightweight libraries, if you have the time. PL strikes me as being less idiosyncratic than FastAI, but I'm still not sure whether it would be better in engineering work to go even more lightweight (when I'm not just writing the code myself) -- something that offers up just optimizations and a trainer, a la MosaicML's [Composer](https://github.com/mosaicml/composer) or Chris Hughes's [pytorch-accelerated](https://github.com/Chris-hughes10/pytorch-accelerated).
-
10x faster matrix and vector operations
This master's thesis sort of does it, but it doesn't have any fine-tuning yet, so it completely wrecks the accuracy: https://github.com/joennlae/halutmatmul.
If someone worked on contributing this to Composer [1] I'd be down to help out. I can't justify building it all on my own right now since we're 100% focused on training speedup, but I could definitely meet and talk through it, help code tricky parts, review PRs, etc.
[1] https://github.com/mosaicml/composer
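For anyone wondering what "contributing this to Composer" would look like mechanically: speedup methods there are packaged as Algorithm classes that the Trainer invokes at training events. A skeleton sketch below; the two-method Algorithm interface is Composer's, while the LUT-matmul surgery itself is a hypothetical placeholder:

```python
from composer.core import Algorithm, Event

class LUTMatmul(Algorithm):
    """Hypothetical speedup method: swap dense matmuls for lookup-table approximations."""

    def match(self, event, state):
        # Run once, right after the model is set up.
        return event == Event.INIT

    def apply(self, event, state, logger):
        # A real implementation would do module surgery on state.model here,
        # e.g. replacing nn.Linear layers with LUT-based approximations.
        pass

# Passed to the Trainer like any built-in algorithm:
# trainer = Trainer(model=..., train_dataloader=..., algorithms=[LUTMatmul()])
```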
-
[D] Is anyone working on interesting ML libraries and looking for contributors?
We're always looking for contributors for Composer. tl;dr it speeds up neural net training by a lot (e.g., 7x faster ResNet-50).
-
[R] Blazingly Fast Computer Vision Training with the Mosaic ResNet and Composer
Looking at this: https://github.com/mosaicml/composer
- [D] Where do we currently stand at in lottery ticket hypothesis research?
What are some alternatives?
halutmatmul - Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator
pytorch-lightning - Build high-performance AI models with PyTorch Lightning (organized PyTorch). Deploy models with Lightning Apps (organized Python to build end-to-end ML systems). [Moved to: https://github.com/Lightning-AI/lightning]
draco - Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.
pytorch-lightning - Pretrain, finetune and deploy AI models on multiple GPUs, TPUs with zero code changes.
PGM-index - 🏅State-of-the-art learned data structure that enables fast lookup, predecessor, range searches and updates in arrays of billions of items using orders of magnitude less space than traditional indexes
ffcv - FFCV: Fast Forward Computer Vision (and other ML workloads!)
LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
heavydb - HeavyDB (formerly OmniSciDB)
cifar10-fast
Snappy - A fast compressor/decompressor
pytorch-tutorial - PyTorch Tutorial for Deep Learning Researchers