halutmatmul vs bolt

| | halutmatmul | bolt |
|---|---|---|
| Mentions | 3 | 6 |
| Stars | 203 | 2,463 |
| Growth | - | - |
| Activity | 9.4 | 0.0 |
| Last commit | 5 months ago | over 1 year ago |
| Language | Python | C++ |
| License | MIT License | Mozilla Public License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
halutmatmul
- Show HN: Stella Nera – Maddness Hardware Accelerator
- 10x faster matrix and vector operations
This master's thesis sort of does it, but it doesn't have any fine-tuning yet so it completely wrecks the accuracy: https://github.com/joennlae/halutmatmul.
If someone worked on contributing this to Composer [1] I'd be down to help out. I can't justify building it all on my own right now since we're 100% focused on training speedup, but I could definitely meet and talk through it, help code tricky parts, review PRs, etc.
[1] https://github.com/mosaicml/composer
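As a rough, hypothetical sketch of what the module-surgery half of such a contribution could look like (independent of Composer's own API, and using made-up names like `HalutLinear` rather than anything from halutmatmul), one would walk the model and swap `nn.Linear` layers for a drop-in replacement that can later route its matmul through the approximate kernel and be fine-tuned:

```python
# Hedged sketch: HalutLinear and swap_linear_layers are hypothetical names,
# not the actual halutmatmul or Composer API.
import torch.nn as nn

class HalutLinear(nn.Module):
    """Drop-in for nn.Linear; a real version would route forward() through a
    Maddness-style lookup-table matmul and fine-tune to recover accuracy."""
    def __init__(self, linear: nn.Linear):
        super().__init__()
        self.weight = nn.Parameter(linear.weight.detach().clone())
        self.bias = nn.Parameter(linear.bias.detach().clone()) if linear.bias is not None else None

    def forward(self, x):
        # Placeholder: dense matmul for now, so the swap/fine-tune plumbing runs.
        return nn.functional.linear(x, self.weight, self.bias)

def swap_linear_layers(model: nn.Module) -> nn.Module:
    """Recursively replace every nn.Linear with HalutLinear."""
    for name, child in model.named_children():
        if isinstance(child, nn.Linear):
            setattr(model, name, HalutLinear(child))
        else:
            swap_linear_layers(child)
    return model

model = swap_linear_layers(nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 10)))
print(model)
```

The missing piece the comment points out (fine-tuning after the swap so accuracy isn't wrecked) would happen after this surgery step, with the lookup tables kept trainable.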
bolt
- Show HN: Want something better than k-means? Try BanditPAM
> frown on that sort of dataset
That example was definitely contrived and designed to strongly illustrate the point. I'll counter slightly that non-peaky topologies aren't uncommon, but they're unlikely to look like anything that would push KMedoids to a pathological state rather than just a slightly worse state ("worse" assuming that KMeans is the right choice for a given problem).
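To make that trade-off concrete, here is a small illustration (not the contrived dataset from the thread), assuming scikit-learn and scikit-learn-extra are installed; BanditPAM's own package exposes a similar KMedoids interface, but it isn't assumed here:

```python
# Means get dragged toward heavy tails; medoids must be actual data points,
# so the two fail in different ways on the same data.
import numpy as np
from sklearn.cluster import KMeans
from sklearn_extra.cluster import KMedoids

rng = np.random.default_rng(0)
blob = rng.normal(loc=0.0, scale=1.0, size=(200, 2))
outliers = rng.normal(loc=50.0, scale=1.0, size=(5, 2))  # far-away mass
X = np.vstack([blob, outliers])

km = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)
kmed = KMedoids(n_clusters=2, random_state=0).fit(X)

print("KMeans centers:\n", km.cluster_centers_)
print("KMedoids medoids:\n", kmed.cluster_centers_)
```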
> worth pointing out .. data reference
Totally agreed. I hope my answer didn't come across as too negative. It's good work, and everyone else was talking about the positives, so I just didn't want to waste too much time echoing that again while getting the other points across.
> bolt reference
https://github.com/dblalock/bolt
They say as much in their paper, but they aren't the first vector quantization library by any stretch. Their contributions are, roughly:
1. If you're careful about selecting the right binning strategy, you can cancel out a meaningful amount of discretization error.
2. If you do that, you can afford to choose parameters that fit everything nicely into AVX2 machine words, turning 100s of branching instructions into 1-4 instructions.
3. Doing some real-world tests to show that (1-2) matter.
Last I checked their code wasn't very effective for the places I wanted to apply it, but the paper is pretty solid. I'd replace it with a faster KMeans approximation less likely to crash on big data (maybe even initializing with KMedoids :) ), and if the thing you're quantizing is trainable with some sort of gradient update step then you should do a few optimization passes in the discretized form as well.
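For a concrete picture of points 1-2, here is a minimal NumPy sketch of the lookup-table idea (not Bolt's actual code; all names are illustrative): rows of A are product-quantized into a small codebook per subspace, dot products against B are precomputed per prototype, and the matmul collapses into table gathers and sums. Keeping 16 prototypes per subspace means each code fits in 4 bits, which is what lets the real thing map the gathers onto AVX2 byte shuffles.

```python
# Minimal product-quantization / lookup-table approximate matmul sketch.
import numpy as np
from sklearn.cluster import KMeans

def pq_encode(A, n_subspaces=4, n_prototypes=16, seed=0):
    """Split A's columns into subspaces and learn k-means prototypes per subspace."""
    N, D = A.shape
    d = D // n_subspaces                                  # width of each subspace
    prototypes, codes = [], np.empty((N, n_subspaces), dtype=np.uint8)
    for s in range(n_subspaces):
        sub = A[:, s * d:(s + 1) * d]
        km = KMeans(n_clusters=n_prototypes, n_init=4, random_state=seed).fit(sub)
        prototypes.append(km.cluster_centers_)            # (n_prototypes, d)
        codes[:, s] = km.labels_                          # which prototype each row uses
    return prototypes, codes

def pq_matmul(prototypes, codes, B):
    """Approximate A @ B using only table lookups and sums."""
    d = prototypes[0].shape[1]
    N, M = codes.shape[0], B.shape[1]
    out = np.zeros((N, M))
    for s in range(len(prototypes)):
        # Lookup table: dot product of every prototype with every column of B.
        table = prototypes[s] @ B[s * d:(s + 1) * d, :]   # (n_prototypes, M)
        out += table[codes[:, s], :]                      # gather instead of multiply
    return out

rng = np.random.default_rng(0)
A, B = rng.normal(size=(1000, 64)), rng.normal(size=(64, 8))
protos, codes = pq_encode(A)
approx = pq_matmul(protos, codes, B)
print("relative error:", np.linalg.norm(approx - A @ B) / np.linalg.norm(A @ B))
```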
- Bolt: Faster matrix and vector operations that run on compressed data
- 10x faster matrix and vector operations
- [R] Multiplying Matrices Without Multiplying
Code: https://github.com/dblalock/bolt
What are some alternatives?
QualityScaler - image/video deep learning upscaling for any GPU
composer - Supercharge Your Model Training
kernel_tuner - Kernel Tuner
draco - Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.
3d-ken-burns - an implementation of 3D Ken Burns Effect from a Single Image using PyTorch
PGM-index - 🏅State-of-the-art learned data structure that enables fast lookup, predecessor, range searches and updates in arrays of billions of items using orders of magnitude less space than traditional indexes
LightGBM - A fast, distributed, high performance gradient boosting (GBT, GBDT, GBRT, GBM or MART) framework based on decision tree algorithms, used for ranking, classification and many other machine learning tasks.
PyTorch-Guide - PyTorch Guide
heavydb - HeavyDB (formerly OmniSciDB)