| | bolt | LightGBM |
|---|---|---|
| Mentions | 6 | 11 |
| Stars | 2,463 | 16,076 |
| Growth | - | 0.7% |
| Activity | 0.0 | 9.1 |
| Latest Commit | over 1 year ago | 3 days ago |
| Language | C++ | C++ |
| License | Mozilla Public License 2.0 | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
bolt
-
Show HN: Want something better than k-means? Try BanditPAM
> frown on that sort of dataset
That example was definitely contrived and designed to strongly illustrate the point. I'll counter slightly that non-peaky topologies aren't uncommon, but they're unlikely to look like anything that would push KMedoids into a pathological state rather than just a slightly worse one ("worse" assuming that KMeans is the right choice for a given problem).
> worth pointing out .. data reference
Totally agreed. I hope my answer didn't come across as too negative. It's good work, and everyone else was talking about the positives, so I just didn't want to waste too much time echoing that while getting the other points across.
> bolt reference
https://github.com/dblalock/bolt
They say as much in their paper, but theirs isn't the first vector quantization library by any stretch. Their contributions are, roughly:
1. If you're careful about selecting the right binning strategy, you can cancel out a meaningful amount of the discretization error.
2. If you do that, you can afford to choose parameters that fit everything nicely into AVX2 machine words, turning 100s of branching instructions into 1-4 instructions.
3. Doing some real-world tests to show that (1-2) matter.
Last I checked, their code wasn't very effective for the places I wanted to apply it, but the paper is pretty solid. I'd replace it with a faster KMeans approximation that's less likely to crash on big data (maybe even initializing with KMedoids :) ), and if the thing you're quantizing is trainable with some sort of gradient update step, then you should do a few optimization passes in the discretized form as well.
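For intuition, here's a toy numpy sketch of the lookup-table trick behind (1)-(2); it's not Bolt's actual code (random sampling stands in for the careful binning step, and the 8-bit table quantization that maps the lookups onto AVX2 byte shuffles is omitted):

```python
import numpy as np

rng = np.random.default_rng(0)
D, M, K = 64, 8, 16        # vector dim, subspaces, prototypes per subspace
sub = D // M               # dimensions per subspace

X = rng.normal(size=(10_000, D))   # database vectors
q = rng.normal(size=D)             # query vector

# "Train" K prototypes per subspace. A random sample stands in for the
# careful binning/k-means step that point (1) is about.
protos = np.stack(
    [X[rng.choice(len(X), K, replace=False), m * sub:(m + 1) * sub] for m in range(M)]
)

# Encode: each vector becomes M small codes (nearest prototype per
# subspace), i.e. 4 bits per subspace when K == 16.
codes = np.stack(
    [
        np.argmin(
            ((X[:, m * sub:(m + 1) * sub, None] - protos[m].T[None]) ** 2).sum(axis=1),
            axis=1,
        )
        for m in range(M)
    ],
    axis=1,
)

# At query time, build M lookup tables of partial dot products. Bolt also
# quantizes these tables to 8 bits so each lookup becomes a byte shuffle.
luts = np.stack([protos[m] @ q[m * sub:(m + 1) * sub] for m in range(M)])

# Approximate dot products: M table lookups and adds per database vector.
approx = luts[np.arange(M), codes].sum(axis=1)
print("correlation with exact:", np.corrcoef(approx, X @ q)[0, 1])
```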
- Bolt: Faster matrix and vector operations that run on compressed data
- 10x faster matrix and vector operations
-
[R] Multiplying Matrices Without Multiplying
Code: https://github.com/dblalock/bolt
LightGBM
-
SIRUS.jl: Interpretable Machine Learning via Rule Extraction
SIRUS.jl is a pure Julia implementation of the SIRUS algorithm by Bénard et al. (2021). SIRUS is a rule-based machine learning model, meaning that it is fully interpretable. It works by first fitting a random forest and then converting that forest into rules. Furthermore, the algorithm is stable and achieves predictive performance comparable to LightGBM, a state-of-the-art gradient boosting model created by Microsoft. Interpretability, stability, and predictive performance are described in more detail below.
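SIRUS itself is Julia, but the forest-to-rules idea is easy to caricature in Python with sklearn. A toy sketch, not the real algorithm (actual SIRUS quantizes split thresholds to empirical quantiles, keeps the paths that recur most often across trees, and then fits weights for the selected rules):

```python
from collections import Counter

from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier

X, y = load_breast_cancer(return_X_y=True)
forest = RandomForestClassifier(n_estimators=100, max_depth=2, random_state=0).fit(X, y)

# Count how often each (feature, threshold) pair appears as a root split.
# Splits that recur across many shallow trees are stable rule candidates;
# rounding the threshold is a crude stand-in for quantile quantization.
splits = Counter()
for est in forest.estimators_:
    tree = est.tree_
    splits[(int(tree.feature[0]), round(float(tree.threshold[0]), 2))] += 1

for (feat, thr), count in splits.most_common(5):
    print(f"if feature[{feat}] <= {thr}: ...  (root split in {count}/100 trees)")
```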
-
[D] RAM speeds for tabular machine learning algorithms
Hey, thanks everybody for your answers. I've asked around in the XGBoost and LightGBM repos, and some folks there also agreed that memory speed can be a bottleneck.
-
[P] LightGBM but lighter in another language?
LightGBM seems to have C API support, and there's a C++ example in the main repo.
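The C API can even be driven from Python via ctypes. A minimal sketch, assuming lib_lightgbm is on the library path and a model was saved earlier with save_model("model.txt"):

```python
import ctypes

# Library name varies by platform: lib_lightgbm.so / .dylib / .dll.
lib = ctypes.CDLL("lib_lightgbm.so")
lib.LGBM_GetLastError.restype = ctypes.c_char_p

booster = ctypes.c_void_p()
num_iterations = ctypes.c_int()
# LGBM_BoosterCreateFromModelfile loads a booster saved as a text model file.
ret = lib.LGBM_BoosterCreateFromModelfile(
    b"model.txt", ctypes.byref(num_iterations), ctypes.byref(booster)
)
if ret != 0:
    raise RuntimeError(lib.LGBM_GetLastError().decode())
print("loaded booster with", num_iterations.value, "iterations")
lib.LGBM_BoosterFree(booster)
```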
-
Use whatever is best for the problem, but still
LGBM doesn't do RF well, but it's easy to manually bag single LGBM trees.
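Roughly like this (a minimal sketch; hyperparameters are illustrative, not tuned):

```python
import numpy as np
import lightgbm as lgb

rng = np.random.default_rng(0)
X = rng.normal(size=(1_000, 10))
y = X[:, 0] + 0.1 * rng.normal(size=len(X))

# Manually bag single-tree LGBM models: each tree sees its own bootstrap
# resample plus per-tree feature subsampling, RF-style.
trees = []
for seed in range(100):
    idx = rng.integers(0, len(X), size=len(X))  # bootstrap sample
    model = lgb.LGBMRegressor(
        n_estimators=1,           # a single tree per model
        colsample_bytree=0.6,     # feature subsampling per tree
        random_state=seed,
    )
    trees.append(model.fit(X[idx], y[idx]))

# The "forest" prediction is a plain average over the bagged trees.
pred = np.mean([t.predict(X) for t in trees], axis=0)
```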
-
What's New with AWS: Amazon SageMaker built-in algorithms now provides four new Tabular Data Modeling Algorithms
LightGBM is a popular and high-performance open-source implementation of the Gradient Boosting Decision Tree (GBDT). To learn how to use this algorithm, please see example notebooks for Classification and Regression.
-
Search YouTube from the terminal written in python
Microsoft LightGBM: https://github.com/microsoft/LightGBM
-
LightGBM VS CXXGraph - a user suggested alternative
2 projects | 28 Feb 2022
-
Writing the fastest GBDT library in Rust
Here are our benchmarks on training time comparing Tangram's Gradient Boosted Decision Tree library to LightGBM, XGBoost, CatBoost, and sklearn.
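For a rough feel of how such a timing comparison is set up, a minimal harness might look like the sketch below; the real Tangram benchmarks control datasets and hyperparameters far more carefully, so treat this as illustrative only:

```python
import time

import numpy as np
import lightgbm as lgb
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(100_000, 50))
y = (X[:, 0] + rng.normal(size=len(X)) > 0).astype(int)

def bench(name, fit):
    start = time.perf_counter()
    fit()
    print(f"{name}: {time.perf_counter() - start:.2f}s")

# Match the number of trees; other defaults still differ across libraries,
# so the numbers are rough indications only.
bench("lightgbm", lambda: lgb.LGBMClassifier(n_estimators=100).fit(X, y))
bench("xgboost", lambda: xgb.XGBClassifier(n_estimators=100, tree_method="hist").fit(X, y))
```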
-
Workstation Management With Nix Flakes: Build a Cmake C++ Package
```nix
{
  inputs = {
    nixpkgs = { url = "github:nixos/nixpkgs/nixos-unstable"; };
    flake-utils = { url = "github:numtide/flake-utils"; };
  };
  outputs = { nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs { inherit system; };
        lightgbm-cli = (with pkgs;
          stdenv.mkDerivation {
            pname = "lightgbm-cli";
            version = "3.3.1";
            src = fetchgit {
              url = "https://github.com/microsoft/LightGBM";
              rev = "v3.3.1";
              sha256 = "pBrsey0RpxxvlwSKrOJEBQp7Hd9Yzr5w5OdUuyFpgF8=";
              fetchSubmodules = true;
            };
            nativeBuildInputs = [ clang cmake ];
            buildPhase = "make -j $NIX_BUILD_CORES";
            installPhase = ''
              mkdir -p $out/bin
              mv $TMP/LightGBM/lightgbm $out/bin
            '';
          });
      in rec {
        defaultApp = flake-utils.lib.mkApp { drv = defaultPackage; };
        defaultPackage = lightgbm-cli;
        devShell = pkgs.mkShell { buildInputs = with pkgs; [ lightgbm-cli ]; };
      });
}
```
-
Is it possible to clean memory after using a package that has a memory leak in my python script?
I'm working on an AutoML Python package (GitHub repo). My package uses many different algorithms, one of which is LightGBM. After training, the algorithm doesn't release its memory, even if del is called followed by gc.collect(). I created an issue on the LightGBM GitHub -> link. Because of this leak, memory consumption grows very quickly during algorithm training.
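A common workaround when a library won't return memory is to isolate each training run in a child process, so the OS reclaims everything when the process exits. A minimal sketch, where train_and_score is a hypothetical stand-in for one AutoML training step:

```python
import multiprocessing as mp

def train_and_score(params, conn):
    # Import inside the child so the leaky library never loads in the parent.
    import lightgbm as lgb  # noqa: F401
    # ... fit a model here; send back only small, picklable results ...
    conn.send({"score": 0.0})  # placeholder result
    conn.close()

if __name__ == "__main__":
    parent_conn, child_conn = mp.Pipe()
    proc = mp.Process(target=train_and_score, args=({"num_leaves": 31}, child_conn))
    proc.start()
    result = parent_conn.recv()
    proc.join()  # all memory held by the child is released to the OS here
```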
What are some alternatives?
composer - Supercharge Your Model Training
tensorflow - An Open Source Machine Learning Framework for Everyone
halutmatmul - Hashed Lookup Table based Matrix Multiplication (halutmatmul) - Stella Nera accelerator
H2O - H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
draco - Draco is a library for compressing and decompressing 3D geometric meshes and point clouds. It is intended to improve the storage and transmission of 3D graphics.
GPBoost - Combining tree-boosting with Gaussian process and mixed effects models
PGM-index - 🏅State-of-the-art learned data structure that enables fast lookup, predecessor, range searches and updates in arrays of billions of items using orders of magnitude less space than traditional indexes
amazon-sagemaker-examples - Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
heavydb - HeavyDB (formerly OmniSciDB)
yggdrasil-decision-forests - A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
Snappy - A fast compressor/decompressor
mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation