lleaves VS ngboost

Compare lleaves vs ngboost and see what their differences are.

lleaves

Compiler for LightGBM gradient-boosted trees, based on LLVM. Speeds up prediction by ≥10x. (by siboehm)
             lleaves        ngboost
Mentions     4              1
Stars        297            1,582
Growth       -              1.5%
Activity     6.7            6.7
Last commit  27 days ago    2 months ago
Language     Python         Python
License      MIT License    Apache License 2.0
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

lleaves

Posts with mentions or reviews of lleaves. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-07-08.
  • LLeaves: A LLVM-based compiler for LightGBM decision trees
    1 project | news.ycombinator.com | 8 Jul 2023
  • Cold Showers
    4 projects | news.ycombinator.com | 18 Jun 2022
    I built this decision tree (LightGBM) compiler last summer: https://github.com/siboehm/lleaves

    It gets you ~10x speedups for batch predictions, more if your model is big. It's not complicated; it ended up being <1K lines of Python code. I heard a couple of stories like yours, where people had multi-node Spark clusters running LightGBM, and it always amused me, because if you compiled the trees instead you could get rid of the whole cluster.
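
    A minimal sketch of the drop-in usage the comment describes, assuming a LightGBM model already saved to model.txt (the file name, feature count, and input data below are illustrative):

        import lleaves
        import numpy as np

        # Load a LightGBM model previously dumped with booster.save_model("model.txt")
        llvm_model = lleaves.Model(model_file="model.txt")

        # Compile the trees to native code once; every later predict() call
        # runs the compiled function instead of LightGBM's tree interpreter.
        llvm_model.compile()

        # Batch prediction is where the quoted ~10x speedup applies.
        X = np.random.rand(10_000, llvm_model.num_feature())
        preds = llvm_model.predict(X)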

  • Tree compiler that speeds up LightGBM model inference by ~30x
    2 projects | /r/dataengineering | 22 Aug 2021
    In a near-future version I'll expose some of the compilation parameters. I was somewhat afraid that an API that's too complicated would deter people who just want a no-fuss drop-in replacement for LightGBM, but as long as I keep sane defaults and make the parameters optional it should be fine. Relevant parameters are definitely block size (needs to adjust to L1i size and tree size) as well as the LLVM code model (a smaller address space increases single-batch prediction speeds but doesn't work for large models). The thread-size-specific compilation I'm still looking into; it makes the API more complicated and so might not be worth it.
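
    The knobs mentioned above were later exposed as keyword arguments to compile(); a hedged sketch, assuming the parameter names from the current lleaves documentation (fblocksize, fcodemodel) - verify them against your installed version, since this API postdates the post above:

        import lleaves

        llvm_model = lleaves.Model(model_file="model.txt")

        # Parameter names taken from the lleaves docs at the time of writing.
        llvm_model.compile(
            fblocksize=34,       # block size; tune to L1i cache size and tree size
            fcodemodel="large",  # "small" (smaller address space) can speed up
                                 # single-batch prediction but fails for large models
        )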

ngboost

Posts with mentions or reviews of ngboost. We have used some of these posts to build our list of alternatives and similar projects.
  • Exploring Estimations and Confidence Intervals of Three Point Shooting
    1 project | /r/NBATalk | 16 Aug 2021
    For DARKO, I do this by passing my time-decay/padding/kalman calculations as features (along with other pieces of biographical information) into NGboost. This gives me a point-estimate prediction, as well as confidence intervals. NGboost is somewhat limited in terms of what distributions it can handle, so you need to use care here, but it's very powerful since it can easily work with a large feature space (DARKO uses hundreds of features).
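
    A minimal sketch of this workflow using NGBoost's documented regressor API; the synthetic data and the choice of a Normal output distribution are illustrative:

        import numpy as np
        from ngboost import NGBRegressor
        from ngboost.distns import Normal

        # Synthetic stand-in for a feature matrix like the one described above.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 10))
        y = X @ rng.normal(size=10) + rng.normal(scale=0.5, size=500)

        # NGBoost fits a full predictive distribution, not just a point estimate.
        ngb = NGBRegressor(Dist=Normal, n_estimators=200).fit(X, y)

        point_estimates = ngb.predict(X)  # distribution means
        dist = ngb.pred_dist(X)           # per-row Normal(loc, scale)

        # 95% intervals straight from the fitted Normal parameters.
        lower = dist.params["loc"] - 1.96 * dist.params["scale"]
        upper = dist.params["loc"] + 1.96 * dist.params["scale"]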

What are some alternatives?

When comparing lleaves and ngboost you can also consider the following projects:

mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation

PaddlePaddle - PArallel Distributed Deep LEarning: Machine Learning Framework from Industrial Practice (PaddlePaddle core framework: high-performance single-machine and distributed training for deep learning & machine learning, with cross-platform deployment)

m2cgen - Transform ML models into native code (Java, C, Python, Go, JavaScript, Visual Basic, C#, R, PowerShell, PHP, Dart, Haskell, Ruby, F#, Rust) with zero dependencies

Keras - Deep Learning for humans

miceforest - Multiple Imputation with LightGBM in Python

natural-posterior-network - Official Implementation of "Natural Posterior Network: Deep Bayesian Predictive Uncertainty for Exponential Family Distributions" (ICLR, 2022)

catboost - A fast, scalable, high performance Gradient Boosting on Decision Trees library, used for ranking, classification, regression and other machine learning tasks for Python, R, Java, C++. Supports computation on CPU and GPU.

data-science-ipython-notebooks - Data science Python notebooks: Deep learning (TensorFlow, Theano, Caffe, Keras), scikit-learn, Kaggle, big data (Spark, Hadoop MapReduce, HDFS), matplotlib, pandas, NumPy, SciPy, Python essentials, AWS, and various command lines.

scikit-learn - scikit-learn: machine learning in Python

spaCy - 💫 Industrial-strength Natural Language Processing (NLP) in Python