mljar-supervised vs LightGBM

| | mljar-supervised | LightGBM |
|---|---|---|
| Mentions | 51 | 11 |
| Stars | 2,929 | 16,043 |
| Stars growth (monthly) | 1.2% | 1.0% |
| Activity | 8.5 | 9.2 |
| Latest commit | 11 days ago | 5 days ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
mljar-supervised
-
Show HN: Web App with GUI for AutoML on Tabular Data
The Web App uses two open-source packages that I've created:
- MLJAR AutoML - Python package for AutoML on tabular data https://github.com/mljar/mljar-supervised
- Mercury - framework for converting Jupyter Notebooks into Web App https://github.com/mljar/mercury
You can run the Web App locally. What's more, you can adjust the notebook's code to your needs. For example, you can set different validation strategies or evaluation metrics, or longer training times. The notebooks in the repo are a good starting point for developing more advanced apps.
-
Fairness in machine learning
It's an automated machine learning Python package. It's open source; you can see how it works on GitHub: https://github.com/mljar/mljar-supervised
-
[P] Build data web apps in Jupyter Notebook with Python only
Sure, at the bottom of our website you can subscribe to the newsletter.
- Show HN: AutoML Python Package for Tabular Data with Automatic Documentation
-
library / framework to test multiple sklearn regression models at once
If you need a simple and fast solution, go with auto-sklearn. Maybe a bit more complex, but very powerful, is mljar-supervised.
- Python AutoML on Tabular Data with FeatureEng, HP Tuning, Explanations, AutoDoc
-
Data Science and full-stack-web development
In my case, I had experience in both DS and software engineering. That gave me the ability to start a company that works on Data Science tools.
-
Learning Python tricks by reading other people's code. But who?
MLJAR AutoML is a Python package for Automated Machine Learning on tabular data with feature engineering, explanations, and automatic documentation.
-
'start with a simple model'
I recommend trying my AutoML package. You can easily check many different algorithms. What's more, baseline algorithms are checked (a majority-class predictor for classification and a mean predictor for regression). The advantage of AutoML is that it is really quick: you don't need to write preprocessing code, just call the fit method.
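For context, those baselines are as simple as they sound. scikit-learn's dummy estimators (used here as a stand-in to illustrate the idea, not mljar's internal code) reproduce them:

```python
import numpy as np
from sklearn.dummy import DummyClassifier, DummyRegressor

X = np.array([[0], [1], [2], [3]])
y_cls = np.array([0, 0, 0, 1])          # majority class is 0
y_reg = np.array([1.0, 2.0, 3.0, 4.0])  # mean is 2.5

# Majority-class baseline for classification.
clf = DummyClassifier(strategy="most_frequent").fit(X, y_cls)
# Mean baseline for regression.
reg = DummyRegressor(strategy="mean").fit(X, y_reg)

print(clf.predict(X))  # [0 0 0 0]
print(reg.predict(X))  # [2.5 2.5 2.5 2.5]
```

Any model worth keeping should beat these; that is what makes them a useful sanity check in an AutoML run.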
LightGBM
-
SIRUS.jl: Interpretable Machine Learning via Rule Extraction
SIRUS.jl is a pure Julia implementation of the SIRUS algorithm by Bénard et al. (2021). The algorithm is a rule-based machine learning model, meaning that it is fully interpretable. It achieves this by first fitting a random forest and then converting the forest to rules. Furthermore, the algorithm is stable and achieves predictive performance comparable to LightGBM, a state-of-the-art gradient boosting model created by Microsoft. Interpretability, stability, and predictive performance are described in more detail below.
-
[D] RAM speeds for tabular machine learning algorithms
Hey, thanks everybody for your answers. I've asked around in the XGBoost and LightGBM repos, and some folks there also agreed that memory speed will be a bottleneck.
-
[P] LightGBM but lighter in another language?
LightGBM seems to have C API support, and there is a C++ example in the main repo.
-
Use whatever is best for the problem, but still
LGBM doesn't do RF well, but it's easy to manually bag single LGBM trees.
-
What's New with AWS: Amazon SageMaker built-in algorithms now provides four new Tabular Data Modeling Algorithms
LightGBM is a popular and high-performance open-source implementation of the Gradient Boosting Decision Tree (GBDT). To learn how to use this algorithm, see the example notebooks for Classification and Regression.
-
Search YouTube from the terminal written in python
Microsoft LightGBM: https://github.com/microsoft/LightGBM
-
LightGBM VS CXXGraph - a user suggested alternative
2 projects | 28 Feb 2022
-
Writing the fastest GBDT libary in Rust
Here are our benchmarks on training time comparing Tangram's Gradient Boosted Decision Tree Library to LightGBM, XGBoost, CatBoost, and sklearn.
-
Workstation Management With Nix Flakes: Build a Cmake C++ Package
{
  inputs = {
    nixpkgs = { url = "github:nixos/nixpkgs/nixos-unstable"; };
    flake-utils = { url = "github:numtide/flake-utils"; };
  };
  outputs = { nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs { inherit system; };
        lightgbm-cli = (with pkgs; stdenv.mkDerivation {
          pname = "lightgbm-cli";
          version = "3.3.1";
          src = fetchgit {
            url = "https://github.com/microsoft/LightGBM";
            rev = "v3.3.1";
            sha256 = "pBrsey0RpxxvlwSKrOJEBQp7Hd9Yzr5w5OdUuyFpgF8=";
            fetchSubmodules = true;
          };
          nativeBuildInputs = [ clang cmake ];
          buildPhase = "make -j $NIX_BUILD_CORES";
          installPhase = ''
            mkdir -p $out/bin
            mv $TMP/LightGBM/lightgbm $out/bin
          '';
        });
      in rec {
        defaultApp = flake-utils.lib.mkApp { drv = defaultPackage; };
        defaultPackage = lightgbm-cli;
        devShell = pkgs.mkShell {
          buildInputs = with pkgs; [ lightgbm-cli ];
        };
      }
    );
}
-
Is it possible to clean memory after using a package that has a memory leak in my python script?
I'm working on an AutoML Python package (GitHub repo). In my package, I'm using many different algorithms; one of them is LightGBM. After training, the algorithm doesn't release memory, even if del is called and gc.collect() is run afterwards. I created an issue on the LightGBM GitHub -> link. Because of this leak, memory consumption grows very fast during algorithm training.
What are some alternatives?
optuna - A hyperparameter optimization framework
tensorflow - An Open Source Machine Learning Framework for Everyone
autokeras - AutoML library for deep learning
H2O - H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
PySR - High-Performance Symbolic Regression in Python and Julia
GPBoost - Combining tree-boosting with Gaussian process and mixed effects models
AutoViz - Automatically Visualize any dataset, any size with a single line of code. Created by Ram Seshadri. Collaborators Welcome. Permission Granted upon Request.
yggdrasil-decision-forests - A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
mljar-examples - Examples how MLJAR can be used
amazon-sagemaker-examples - Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
Auto_ViML - Automatically Build Multiple ML Models with a Single Line of Code. Created by Ram Seshadri. Collaborators Welcome. Permission Granted upon Request.
xgboost - Scalable, Portable and Distributed Gradient Boosting (GBDT, GBRT or GBM) Library, for Python, R, Java, Scala, C++ and more. Runs on single machine, Hadoop, Spark, Dask, Flink and DataFlow