| | fast-cma-es | LightGBM |
|---|---|---|
| Mentions | 12 | 11 |
| Stars | 106 | 16,126 |
| Growth | - | 1.0% |
| Activity | 7.2 | 9.1 |
| Last commit | 6 months ago | 3 days ago |
| Language | Python | C++ |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
fast-cma-es
-
Optimization problem with complex constraint
The objective is essentially the accumulated value of the portfolio after 50 years; it is not clear to me how this can be linear, since it looks quite "exponential" without knowing the details. Can you exploit the "has to be greater than 0" condition to simplify the constraint into a linear one? "Because at each time step there will be a decision" probably means the answer is "no". But don't overestimate the complexity of nonlinear optimization (see for instance https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/CryptoTrading.adoc ); most of the complexity is hidden in the algorithm itself and is not visible to the user.
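If the constraint cannot be linearized, one common way to feed it to a derivative-free optimizer is a penalty term. Below is a minimal, self-contained sketch of that pattern; the portfolio model, returns, and cost term are all made up for illustration, and plain random search stands in for CMA-ES:

```python
# Hedged sketch: handle a nonlinear "value must stay > 0" constraint by
# simulating the trajectory and penalizing any step that goes non-positive.
# Toy model; random search stands in for a real optimizer like CMA-ES.
import random

STEPS = 10
returns = [0.05, -0.02, 0.07, 0.01, -0.04, 0.06, 0.02, -0.01, 0.03, 0.04]

def final_value(decisions):
    # decisions[t] in [0, 1]: fraction invested at step t (toy model)
    wealth, worst = 1.0, 1.0
    for d, r in zip(decisions, returns):
        wealth *= 1.0 + d * r - 0.01 * d * d   # return minus a toy cost
        worst = min(worst, wealth)
    return wealth, worst

def objective(decisions):
    wealth, worst = final_value(decisions)
    penalty = 1000.0 * max(0.0, -worst)        # fires if wealth drops <= 0
    return -wealth + penalty                   # minimize negative final value

random.seed(0)
best = min(([random.random() for _ in range(STEPS)] for _ in range(3000)),
           key=objective)
print(round(-objective(best), 4))              # best final portfolio value found
```

The optimizer never sees the constraint structure; it only sees a scalar objective, which is why the nonlinearity adds little user-visible complexity.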
-
what methods can be used to solve a TP-BVP with variable control?
What about combining a fast numerical integrator like https://github.com/esa/torchquad or https://github.com/AnyarInc/Ascent with a fast parallel CMA-ES implementation like https://github.com/dietmarwo/fast-cma-es/blob/master/fcmaes/cmaescpp.py? A numerical integrator lets you implement variable control, and a fast derivative-free optimizer can solve any related optimization problem.
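The combination can be sketched in a few lines of plain Python. Everything below is a toy stand-in: simple Euler integration of trivial dynamics instead of Ascent/torchquad, and random search instead of fcmaes' parallel CMA-ES. It only illustrates the pattern of integrating inside the objective:

```python
# Sketch of the direct-shooting pattern for a TPBVP with variable control:
# parameterize the control as piecewise-constant values, integrate the
# dynamics inside the objective, and penalize the boundary miss.
import random

T, N = 1.0, 10            # horizon and number of control segments
dt = T / N
x_target = 1.0            # boundary condition: x(T) should equal 1

def integrate(u):
    # toy dynamics dx/dt = u_k with piecewise-constant control;
    # a real solver (Ascent, torchquad, scipy) would replace this loop
    x = 0.0
    for uk in u:
        x += dt * uk
    return x

def objective(u):
    miss = integrate(u) - x_target           # boundary violation
    effort = dt * sum(uk * uk for uk in u)   # control effort
    return 100.0 * miss * miss + effort

random.seed(0)
best_u = min(([random.uniform(-3, 3) for _ in range(N)]
              for _ in range(3000)), key=objective)
print(objective(best_u))
```

The optimizer treats the integrated trajectory as a black box, so swapping in a stiff ODE solver or a parallel CMA-ES changes nothing about the structure of the objective.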
-
Quality Diversity Optimization for Expensive Simulations
A new tutorial on how to apply QD optimization to expensive simulations: https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Diversity.adoc .
-
New Fast Python CVT MAP-Elites + CMA-ES implementation
There is a new implementation of Python CVT MAP-Elites + CMA-ES available. It is presented at https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/MapElites.adoc , applying it to ESA's very hard Cassini2 space mission planning optimization benchmark.
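The core archive logic of CVT MAP-Elites can be sketched independently of fcmaes. This is the generic algorithm, not the linked implementation: candidates are binned by the nearest Voronoi centroid of their behavior descriptor, and each cell keeps only its best elite:

```python
# Toy sketch of a CVT MAP-Elites archive: random centroids stand in for a
# proper centroidal Voronoi tessellation, and random sampling stands in
# for the CMA-ES emitters used by the real implementation.
import random

random.seed(1)
centroids = [(random.random(), random.random()) for _ in range(16)]

def nearest_cell(desc):
    # index of the Voronoi centroid closest to the behavior descriptor
    return min(range(len(centroids)),
               key=lambda i: (centroids[i][0] - desc[0]) ** 2
                           + (centroids[i][1] - desc[1]) ** 2)

def evaluate(x):
    # toy problem: fitness to minimize plus a 2-D behavior descriptor
    fitness = sum(v * v for v in x)
    desc = (abs(x[0]) % 1.0, abs(x[1]) % 1.0)
    return fitness, desc

archive = {}                              # cell index -> (fitness, solution)
for _ in range(500):
    x = [random.uniform(-2, 2) for _ in range(2)]
    fit, desc = evaluate(x)
    cell = nearest_cell(desc)
    if cell not in archive or fit < archive[cell][0]:
        archive[cell] = (fit, x)          # keep the cell's best elite only

print(len(archive), "cells filled of", len(centroids))
```

The result is not a single optimum but a diverse set of locally good solutions, one per behavior niche, which is the point of quality-diversity optimization.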
-
Performance of Evolutionary Algorithms for Machine Learning
I tried to answer these questions in EvoJax.adoc
-
Optimization for Quantum Computer Simulations
Here is a new tutorial on how to apply optimization in the context of simulated quantum algorithms: https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Quant.adoc . It is based on https://qiskit.org/textbook/ch-applications/vqe-molecules.html#Example-with-a-Single-Qubit-Variational-Form but provides more reliable methods utilizing parallelism. This does not make much sense (yet) when the backend is a real quantum computer, but most simulators scale badly when using multi-threading or a GPU. So it is better to switch parallelism off for the simulation and exploit the better scaling that parallel optimization provides, especially if a modern many-core CPU is available.
-
Transaction and Payment Optimization Problem
https://github.com/dietmarwo/fast-cma-es/blob/master/examples/subset.py implements the problem using parallel continuous optimization, collecting different optimal solutions. It is not much faster than GLPK_MI, but it utilizes modern many-core CPUs when you are looking for a list of alternative solutions. Increase the number of retries when you want more solutions.
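The underlying idea can be illustrated with a dependency-free toy. The transaction amounts below are hypothetical, random restarts stand in for the parallel retries, and rounding maps the continuous variables to a subset; the real subset.py differs in its details:

```python
# Sketch: a subset-selection problem posed as continuous optimization.
# Each item's inclusion is a variable in [0, 1], rounded to a bit; every
# distinct subset whose transactions sum exactly to the target is kept,
# so repeated retries collect alternative optimal solutions.
import random

transactions = [120, 75, 260, 40, 95, 310, 55]   # toy amounts
target = 470                                     # toy payment to match

def decode(x):
    # round the continuous vector to a binary inclusion mask
    return tuple(1 if v > 0.5 else 0 for v in x)

def cost(bits):
    return abs(sum(t * b for t, b in zip(transactions, bits)) - target)

random.seed(2)
solutions = set()
for _ in range(5000):                 # each retry may land on a different optimum
    bits = decode([random.random() for _ in transactions])
    if cost(bits) == 0:
        solutions.add(bits)

print(sorted(solutions))              # all exact matches found
```

Because distinct retries can converge to distinct zero-cost subsets, collecting them in a set naturally yields the list of alternative solutions the post mentions.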
-
A new fast local search heuristic for a location problem
Do you mind if I apply the generic optimization approach shown here: https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/OneForAll.adoc to this problem to compare results? I see you collected a huge number of benchmark instances. Are solutions proven to be optimal available for these?
-
New generic method to solve MMKP and VRPTW
-
29 Python real world optimization tutorials
If you are using Python, you may get some inspiration here: https://github.com/dietmarwo/fast-cma-es/blob/master/tutorials/Tutorials.adoc
LightGBM
-
SIRUS.jl: Interpretable Machine Learning via Rule Extraction
SIRUS.jl is a pure Julia implementation of the SIRUS algorithm by Bénard et al. (2021). The algorithm is a rule-based machine learning model, meaning that it is fully interpretable. It achieves this by first fitting a random forest and then converting the forest to rules. Furthermore, the algorithm is stable and achieves predictive performance comparable to LightGBM, a state-of-the-art gradient boosting model created by Microsoft. Interpretability, stability, and predictive performance are described in more detail below.
-
[D] RAM speeds for tabular machine learning algorithms
Hey, thanks everybody for your answers. I've asked around in the XGBoost and LightGBM repos and some folks there also agreed that memory speed will be a bottleneck, yes.
-
[P] LightGBM but lighter in another language?
LightGBM seems to have C API support, and there is a C++ example in the main repo.
-
Use whatever is best for the problem, but still
LGBM doesn't do RF well, but it's easy to manually bag single LGBM trees.
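Manual bagging itself is only a few lines. In the sketch below the single LightGBM trees are replaced by a trivial mean-predictor stub so the example stays dependency-free; with LightGBM you would instead train one-tree models (e.g. `num_boost_round=1`) on each bootstrap resample and average their predictions:

```python
# Dependency-free sketch of manually bagging single base models: train
# each model on a bootstrap resample of the rows, then average the
# ensemble's predictions. The stub learner is a stand-in for a one-tree
# LightGBM booster.
import random

def fit_stub(sample):
    # stand-in for training a single tree on the resample:
    # always predict the mean target of the bootstrap sample
    mean_y = sum(y for _, y in sample) / len(sample)
    return lambda x: mean_y

def bag(data, n_models=25, seed=0):
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        resample = [rng.choice(data) for _ in data]   # bootstrap the rows
        models.append(fit_stub(resample))
    # the ensemble prediction is the average over all base models
    return lambda x: sum(m(x) for m in models) / len(models)

data = [(x, 2.0 * x) for x in range(10)]
ensemble = bag(data)
print(ensemble(5))
```

The same loop works unchanged with any learner that exposes fit and predict, which is why bagging LightGBM trees by hand is easy even though LGBM's built-in random-forest mode is weak.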
-
What's New with AWS: Amazon SageMaker built-in algorithms now provides four new Tabular Data Modeling Algorithms
LightGBM is a popular and high-performance open-source implementation of the Gradient Boosting Decision Tree (GBDT). To learn how to use this algorithm, please see example notebooks for Classification and Regression.
-
Search YouTube from the terminal written in python
Microsoft LightGBM: https://github.com/microsoft/LightGBM
-
LightGBM VS CXXGraph - a user suggested alternative
2 projects | 28 Feb 2022
-
Writing the fastest GBDT library in Rust
Here are our benchmarks on training time comparing Tangram's Gradient Boosted Decision Tree Library to LightGBM, XGBoost, CatBoost, and sklearn.
-
Workstation Management With Nix Flakes: Build a Cmake C++ Package
```nix
{
  inputs = {
    nixpkgs = { url = "github:nixos/nixpkgs/nixos-unstable"; };
    flake-utils = { url = "github:numtide/flake-utils"; };
  };
  outputs = { nixpkgs, flake-utils, ... }:
    flake-utils.lib.eachDefaultSystem (system:
      let
        pkgs = import nixpkgs { inherit system; };
        lightgbm-cli = (with pkgs; stdenv.mkDerivation {
          pname = "lightgbm-cli";
          version = "3.3.1";
          src = fetchgit {
            url = "https://github.com/microsoft/LightGBM";
            rev = "v3.3.1";
            sha256 = "pBrsey0RpxxvlwSKrOJEBQp7Hd9Yzr5w5OdUuyFpgF8=";
            fetchSubmodules = true;
          };
          nativeBuildInputs = [ clang cmake ];
          buildPhase = "make -j $NIX_BUILD_CORES";
          installPhase = ''
            mkdir -p $out/bin
            mv $TMP/LightGBM/lightgbm $out/bin
          '';
        });
      in rec {
        defaultApp = flake-utils.lib.mkApp { drv = defaultPackage; };
        defaultPackage = lightgbm-cli;
        devShell = pkgs.mkShell {
          buildInputs = with pkgs; [ lightgbm-cli ];
        };
      }
    );
}
```
-
Is it possible to clean memory after using a package that has a memory leak in my python script?
I'm working on an AutoML Python package (GitHub repo). In my package, I'm using many different algorithms, one of which is LightGBM. After training, the algorithm doesn't release memory, even if del is called and gc.collect() is run afterwards. I created an issue on the LightGBM GitHub (link). Because of this leak, memory consumption grows very fast during algorithm training.
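A common workaround when a library leaks is to run the leaky step in a child process, so the operating system reclaims all of its memory when the process exits. Below is a minimal stdlib sketch of that pattern; the "training" is a stand-in, and a real version would fit the model in the child and return it serialized:

```python
# Sketch: isolate a leaky training step in a child interpreter. Whatever
# the library leaks lives only in the child, and the OS frees everything
# when the child process exits; the parent only receives a JSON result.
import json
import subprocess
import sys

CHILD_CODE = r"""
import json, sys
# Stand-in for a leaky training call: allocate a lot, compute a result.
data = list(range(1_000_000))        # memory the parent never sees
result = {"best_score": sum(data) % 97}
json.dump(result, sys.stdout)
"""

def train_in_subprocess() -> dict:
    # Run the leaky step in a fresh interpreter and parse its JSON output.
    out = subprocess.run([sys.executable, "-c", CHILD_CODE],
                         capture_output=True, text=True, check=True)
    return json.loads(out.stdout)

result = train_in_subprocess()
print(result)
```

The same isolation can also be done with multiprocessing; the key point is that process exit is the one cleanup mechanism a leaking C extension cannot defeat.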
What are some alternatives?
optiseek - An open source collection of single-objective optimization algorithms for multi-dimensional functions.
tensorflow - An Open Source Machine Learning Framework for Everyone
scikit-opt - Genetic Algorithm, Particle Swarm Optimization, Simulated Annealing, Ant Colony Optimization Algorithm, Immune Algorithm, Artificial Fish Swarm Algorithm, Differential Evolution and TSP (Traveling Salesman)
H2O - H2O is an Open Source, Distributed, Fast & Scalable Machine Learning Platform: Deep Learning, Gradient Boosting (GBM) & XGBoost, Random Forest, Generalized Linear Modeling (GLM with Elastic Net), K-Means, PCA, Generalized Additive Models (GAM), RuleFit, Support Vector Machine (SVM), Stacked Ensembles, Automatic Machine Learning (AutoML), etc.
ExpensiveOptimBenchmark - Benchmarking Surrogate-based Optimisation Algorithms on Expensive Black-box Functions
GPBoost - Combining tree-boosting with Gaussian process and mixed effects models
Multi-UAV-Task-Assignment-Benchmark - A Benchmark for Multi-UAV Task Allocation of an Extended Team Orienteering Problem
amazon-sagemaker-examples - Example 📓 Jupyter notebooks that demonstrate how to build, train, and deploy machine learning models using 🧠 Amazon SageMaker.
Taskflow - A General-purpose Parallel and Heterogeneous Task Programming System
yggdrasil-decision-forests - A library to train, evaluate, interpret, and productionize decision forest models such as Random Forest and Gradient Boosted Decision Trees.
pycma - Python implementation of CMA-ES
mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation