optuna VS OnGrad

Compare optuna vs OnGrad and see what their differences are.

                 optuna                                     OnGrad
Mentions         34                                         6
Stars            9,681                                      3
Growth           2.2%                                       -
Activity         9.9                                        0.0
Latest commit    7 days ago                                 about 2 years ago
Language         Python                                     Python
License          GNU General Public License v3.0 or later   -
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative measure of how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 places a project among the top 10% of the most actively developed projects we track.

optuna

Posts with mentions or reviews of optuna. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2024-04-06.
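
For context on the optuna side of the comparison, here is a minimal sketch of typical Optuna usage: define an objective that asks the trial for parameter values, then let a study run a budget of trials. The quadratic objective and the search bounds are purely illustrative.

```python
import optuna


def objective(trial):
    # Optuna suggests a value for x; we return the score to minimize.
    # The objective and bounds here are illustrative only.
    x = trial.suggest_float("x", -10.0, 10.0)
    return (x - 2.0) ** 2


# Create a study and run a fixed number of trials.
study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=100)

print(study.best_params)  # best found value of x, close to 2.0
```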

OnGrad

Posts with mentions or reviews of OnGrad. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-04-15.
  • made an RL algo for modeling episode reward directly
    1 project | /r/reinforcementlearning | 28 Jul 2022
  • The loss function of my model (Not a NN model) is not differentiable, what should I do?
    2 projects | /r/MLQuestions | 15 Apr 2022
    I made a little algo I use for non-differentiable loss functions. The general idea is that we estimate the gradient by scoring noise in the weights. Each step, instead of starting from scratch, we start from near the previous gradient estimate and hopefully only calculate as many samples as are needed to "saturate" the estimate. Although it's a reinforcement learning algorithm, you can score the model via your own loss function. The usage is very abstract: you supply your own model and the get/set-params functions, and the algorithm itself doesn't really care about any of that. It's worked pretty well for my use cases, feel free to give it a try: https://github.com/ben-arnao/OnGrad (a minimal sketch of the noise-scoring idea appears after this list).
  • How can I find an optimal policy for a problem that involves a large combinatorial search space? I'm kind of stuck, and I'm not sure how to proceed.
    1 project | /r/reinforcementlearning | 8 Mar 2022
    I'm not sure I understand the problem completely, but I've developed an alternative RL-esque algorithm to deal with weird types of problems/environments (e.g., non-differentiable ones). Maybe this would fall into that category? It's very flexible: you really only need to "score" a set of model weights, and what model you use and how you score it are entirely up to you. Let me know if you find it useful! https://github.com/ben-arnao/OnGrad
  • How to minimize the number of values that are not 0?
    1 project | /r/MLQuestions | 20 Jan 2022
    That's true. I don't know if your problem is differentiable then, so standard optimizers might not work. If you're interested, I made a small derivative-free library. It's for reinforcement learning, but as long as you define your own loss function for maximization instead of minimization, you might still be able to use it as is. https://github.com/ben-arnao/OnGrad
  • Reinforcement Learning using Natural Selection
    2 projects | /r/reinforcementlearning | 18 Sep 2021
    That paper actually served as an inspiration for an alternative method I created as well. I use the final episode score directly, though, because this is usually what we really care about. By using episode score we eliminate the messiness of time horizons, reward backpropagation, etc. Plus, a lot of these policy gradient methods are, on some level, optimizing a loss function which isn't truly representative of the underlying objective and is only correlated with it (e.g., not all problems are even differentiable). I'm not sure if it makes sense for all use cases, but for some it is very promising. I want to keep trying different problems and playing with the config. It doesn't seem very similar to your method, but I think both try to solve some of these issues. https://github.com/ben-arnao/OnGrad
  • I'm working in the field of applied RL as a PhD student. I'm stuck for a couple of weeks now and looking for a RL expert who could give me some advice.
    1 project | /r/reinforcementlearning | 1 Jul 2021
    I've always had a bad time with PPO. It seems too complicated for its own good, and *maybe*, only if you work at your issue for a while, tune hyperparameters, and set things up correctly to a T, will you get some OK results. That's why I created my own RL algo. Feel free to give it a try and let me know if it works for you. https://github.com/ben-arnao/OnGrad

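The posts above describe OnGrad's core idea: estimate a gradient by scoring random perturbations of the weights, and carry that estimate across steps instead of rebuilding it from scratch. Below is a minimal, hedged sketch of that noise-scoring approach; the function names, the momentum-style smoothing, and all hyperparameters are illustrative assumptions, not OnGrad's actual implementation (see the repo for that).

```python
import numpy as np


def estimate_gradient(score_fn, weights, prev_grad,
                      n_samples=16, sigma=0.1, momentum=0.9):
    """Illustrative noise-scoring gradient estimate (not OnGrad's actual code).

    score_fn: maps a weight vector to a scalar score (higher is better).
    weights:  current flat parameter vector.
    prev_grad: previous gradient estimate, reused as the starting point.
    """
    grad = np.zeros_like(weights)
    for _ in range(n_samples):
        noise = np.random.randn(*weights.shape) * sigma
        # Score symmetric perturbations and weight the noise by the score difference.
        delta = score_fn(weights + noise) - score_fn(weights - noise)
        grad += delta * noise / (2 * sigma ** 2 * n_samples)
    # Blend with the previous estimate rather than starting from scratch each step.
    return momentum * prev_grad + (1 - momentum) * grad


def optimize(score_fn, weights, steps=100, lr=0.05):
    grad = np.zeros_like(weights)
    for _ in range(steps):
        grad = estimate_gradient(score_fn, weights, grad)
        weights = weights + lr * grad  # ascend, since score_fn is maximized
    return weights


# Usage: maximize a toy score over a 5-dimensional weight vector.
w = optimize(lambda p: -np.sum((p - 1.0) ** 2), np.zeros(5))
print(w)  # should move toward all-ones
```

Because only the scalar score is needed, the same loop works for non-differentiable losses or direct episode rewards; the model and the scoring function are entirely up to the caller.
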
What are some alternatives?

When comparing optuna and OnGrad you can also consider the following projects:

Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

hyperopt - Distributed Asynchronous Hyperparameter Optimization in Python

rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

nni - An open source AutoML toolkit to automate the machine learning lifecycle, including feature engineering, neural architecture search, model compression and hyper-parameter tuning

mljar-supervised - Python package for AutoML on Tabular Data with Feature Engineering, Hyper-Parameters Tuning, Explanations and Automatic Documentation

pyGAM - [HELP REQUESTED] Generalized Additive Models in Python

pg_plan_advsr - PostgreSQL extension for automated execution plan tuning

SMAC3 - SMAC3: A Versatile Bayesian Optimization Package for Hyperparameter Optimization

Empirical_Study_of_Ensemble_Learning_Methods - Training ensemble machine learning classifiers, with flexible templates for repeated cross-validation and parameter tuning

optuna-examples - Examples for https://github.com/optuna/optuna

xsimd - C++ wrappers for SIMD intrinsics and parallelized, optimized mathematical functions (SSE, AVX, AVX512, NEON, SVE)

highway - Performance-portable, length-agnostic SIMD with runtime dispatch