osqp_benchmarks
QP Benchmarks for the OSQP Solver against GUROBI, MOSEK, ECOS and qpOASES (by osqp)
Safe-Policy-Optimization
NeurIPS 2023: Safe Policy Optimization: A benchmark repository for safe reinforcement learning algorithms (by PKU-Alignment)
| | osqp_benchmarks | Safe-Policy-Optimization |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 90 | 292 |
| Growth | - | 2.7% |
| Activity | 0.0 | 8.1 |
| Last commit | 11 months ago | about 2 months ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
osqp_benchmarks
Posts with mentions or reviews of osqp_benchmarks.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-04-30.
-
Optimization solvers: missing link for fully open-source energy system modeling
OSQP is fast, but only for QP, not LP. The "benchmarks" (https://github.com/osqp/osqp_benchmarks) include some important problem classes, but they are randomly generated and therefore not representative of general QP. On the industry-standard benchmarks (http://plato.asu.edu/ftp/qpbench.html) OSQP doesn't look so good, and it isn't even tested against commercial solvers (http://plato.asu.edu/ftp/cconvex.html). Our experience with it on general benchmarking problems is that it can struggle to compute sufficiently accurate dual values, to the extent that it fails to solve them. For certain classes of important QP problems, and when optimization to small tolerances is not required, it's undoubtedly a great solver - but it's not a general solver.
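The dual-accuracy point above can be made concrete with a small sketch (the problem data below is illustrative, not taken from the benchmarks). For an equality-constrained QP, a direct solve of the KKT system recovers both primal and dual variables to near machine precision, whereas a first-order method such as OSQP only drives the same residuals down to its configured tolerances (its `eps_abs`/`eps_rel` settings):

```python
import numpy as np

# Illustrative toy problem: minimize 1/2 x'Px + q'x  subject to  a'x = b.
P = np.array([[4.0, 1.0],
              [1.0, 2.0]])
q = np.array([1.0, 1.0])
a = np.array([1.0, 1.0])   # single equality constraint a'x = b
b = 1.0

# KKT system for an equality-constrained QP:
#   [P   a] [x]   [-q]
#   [a'  0] [y] = [ b]
K = np.block([[P, a.reshape(-1, 1)],
              [a.reshape(1, -1), np.zeros((1, 1))]])
rhs = np.concatenate([-q, [b]])
sol = np.linalg.solve(K, rhs)
x, y = sol[:2], sol[2]

# Residuals a direct method drives to ~machine precision; a first-order
# solver like OSQP only reduces them to its stopping tolerances.
stationarity = np.linalg.norm(P @ x + q + a * y)   # dual residual
feasibility = abs(a @ x - b)                       # primal residual
print(x, y, stationarity, feasibility)
```

Checking these two residuals against the tolerance you actually need is a quick way to see whether a solver's reported "solved" status is accurate enough for your application.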
Safe-Policy-Optimization
Posts with mentions or reviews of Safe-Policy-Optimization.
We have used some of these posts to build our list of alternatives
and similar projects. The last one was on 2022-10-29.
-
SAFE-PANDA-GYM: a modification of Panda-Gym for training Safe-RL agents
Supports SafePO-Baselines for training on the safe environments in our repo; see the train_safe_rl_algorithms folder.
What are some alternatives?
When comparing osqp_benchmarks and Safe-Policy-Optimization you can also consider the following projects:
osqp-eigen - Simple Eigen-C++ wrapper for OSQP library
Safe-panda-gym - OpenAI Gym Franka Emika Panda robot environment based on PyBullet.
l2rpn-baselines - L2RPN Baselines, a repository hosting baselines for L2RPN competitions.