-
prima
PRIMA is a package for solving general nonlinear optimization problems without using derivatives. It provides the reference implementation for Powell's derivative-free optimization methods, i.e., COBYLA, UOBYQA, NEWUOA, BOBYQA, and LINCOA. PRIMA means Reference Implementation for Powell's methods with Modernization and Amelioration, P for Powell.
-
nlopt
library for nonlinear optimization, wrapping many algorithms for global and local, constrained or unconstrained, optimization
-
mystic
constrained nonlinear optimization for scientific machine learning, UQ, and AI (by uqfoundation)
It sounds like this was a difficult task. The motivation to fulfill Prof. Powell's request and help the community of derivative-free optimization users must have been strong. Congratulations on your achievement!
From the GitHub README:
> In the past years, while working on PRIMA, I have spotted a dozen of bugs in reputable Fortran compilers and two bugs in MATLAB. Each of them represents days of bitter debugging, which finally led to the conclusion that it was not a problem in my code but a flaw in the Fortran compilers or in MATLAB. From a very unusual angle, this reflects how intensive the coding has been.
> The bitterness behind this "fun" fact is exactly why I work on PRIMA: I hope that all the frustrations that I have experienced will not happen to any user of Powell's methods anymore. I hope I am the last one in the world to decode a maze of 244 GOTOs in 7939 lines of Fortran 77 code — I have been doing this for three years and I do not want anyone else to do it again.
https://github.com/libprima/prima#a-fun-fact
Yes, sometimes it's hard to measure a derivative. E.g., when doing hyperparameter tuning in ML, you can read out a metric at a given choice of parameters, but it's generally not easy to get a gradient.
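To make the derivative-free setting concrete: SciPy's `minimize` already exposes COBYLA (one of the Powell methods PRIMA reimplements), and it only ever evaluates the objective, never a gradient. A minimal sketch with a hypothetical quadratic objective and one inequality constraint:

```python
from scipy.optimize import minimize

# Derivative-free minimization with COBYLA: only function values are
# sampled; no gradient is ever requested from the objective.
def objective(x):
    return (x[0] - 1.0) ** 2 + (x[1] - 2.5) ** 2

# SciPy's "ineq" convention is fun(x) >= 0; this encodes x0 + x1 <= 4.
cons = [{"type": "ineq", "fun": lambda x: 4.0 - x[0] - x[1]}]

res = minimize(objective, x0=[0.0, 0.0], method="COBYLA", constraints=cons)
print(res.x)  # close to [1.0, 2.5], where the constraint is inactive
```

The same pattern applies when the "objective" is a black box like a cross-validated ML metric: wrap the evaluation in a function and let the solver probe it pointwise.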
Shameless plug: I happen to have recently written a package for the opposite limit! It finds roots when you can only measure the derivative.
https://github.com/EFavDB/inchwormrf
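I have not read inchwormrf's internals, so the following is only a generic sketch of the "opposite limit" idea, not that package's algorithm: if you can measure `f'(x)` but not `f(x)`, you can reconstruct `f` up to a known anchor value by quadrature and then bisect the reconstruction. All names here (`root_from_derivative`, the anchor convention) are made up for illustration.

```python
from scipy.integrate import quad

def root_from_derivative(fprime, a, fa, b, tol=1e-8):
    """Find a root of f in [a, b] given only its derivative fprime and
    one anchor value f(a) = fa, by reconstructing
    f(x) = fa + integral of fprime from a to x and bisecting on it."""
    def f(x):
        val, _ = quad(fprime, a, x)
        return fa + val

    lo, hi = a, b
    flo, fhi = f(lo), f(hi)
    if flo * fhi > 0:
        raise ValueError("f does not change sign on [a, b]")
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:  # sign change in [lo, mid]
            hi = mid
        else:                  # sign change in [mid, hi]
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Example: f(x) = x**2 - 2 has derivative 2*x; anchor f(0) = -2.
x = root_from_derivative(lambda t: 2.0 * t, 0.0, -2.0, 2.0)
print(x)  # approximately sqrt(2)
```

Each bisection step costs one numerical integral here, so a real implementation would presumably be far more careful about reusing quadrature work.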
Extraction from https://github.com/libprima/prima
“There do exist "translations" of Powell's Fortran 77 code into other languages. For example, NLopt contains a C version of COBYLA, NEWUOA, and BOBYQA, but the C code in NLopt is translated from the Fortran 77 code straightforwardly, if not automatically by f2c, and hence inherits the style, structure, and probably bugs of the original Fortran 77 implementation.”
See also https://github.com/stevengj/nlopt/issues/501 , where the author of NLopt weighs in.
Fair enough. Btw, IMO rewriting for GPU (if you do have the hardware) can be quite a bit simpler than doing vector optimisations for CPU, depending on the codebase. Back in my research days I actually created a framework for doing just that with Fortran: https://github.com/muellermichel/Hybrid-Fortran.
Related posts
-
Prima has got a Python interface
-
Nagfor supports half-precision floating-point numbers
-
PRIMA: Solving general nonlinear optimization problems without derivatives
-
Optimization Without Using Derivatives: the PRIMA Package, its Fortran Implementation, and Its Inclusion in SciPy - Announcements
-
PennyLane: Python library for differentiable programming of quantum computers