I made a little algorithm I use for non-differentiable loss functions. The general idea is that we estimate the gradient by scoring noise added to the weights. Each step, instead of starting from scratch, we start near the previous gradient estimate and hopefully only compute as many samples as are needed to "saturate" the estimate. Although it's a reinforcement-learning-style algorithm, you can score the model via your own loss function. The interface is very abstract: you supply your own model and get/set-params functions, and the algorithm itself doesn't care about any of that. It's worked pretty well for my use cases, feel free to give it a try: https://github.com/ben-arnao/OnGrad
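For anyone curious what "estimating the gradient by scoring noise in the weights" looks like, here is a minimal sketch of the general technique (evolution-strategies-style antithetic sampling). This is my own illustration, not the actual OnGrad implementation; `score_fn`, `sigma`, and `n_samples` are assumed names:

```python
import numpy as np

def estimate_gradient(params, score_fn, sigma=0.1, n_samples=50, rng=None):
    """Rough gradient estimate from scoring noise-perturbed weights.

    Hypothetical sketch (ES-style), not the OnGrad code itself.
    score_fn: any black-box scoring function, higher is better.
    """
    rng = rng or np.random.default_rng(0)
    grad = np.zeros_like(params)
    for _ in range(n_samples):
        noise = rng.normal(size=params.shape)
        # Antithetic sampling: score both directions to reduce variance.
        s_plus = score_fn(params + sigma * noise)
        s_minus = score_fn(params - sigma * noise)
        grad += (s_plus - s_minus) / (2 * sigma) * noise
    return grad / n_samples
```

Note that `score_fn` never needs to be differentiable; it only needs to return a number, which is why this family of methods works for arbitrary loss functions.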
If your parameter set is not too large, you could try black-box optimization via something like Optuna.