The loss function of my model (Not a NN model) is not differentiable, what should I do?

This page summarizes the projects mentioned and recommended in the original post on /r/MLQuestions

  • OnGrad

    A derivative-free reinforcement learning algorithm

  • I made a little algo I use for non-differentiable loss functions. The general idea is that we estimate the gradient by scoring random noise added to the weights. Each step, instead of starting from scratch, we start from near the previous gradient estimate and hopefully only evaluate as many samples as are needed to "saturate" the estimate. Although it's a reinforcement learning algorithm, you can score the model with your own loss function. The usage is deliberately abstract: you supply your own model and your own get/set-parameters functions, and the algorithm itself doesn't care about any of that. It's worked pretty well for my use cases, so feel free to give it a try (a rough sketch of the idea follows after this list): https://github.com/ben-arnao/OnGrad

  • optuna

    A hyperparameter optimization framework

  • If your parameter set is not too large, you could try black-box optimization via something like Optuna (a minimal example follows below).

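To make the perturbation idea from the OnGrad item concrete, here is a rough sketch of derivative-free gradient estimation by scoring random noise in the weights, with a running estimate carried between steps. It is written against the description in the comment, not OnGrad's actual API; `score_fn`, the hyperparameters, and the update rule are illustrative assumptions.

```python
import numpy as np

def estimate_gradient(score_fn, params, prev_grad=None,
                      sigma=0.1, n_samples=32, momentum=0.9):
    """Estimate an ascent direction for a possibly non-differentiable score.

    score_fn  : maps a parameter vector to a scalar score (higher is better);
                it can wrap any loss, differentiable or not.
    prev_grad : the previous estimate, reused so each step starts near the
                old gradient instead of from scratch.
    """
    grad = np.zeros_like(params) if prev_grad is None else prev_grad.copy()
    base = score_fn(params)
    for _ in range(n_samples):
        noise = np.random.randn(*params.shape) * sigma
        delta = score_fn(params + noise) - base
        # Each perturbation nudges the running estimate toward directions
        # that improve the score.
        grad = momentum * grad + (1.0 - momentum) * (delta / sigma**2) * noise
    return grad

def train(score_fn, params, steps=100, lr=0.01):
    grad = None
    for _ in range(steps):
        grad = estimate_gradient(score_fn, params, prev_grad=grad)
        params = params + lr * grad  # ascend the estimated gradient
    return params
```

Here `params` can be any flat NumPy array of model parameters; how it maps back onto your model is up to you, which mirrors the "supply your own get/set params" design described above.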
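For the Optuna route, a minimal example of black-box optimization over a small parameter set might look like the sketch below. The objective here is a made-up, piecewise-constant (hence non-differentiable) loss; replace it with your own evaluation.

```python
import optuna

def objective(trial):
    # Each model parameter becomes one search dimension; no gradients are used.
    a = trial.suggest_float("a", -10.0, 10.0)
    b = trial.suggest_float("b", -10.0, 10.0)
    # Placeholder non-differentiable loss: step functions of the parameters.
    return abs(round(a) - 3) + abs(round(b) + 1)

study = optuna.create_study(direction="minimize")
study.optimize(objective, n_trials=200)
print(study.best_params, study.best_value)
```

This treats the whole model evaluation as an opaque function, which is why it scales poorly once the parameter set gets large, as the comment above notes.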
NOTE: The number of mentions on this list indicates mentions on common posts plus user-suggested alternatives. Hence, a higher number means a more popular project.

Related posts