Is it possible to modify the reward function during training of an agent using OpenAI Gym / Stable-Baselines3?

This page summarizes the projects mentioned and recommended in the original post on reddit.com/r/reinforcementlearning

  • gym

    A toolkit for developing and comparing reinforcement learning algorithms.

    I would recommend doing this with an environment RewardWrapper. Gym ships an example: https://github.com/openai/gym/blob/master/gym/wrappers/transform_reward.py (see the sketch below).
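
A minimal sketch of that idea, assuming a classic-control environment such as CartPole-v1 and Stable-Baselines3's PPO; the `ScaledReward` wrapper and its `scale` attribute are illustrative, not part of gym itself:

```python
import gym
from stable_baselines3 import PPO

class ScaledReward(gym.RewardWrapper):
    """Reward wrapper whose scaling factor can be changed between learn() calls."""

    def __init__(self, env, scale=1.0):
        super().__init__(env)
        self.scale = scale  # hypothetical knob for modifying the reward mid-training

    def reward(self, reward):
        # Transform the raw environment reward before the agent sees it.
        return self.scale * reward

env = ScaledReward(gym.make("CartPole-v1"), scale=1.0)
model = PPO("MlpPolicy", env, verbose=0)

# First phase of training with the original reward.
model.learn(total_timesteps=10_000)

# Adjust the reward function, then continue training the same agent.
env.scale = 0.5
model.learn(total_timesteps=10_000, reset_num_timesteps=False)
```

Because the wrapper only intercepts the reward returned by `step()`, you can change how rewards are computed at any point without touching the underlying environment or the agent.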

