How do you deal with situations where the RL agent cannot take every action at every time step?

This page summarizes the projects mentioned and recommended in the original post on /r/reinforcementlearning

  • open_spiel

    OpenSpiel is a collection of environments and algorithms for research in general reinforcement learning and search/planning in games.

  • I've had some success using action masking - you can refer to https://github.com/deepmind/open_spiel/blob/120420a74a69354d64c10b51cd129d4587f9f325/open_spiel/python/algorithms/dqn.py. For DQN you need to mask out the Q-values of invalid actions when computing targets, as well as masking them during action selection. In my case I can place the mask in the observation, so it's easy to fetch during prediction; if that's not possible, you could query it from the environment and store it in the replay buffer (as they do in the link I shared).
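    A minimal sketch of the masking idea described above (not OpenSpiel's actual implementation - the function names and the large negative constant are illustrative assumptions): illegal actions get their Q-values replaced with a very large negative number, both when selecting a greedy action and when computing the max over next-state Q-values for the DQN target.

    ```python
    import numpy as np

    ILLEGAL_Q = -1e9  # effectively -inf; keeps illegal actions out of any argmax/max


    def masked_argmax(q_values: np.ndarray, legal_mask: np.ndarray) -> int:
        """Greedy action selection restricted to legal actions.

        q_values:   shape (num_actions,)
        legal_mask: shape (num_actions,), True where the action is legal.
        """
        masked = np.where(legal_mask, q_values, ILLEGAL_Q)
        return int(np.argmax(masked))


    def masked_dqn_target(reward: float, done: bool, gamma: float,
                          next_q_values: np.ndarray,
                          next_legal_mask: np.ndarray) -> float:
        """One-step DQN target where the max only ranges over legal next actions.

        The next-state legality mask would come from the observation or be
        stored in the replay buffer alongside the transition.
        """
        if done:
            return reward
        masked_next = np.where(next_legal_mask, next_q_values, ILLEGAL_Q)
        return reward + gamma * float(np.max(masked_next))
    ```

    The key point is that the mask must be applied in both places: masking only at action-selection time still lets the bootstrapped target propagate value through actions the agent could never take.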

