I don't see the buffer being cleared anywhere, but it may not need to be. For example, the implementation of SeparatedReplayBuffer receives the episode_length (sometimes called the "horizon") and sets the buffer's size accordingly when it is initialized. That way, the number of samples collected before each policy/value update is constant. You just need one large tensor block to hold all your samples; after a network update, there is no reason to clear it out. Simply overwrite the existing samples, since you know you will collect exactly the same number of new ones.
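The overwrite-instead-of-clear idea can be sketched as follows. This is a minimal illustration, not the actual SeparatedReplayBuffer code; the class and attribute names (`FixedRolloutBuffer`, `horizon`, `obs_dim`, `after_update`) are hypothetical:

```python
import numpy as np

class FixedRolloutBuffer:
    """Preallocated buffer sized by the episode horizon; never cleared."""

    def __init__(self, horizon, obs_dim):
        # One big tensor block, allocated once at init time.
        self.obs = np.zeros((horizon, obs_dim), dtype=np.float32)
        self.rewards = np.zeros(horizon, dtype=np.float32)
        self.horizon = horizon
        self.step = 0  # write pointer into the preallocated block

    def insert(self, obs, reward):
        # Write into the current slot; stale data is simply overwritten.
        self.obs[self.step] = obs
        self.rewards[self.step] = reward
        self.step = (self.step + 1) % self.horizon

    def after_update(self):
        # No clearing: resetting the pointer lets the next rollout
        # overwrite the old samples in place.
        self.step = 0

buf = FixedRolloutBuffer(horizon=4, obs_dim=2)
for t in range(4):
    buf.insert(np.full(2, t, dtype=np.float32), float(t))
buf.after_update()          # pointer back to 0, data left in place
buf.insert(np.zeros(2), 9.0)  # overwrites slot 0 from the old rollout
```

Because exactly `horizon` samples are collected between updates, every slot is guaranteed to be refreshed before the next update reads the buffer, so stale values are never consumed.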
Related posts
- How do you compute rewards when you are using parallel environments?
- Renderer of the environment does not work?
- Stuck on this error for days: I can't use importlib the right way
- Difference between setup.py, environments.yaml and requirements.txt