neroRL VS recurrent-ppo-truncated-bptt

Compare neroRL vs recurrent-ppo-truncated-bptt and see how they differ.

                neroRL              recurrent-ppo-truncated-bptt
Mentions        3                   6
Stars           26                  106
Growth          -                   -
Activity        0.0                 3.2
Last commit     7 months ago        6 days ago
Language        Jupyter Notebook    Jupyter Notebook
License         MIT License         MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

neroRL

Posts with mentions or reviews of neroRL. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-07-31.
  • Convergence of PPO
    1 project | /r/reinforcementlearning | 31 Jul 2021
    You can take a look at my implementation: https://github.com/MarcoMeter/neroRL.
  • Recurrent PPO using truncated BPTT
    3 projects | /r/reinforcementlearning | 29 Jun 2021
    This implementation does BPTT only, but for plain BP performance I can refer you to my framework neroRL. The develop branch contains the BPTT implementation of the above recurrent baseline. In neroRL you can easily toggle BPTT on/off. Both codebases are largely the same; neroRL has more tools and supports more features (e.g. multi-discrete actions). (A minimal sketch of the truncated-BPTT idea follows this list.)
  • Implementing a Recurrent PPO Policy to Solve
    1 project | /r/reinforcementlearning | 3 Jan 2021
    If it is related to the recurrent policy implementation, the function for that is defined here. I merged the current state of the recurrent policy into the develop branch.
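
The core idea behind truncated BPTT in recurrent PPO is to chop rollouts into fixed-length sequences, start each sequence from the hidden state recorded during data collection, and stop gradient flow at the sequence boundaries. The sketch below is a minimal, hypothetical PyTorch illustration of that idea; it is not taken from neroRL or recurrent-ppo-truncated-bptt, and all names (RecurrentPolicy, split_into_sequences, bptt_len) are made up for illustration.

```python
# Minimal sketch of truncated BPTT for a recurrent policy (assumed example,
# not the actual neroRL / recurrent-ppo-truncated-bptt code).
import torch
import torch.nn as nn

class RecurrentPolicy(nn.Module):
    def __init__(self, obs_dim, act_dim, hidden_dim=64):
        super().__init__()
        self.gru = nn.GRU(obs_dim, hidden_dim, batch_first=True)
        self.pi = nn.Linear(hidden_dim, act_dim)

    def forward(self, obs_seq, h0):
        # obs_seq: (batch, seq_len, obs_dim), h0: (1, batch, hidden_dim)
        out, hn = self.gru(obs_seq, h0)
        return self.pi(out), hn

def split_into_sequences(episode_obs, hidden_states, bptt_len):
    """Chop one episode (T, obs_dim) into fixed-length chunks and keep the
    hidden state recorded at the start of each chunk."""
    T = episode_obs.shape[0]
    seqs, init_h = [], []
    for start in range(0, T, bptt_len):
        chunk = episode_obs[start:start + bptt_len]
        # pad the last, shorter chunk with zeros so all sequences share a length
        if chunk.shape[0] < bptt_len:
            pad = torch.zeros(bptt_len - chunk.shape[0], chunk.shape[1])
            chunk = torch.cat([chunk, pad], dim=0)
        seqs.append(chunk)
        # hidden state collected during the rollout at time step `start`;
        # detaching it stops gradients at the truncation boundary
        init_h.append(hidden_states[start].detach())
    return torch.stack(seqs), torch.stack(init_h).unsqueeze(0)

# usage sketch: one forward pass over the chunked sequences
policy = RecurrentPolicy(obs_dim=8, act_dim=4)
episode_obs = torch.randn(100, 8)        # one rollout of 100 steps
hidden_states = torch.zeros(100, 64)     # hidden states stored during the rollout
obs_seqs, h0 = split_into_sequences(episode_obs, hidden_states, bptt_len=16)
logits, _ = policy(obs_seqs, h0)         # BPTT reaches at most 16 steps back
```

In a full implementation the PPO loss would also mask out the zero-padded steps of the last chunk; this sketch only shows how sequence chunking bounds the backpropagation horizon.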

recurrent-ppo-truncated-bptt

Posts with mentions or reviews of recurrent-ppo-truncated-bptt. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-25.

What are some alternatives?

When comparing neroRL and recurrent-ppo-truncated-bptt, you can also consider the following projects:

ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.

pomdp-baselines - Simple (but often Strong) Baselines for POMDPs in PyTorch, ICML 2022

snakeAI - testing MLP, DQN, PPO, SAC, policy-gradient by snake

PPO-PyTorch - Minimal implementation of clipped objective Proximal Policy Optimization (PPO) in PyTorch

pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).

cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)

ppo-implementation-details - The source code for the blog post The 37 Implementation Details of Proximal Policy Optimization

popgym - Partially Observable Process Gym

episodic-transformer-memory-ppo - Clean baseline implementation of PPO using an episodic TransformerXL memory

gym-continuousDoubleAuction - A custom MARL (multi-agent reinforcement learning) environment where multiple agents trade against one another (self-play) in a zero-sum continuous double auction. Ray [RLlib] is used for training.