seed_rl VS rl-baselines-zoo

Compare seed_rl vs rl-baselines-zoo and see what their differences are.

seed_rl

SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference. Implements IMPALA and R2D2 algorithms in TF2 with SEED's architecture. (by google-research)
                 seed_rl               rl-baselines-zoo
Mentions         8                     2
Stars            760                   1,106
Growth           -                     -
Activity         0.0                   0.0
Latest commit    over 1 year ago       over 1 year ago
Language         Python                Python
License          Apache License 2.0    MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

seed_rl

Posts with mentions or reviews of seed_rl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-08.
  • Fast and hackable frameworks for RL research
    4 projects | /r/reinforcementlearning | 8 Mar 2023
    I'm tired of having my 200m frames of Atari take 5 days to run with dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but to the best of my knowledge doesn't work with replay buffers. I've been trying to get acme working but documentation is sparse and many of the features are broken.
  • [Q]Official seed_rl repo is archived.. any alternative seed_rl style drl repo??
    1 project | /r/reinforcementlearning | 17 Dec 2022
    Hey guys! I was fascinated by the concept of seed_rl when it first came out because I believe it could accelerate training speed in a local single-machine setup. But I found that the official repo was recently archived and is no longer maintained, so I'm looking for alternatives that support seed_rl-style distributed RL. Ray (or RLlib) is the most widely used DRL library, but it doesn't seem to follow the seed_rl style. Can anyone recommend distributed RL libraries for this, or ones that are good for research and allow a lot of code modification? Is RLlib worth using for single-machine local training despite those cons? Thank you!!
  • V-MPO - what do you think
    2 projects | /r/reinforcementlearning | 20 Jun 2022
    You may have a look at the implementation here: https://github.com/google-research/seed_rl
  • Need some help understanding what steps to take to debug a RL agent
    1 project | /r/learnmachinelearning | 17 Jul 2021
    For some context, this is an algo trading bot that's trained on intraday time series stock data. I'm using Google Research's SEED RL codebase with V-trace. The model has a sequence length of 240 and 30 features. Each iteration represents training on a batch of 256 samples, and 256 environments are being sampled from at a time. A reward is applied when the agent closes a position, and the size of the reward is based on how much profit (positive or negative) was made. The agent is forced to close its remaining position at the end of each day, resulting in a larger negative reward than normal if it had a large and unprofitable position. (A hypothetical sketch of this reward scheme appears after this list.)
  • Strange results from training with Google Cloud TPUs, seem to be very inefficient?
    1 project | /r/learnmachinelearning | 15 Jul 2021
    I've been doing some tests to find the most efficient configuration for training using Google Cloud AI Platform. The results are here (note that "step" in this case represents a single sample/observation/frame from a single environment; iteration represents running the minimization function on a single batch). The results are a bit strange. I was under the assumption that training with TPUs would be one of the most efficient ways to train, but instead it's the least efficient by a wide margin. I'm using Google Research's SEED RL codebase, so I'm assuming there are no bugs in my code.
  • Strange training results: why is a batch size of 1 more efficient than larger batch sizes, despite using a GPU/TPU?
    1 project | /r/learnmachinelearning | 14 Jul 2021
    I'm currently doing some tests in preparation for my first real bit of training. I'm using Google Cloud AI Platform to train, and am trying to find the optimal machine setup. It's a work in progress, but here's a table I'm putting together to get a sense of the efficiency of each setup. On the left you'll see the accelerator type, ordered from least to most expensive. Here you'll also find the number of accelerators used, the cost per hour, and the batch size. To the right are the average time it took to complete an entire training iteration and how long it took to complete the minimization step. You'll notice that the values are almost identical for each setup; I'm using Google Research's SEED RL, so I thought to record both values since I'm not sure exactly what happens between iterations. Turns out it's not much. There's also a calculation of the time it takes to complete a single "step" (i.e., a single observation from a single environment), as well as the average cost per step.
  • Having trouble passing custom flags with AI Platform
    1 project | /r/googlecloud | 29 Jun 2021
    I'm trying to get Google Research's SEED project working with some tweaks specific to my use case. One of the changes is that I need to pass more custom flags than they do in the samples they provide in their setup.sh file (i.e., environment, agent, actors_per_worker, etc.). I've added flags.DEFINE_integer/float/string/etc. calls to the project files for my custom flags, but it's throwing the following error: FATAL Flags parsing error: Unknown command line flag 'num_actors_with_summaries'. This error is not being thrown for the custom flags they pass, only the ones I've added. For the life of me I can't figure out what they're doing differently from me. (A minimal absl flags sketch appears after this list.)
  • New to Linux, trying to understand why a variable isn't getting assigned in an .sh file
    1 project | /r/linuxquestions | 20 Jun 2021
    I'm trying to get the SEED project by Google Research working. This is my first time doing anything with Linux, so I'm a bit lost in understanding why a specific line isn't working. The line in question is line 21 of this file. Line 22 outputs the following error: /../docker/push.sh: No such file or directory exists. I added a printf after line 21 as follows: printf "test: %s\n" $DIR. It outputs the following: test: .
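The reward scheme described in the algo-trading post above can be illustrated with a short sketch: a reward equal to the realized profit when the agent closes a position, and a harsher penalty for the forced end-of-day close of a large losing position. This is a hypothetical illustration only; the function and parameter names (close_reward, end_of_day_reward, eod_penalty_factor) are assumptions, not code from SEED RL or from the poster.

```python
# Hypothetical sketch of the reward scheme described in the post above.
# All names (position_size, entry_price, eod_penalty_factor) are assumptions.

def close_reward(position_size: float, entry_price: float, current_price: float) -> float:
    """Reward when the agent voluntarily closes a position: the realized
    profit, which may be positive or negative."""
    return position_size * (current_price - entry_price)


def end_of_day_reward(position_size: float, entry_price: float, current_price: float,
                      eod_penalty_factor: float = 2.0) -> float:
    """Reward for the forced close at the end of the trading day.

    An unprofitable position is penalized more heavily than a normal close,
    so holding a large losing position into the close is discouraged."""
    realized = position_size * (current_price - entry_price)
    if realized < 0:
        # Scale the loss up so a forced liquidation of a losing position hurts more.
        return eod_penalty_factor * realized
    return realized
```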
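The flag-parsing error in the AI Platform post above typically points at absl's flags module: a flag must be defined (its defining module imported) before absl parses the command line, and when launching through wrapper scripts the new flag also has to be forwarded all the way to the Python entry point. Below is a minimal, self-contained absl example; the flag name num_actors_with_summaries is taken from the post, but the module layout is an assumption, not SEED RL's actual code.

```python
# Minimal absl flags sketch; not taken from the SEED RL codebase.
from absl import app
from absl import flags

FLAGS = flags.FLAGS

# The flag must be defined (i.e. this module imported) before app.run()
# parses argv, otherwise absl raises
# "FATAL Flags parsing error: Unknown command line flag '...'".
flags.DEFINE_integer('num_actors_with_summaries', 1,
                     'Number of actors that write summaries.')


def main(argv):
    del argv  # Unused.
    print('num_actors_with_summaries =', FLAGS.num_actors_with_summaries)


if __name__ == '__main__':
    # Example: python this_script.py --num_actors_with_summaries=4
    app.run(main)
```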

rl-baselines-zoo

Posts with mentions or reviews of rl-baselines-zoo. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2021-03-22.

What are some alternatives?

When comparing seed_rl and rl-baselines-zoo you can also consider the following projects:

muzero-general - MuZero

rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

tianshou - An elegant PyTorch deep reinforcement learning library.

Minigrid - Simple and easily configurable grid world environments for reinforcement learning

Apache Impala - Apache Impala

pybullet-gym - Open-source implementations of OpenAI Gym MuJoCo environments for use with the OpenAI Gym Reinforcement Learning Research Platform.

machin - Reinforcement learning library (framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...

pytorch-blender - :sweat_drops: Seamless, distributed, real-time integration of Blender into PyTorch data pipelines

pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).

stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

tf2patcher - A patcher for TF2 that allows you to apply full-colored decals.

gym - A toolkit for developing and comparing reinforcement learning algorithms.