seed_rl VS sample-factory

Compare seed_rl vs sample-factory and see what their differences are.

seed_rl

SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference. Implements IMPALA and R2D2 algorithms in TF2 with SEED's architecture. (by google-research)
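The key idea in SEED's architecture is that actors only step environments; every observation is sent to a central learner, which runs policy inference on the accelerator and streams actions back. A rough single-process sketch of that control flow (the class names and the toy linear policy below are illustrative stand-ins, not SEED's actual API):

```python
import numpy as np

class CentralInferenceServer:
    """Stands in for SEED's learner: batches observations from all actors and
    runs one policy forward pass per step (illustrative, not the real API)."""
    def __init__(self, obs_dim: int, num_actions: int, seed: int = 0):
        self.num_actions = num_actions
        self.rng = np.random.default_rng(seed)
        self.weights = self.rng.normal(size=(obs_dim, num_actions))  # toy linear policy

    def infer(self, batched_obs: np.ndarray) -> np.ndarray:
        """One batched forward pass for every actor (the 'central inference' step)."""
        logits = batched_obs @ self.weights
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        return np.array([self.rng.choice(self.num_actions, p=p) for p in probs])

class ToyEnv:
    """Trivial environment: random observations, random rewards."""
    def __init__(self, obs_dim: int, seed: int):
        self.rng = np.random.default_rng(seed)
        self.obs_dim = obs_dim
    def reset(self) -> np.ndarray:
        return self.rng.normal(size=self.obs_dim)
    def step(self, action: int):
        return self.rng.normal(size=self.obs_dim), float(self.rng.normal()), False

def run(num_actors: int = 4, obs_dim: int = 8, num_actions: int = 3, steps: int = 5) -> None:
    server = CentralInferenceServer(obs_dim, num_actions)
    envs = [ToyEnv(obs_dim, i) for i in range(num_actors)]
    obs = np.stack([env.reset() for env in envs])
    for t in range(steps):
        actions = server.infer(obs)                      # actors never run the network
        transitions = [env.step(a) for env, a in zip(envs, actions)]
        obs = np.stack([next_obs for next_obs, _, _ in transitions])
        print(f"step {t}: actions {actions.tolist()}")

if __name__ == "__main__":
    run()
```

In SEED itself the actors and the learner are separate processes or machines communicating over gRPC streams, which is what lets the network live only on the central accelerator (TPU/GPU).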

sample-factory

High throughput synchronous and asynchronous reinforcement learning (by alex-petrenko)
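Sample Factory's throughput comes from decoupling experience collection from learning, so that neither side waits for the other. A minimal thread-and-queue sketch of that decoupling (the component names and fake rollouts are illustrative, not the library's API):

```python
import queue
import threading
import numpy as np

rollout_queue = queue.Queue(maxsize=8)  # bounded, so actors block only if the learner lags far behind

def actor(actor_id: int, num_rollouts: int, rollout_len: int = 32, obs_dim: int = 8) -> None:
    """Collect fake rollouts independently of the learner and hand them over via the queue."""
    rng = np.random.default_rng(actor_id)
    for _ in range(num_rollouts):
        rollout = rng.normal(size=(rollout_len, obs_dim)).astype(np.float32)
        rollout_queue.put(rollout)

def learner(num_actors: int, num_rollouts: int) -> None:
    """Consume rollouts as they arrive; collection keeps running while we 'train'."""
    for update in range(num_actors * num_rollouts):
        rollout = rollout_queue.get()
        loss = float((rollout ** 2).mean())  # stand-in for an (A)PPO update
        print(f"update {update}: rollout {rollout.shape}, loss {loss:.3f}")

actor_threads = [threading.Thread(target=actor, args=(i, 3)) for i in range(4)]
learner_thread = threading.Thread(target=learner, args=(4, 3))
for t in actor_threads + [learner_thread]:
    t.start()
for t in actor_threads + [learner_thread]:
    t.join()
```

The library itself uses separate worker processes and shared-memory buffers rather than Python threads, and offers both synchronous and asynchronous regimes, but the shape of the pipeline is the same.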
               seed_rl              sample-factory
Mentions       8                    6
Stars          760                  740
Growth         -                    -
Activity       0.0                  8.1
Last commit    over 1 year ago      about 2 months ago
Language       Python               Python
License        Apache License 2.0   MIT License
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

seed_rl

Posts with mentions or reviews of seed_rl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-08.
  • Fast and hackable frameworks for RL research
    4 projects | /r/reinforcementlearning | 8 Mar 2023
    I'm tired of having my 200m frames of Atari take 5 days to run with dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but to the best of my knowledge doesn't work with replay buffers. I've been trying to get acme working but documentation is sparse and many of the features are broken.
  • [Q]Official seed_rl repo is archived.. any alternative seed_rl style drl repo??
    1 project | /r/reinforcementlearning | 17 Dec 2022
    Hey guys! I was fascinated by the concept of seed_rl when it first came out, because I believe it could accelerate training speed in a local single-machine environment. But I found that the official repo was recently archived and is no longer maintained, so I'm looking for alternatives that offer seed_rl-style distributed RL. Ray (or RLlib) is the most widely used distributed RL library, but it doesn't seem to follow the seed_rl style. Can anyone recommend distributed RL libraries of that style, or ones that are good for research and heavy code modification? Is RLlib worth using for single local-machine training despite those cons? Thank you!!
  • V-MPO - what do you think
    2 projects | /r/reinforcementlearning | 20 Jun 2022
    You may have a look at the implementation here: https://github.com/google-research/seed_rl
  • Need some help understanding what steps to take to debug a RL agent
    1 project | /r/learnmachinelearning | 17 Jul 2021
    For some context, this is an algo trading bot that's trained on intraday time series stock data. I'm using Google Research's SEED RL codebase with vtrace. The model has a sequence length of 240, and 30 features. Each iteration represents training on a batch of 256 samples, and there are 256 environments being sampled from at a time. A reward is applied when the agent closes a position, and the size of the reward is based on how much profit (positive or negative) was made. The agent is forced to close its remaining position at the end of each day, resulting in a larger negative reward than normal if it had a large and unprofitable position.
  • Strange results from training with Google Cloud TPUs, seem to be very inefficient?
    1 project | /r/learnmachinelearning | 15 Jul 2021
    I've been doing some tests to find the most efficient configuration for training using Google Cloud AI Platform. The results are here (note that "step" in this case represents a single sample/observation/frame from a single environment; iteration represents running the minimization function on a single batch). The results are a bit strange. I was under the assumption that training with TPUs would be one of the most efficient ways to train, but instead it's the least efficient by a wide margin. I'm using Google Research's SEED RL codebase, so I'm assuming there are no bugs in my code.
  • Strange training results: why is a batch size of 1 more efficient than larger batch sizes, despite using a GPU/TPU?
    1 project | /r/learnmachinelearning | 14 Jul 2021
    I'm currently doing some tests in preparation for my first real bit of training. I'm using Google Cloud AI Platform to train, and am trying to find the optimal machine setup. It's a work in progress, but here's a table I'm putting together to get a sense of the efficiency of each setup. On the left you'll see the accelerator type, ordered from least to most expensive. Here you'll also find the number of accelerators used, the cost per hour, and the batch size. To the right are the average time it took to complete an entire training iteration and how long it took to complete the minimization step. You'll notice that the values are almost identical for each setup; I'm using Google Research's SEED RL, so I thought to record both values since I'm not sure exactly what happens between iterations. Turns out it's not much. There's also a calculation of the time it takes to complete a single "step" (aka, a single observation from a single environment), as well as the average cost per step.
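The "cost per step" figure described in the post above boils down to the per-iteration cost divided by the number of samples consumed per iteration. A small sketch of that arithmetic (the hourly rates and timings below are placeholders, not the poster's measurements):

```python
def cost_per_step(cost_per_hour: float, seconds_per_iteration: float, batch_size: int) -> float:
    """Average dollar cost of one 'step' (one observation from one environment),
    given the accelerator's hourly price and the measured time per training iteration."""
    cost_per_iteration = cost_per_hour * seconds_per_iteration / 3600.0
    return cost_per_iteration / batch_size

# Placeholder numbers comparing two hypothetical setups: a cheaper accelerator that is
# slower per iteration can still win on cost per step.
print(cost_per_step(cost_per_hour=0.45, seconds_per_iteration=2.0, batch_size=256))  # ~9.8e-07
print(cost_per_step(cost_per_hour=8.00, seconds_per_iteration=1.5, batch_size=256))  # ~1.3e-05
```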
  • Having trouble passing custom flags with AI Platform
    1 project | /r/googlecloud | 29 Jun 2021
    I'm trying to get Google Research's SEED project working with some tweaks specific to my use case. One of the changes is that I need to pass more custom flags than they do in the samples they provide in their setup.sh file (i.e., environment, agent, actors_per_worker, etc.). I've added flags.DEFINE_integer/float/string/etc. calls to the project files for my custom flags, but it's throwing the following error: FATAL Flags parsing error: Unknown command line flag 'num_actors_with_summaries'. This error is not being thrown for the custom flags they pass, only the ones I've added. For the life of me I can't figure out what it is they're doing differently than me.
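These flags are parsed by absl, and one common cause of an "Unknown command line flag" error is that the module containing the corresponding flags.DEFINE_* call is never imported by the binary that actually parses the command line. A minimal self-contained example of the usual pattern (the flag name comes from the post; the file layout is hypothetical):

```python
# train.py -- run as: python train.py --num_actors_with_summaries=4
from absl import app, flags

FLAGS = flags.FLAGS

# The DEFINE_* call must execute (i.e. its module must be imported) in the same
# program that parses argv, otherwise absl reports the flag as unknown.
flags.DEFINE_integer('num_actors_with_summaries', 1,
                     'Number of actors that write TensorBoard summaries.')

def main(argv):
    del argv  # unused
    print('num_actors_with_summaries =', FLAGS.num_actors_with_summaries)

if __name__ == '__main__':
    app.run(main)
```

If the flag is consumed by a different job than the one it is passed to (SEED launches separate actor and learner jobs), the DEFINE_* call has to live in, or be imported by, that job's entry point as well.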
  • New to Linux, trying to understand why a variable isn't getting assigned in an .sh file
    1 project | /r/linuxquestions | 20 Jun 2021
    I'm trying to get the SEED project by Google Research working. This is my first time doing anything with Linux, so I'm a bit lost in understanding why a specific line isn't working. The line in question is line 21 of this file. Line 22 outputs the following error: /../docker/push.sh: No such file or directory exists. I added a printf after line 21 as follows: printf "test: %s\n" $DIR. It outputs the following: test: .

sample-factory

Posts with mentions or reviews of sample-factory. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-05-22.
  • A minimal RL library for infinite horizon tasks
    2 projects | /r/reinforcementlearning | 22 May 2023
    I take a lot of inspiration from Sample Factory and RLlib for my own RL library's implementation. Although I thoroughly enjoy both of these libraries, they just didn't quite fit with my use case, which motivated me to start my own. Hopefully someone finds use in rlstack, whether through direct usage or as inspiration for their own personalized library.
  • Fast and hackable frameworks for RL research
    4 projects | /r/reinforcementlearning | 8 Mar 2023
    I'm tired of having my 200m frames of Atari take 5 days to run with dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but to the best of my knowledge doesn't work with replay buffers. I've been trying to get acme working but documentation is sparse and many of the features are broken.
  • Multi-agent Decentralized Training with a PettingZoo environment
    1 project | /r/reinforcementlearning | 19 Jul 2022
    Hi, try sample-factory
  • How is IMPALA as a framework?
    2 projects | /r/reinforcementlearning | 1 Oct 2021
    Sample Factory: https://github.com/alex-petrenko/sample-factory
  • The Myth of a Superhuman AI
    1 project | news.ycombinator.com | 25 Aug 2021
    Everything in this reply is wrong.

    In AlphaZero for example, there were 44 million training games total for 700,000 steps of training for the full 9 hours.

    Turning that into human-scale numbers: 44 million games, with 60 moves per game on average and 1 second of thinking time per move,

    > 44,000,000 * 60 / 60 / 60 / 24 / 365 ≈ 83.7 years of training experience in 9 hours

    The whole field of Reinforcement learning has agents training and playing games for many orders of magnitude more time than a human ever will. In fact, we can scale this to over 100k actions per second on a single machine:

    https://github.com/alex-petrenko/sample-factory

    Then, there is also distributed Reinforcement Learning, where hundreds of agents can play on different machines and share experience; see AlphaZero, Leela Zero, the R2D2 and R2D3 agents, Ape-X, ACER, and Asynchronous PPO.

    > but the data isn't useful without the context of experience

    The experience is the data in Reinforcement Learning.

    > and all processing power can do is overfit a model without experience.

    That is wrong; the agents perform what is called exploration to avoid getting stuck in simple strategies.

    > Even if we put AI into an army of robots running around and experiencing things, there are still scaling limits to encoding and communicating knowledge and understanding.

    True, but machines scale better because they speak the same language, or they can learn to tune their language to get their message across.

    > Human organizations are a great example of the scaling limits of intelligence.

    Human organization is a testament to how far we can get with something as limiting as commonly used language. The language we use to communicate is subject to misinterpretation because of our subjective experiences; this limitation is not shared by machines.

  • Best PyTorch RL library for doing research
    9 projects | /r/reinforcementlearning | 30 Apr 2021
    I borrow a lot of performance tricks from sample factory, which is awesome but hard to modify beyond its original APPO algorithm. rlpyt was more modular, and I borrowed more ideas from it (namedarraytuple), but it was still too limited.
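The namedarraytuple mentioned here is rlpyt's trick of bundling all of a rollout's arrays into one object that can be indexed or sliced as a whole. A rough sketch of the idea (a simplified stand-in, not rlpyt's actual implementation):

```python
import numpy as np

class ArrayBundle:
    """Bundle of arrays sharing a leading dimension, sliced as one object
    (a simplified stand-in for rlpyt's namedarraytuple)."""
    def __init__(self, **fields):
        self._fields = dict(fields)

    def __getattr__(self, name):
        try:
            return self._fields[name]
        except KeyError:
            raise AttributeError(name)

    def __getitem__(self, index):
        # Apply the same index (e.g. a time or batch slice) to every leaf array.
        return ArrayBundle(**{k: v[index] for k, v in self._fields.items()})

# Usage: one object holds a whole rollout and is sliced like a single array.
rollout = ArrayBundle(obs=np.zeros((100, 8), dtype=np.float32),
                      action=np.zeros(100, dtype=np.int64),
                      reward=np.zeros(100, dtype=np.float32))
minibatch = rollout[:32]       # slices obs, action, and reward together
print(minibatch.obs.shape)     # (32, 8)
```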

What are some alternatives?

When comparing seed_rl and sample-factory you can also consider the following projects:

muzero-general - MuZero

cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)

tianshou - An elegant PyTorch deep reinforcement learning library.

stable-baselines3 - PyTorch version of Stable Baselines, reliable implementations of reinforcement learning algorithms.

rl-baselines-zoo - A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.

Apache Impala - Apache Impala

machin - Reinforcement learning library(framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...

rl8 - A high throughput, end-to-end RL library for infinite horizon tasks.

pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).

rlpyt - Reinforcement Learning in PyTorch