seed_rl

SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference. Implements IMPALA and R2D2 algorithms in TF2 with SEED's architecture. (by google-research)

Seed_rl Alternatives

Similar projects and alternatives to seed_rl

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives. A higher count therefore suggests a more popular or more similar seed_rl alternative.

seed_rl reviews and mentions

Posts with mentions or reviews of seed_rl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-08.
  • Fast and hackable frameworks for RL research
    4 projects | /r/reinforcementlearning | 8 Mar 2023
    I'm tired of having my 200M frames of Atari take 5 days to run with dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but, to the best of my knowledge, doesn't work with replay buffers. I've been trying to get acme working, but documentation is sparse and many of the features are broken.
  • [Q]Official seed_rl repo is archived.. any alternative seed_rl style drl repo??
    1 project | /r/reinforcementlearning | 17 Dec 2022
    Hey guys! I was fascinated by the concept of seed_rl when it first came out, because I believed it could accelerate training speed on a single local machine. But I found that the official repo was recently archived and is no longer maintained, so I'm looking for alternatives that support seed_rl-style distributed RL. Ray (or RLlib) is the most widely used DRL library, but it doesn't seem to use the seed_rl style. Can anyone recommend distributed RL libraries that follow it, or that are good for research and heavy code modification? Is RLlib worth using for single-machine local training despite those cons? Thank you! (A minimal sketch of the seed_rl-style centralized-inference pattern appears after this list.)
  • V-MPO - what do you think
    2 projects | /r/reinforcementlearning | 20 Jun 2022
    You may have a look at the implementation here: https://github.com/google-research/seed_rl
  • Need some help understanding what steps to take to debug a RL agent
    1 project | /r/learnmachinelearning | 17 Jul 2021
    For some context, this is an algo-trading bot that's trained on intraday time-series stock data. I'm using Google Research's SEED RL codebase with vtrace. The model has a sequence length of 240 and 30 features. Each iteration represents training on a batch of 256 samples, and there are 256 environments being sampled from at a time. A reward is applied when the agent closes a position, and the size of the reward is based on how much profit (positive or negative) was made. The agent is forced to close its remaining position at the end of each day, resulting in a larger negative reward than normal if it had a large and unprofitable position. (A hedged sketch of this reward scheme appears after this list.)
  • Strange results from training with Google Cloud TPUs, seem to be very inefficient?
    1 project | /r/learnmachinelearning | 15 Jul 2021
    I've been doing some tests to find the most efficient configuration for training using Google Cloud AI Platform. The results are here (note that "step" in this case represents a single sample/observation/frame from a single environment; iteration represents running the minimization function on a single batch). The results are a bit strange. I was under the assumption that training with TPUs would be one of the most efficient ways to train, but instead it's the least efficient by a wide margin. I'm using Google Research's SEED RL codebase, so I'm assuming there are no bugs in my code.
  • Strange training results: why is a batch size of 1 more efficient than larger batch sizes, despite using a GPU/TPU?
    1 project | /r/learnmachinelearning | 14 Jul 2021
    I'm currently doing some tests in preparation for my first real bit of training. I'm using Google Cloud AI Platform to train and am trying to find the optimal machine setup. It's a work in progress, but here's a table I'm putting together to get a sense of the efficiency of each setup. On the left you'll see the accelerator type, ordered from least to most expensive. Here you'll also find the number of accelerators used, the cost per hour, and the batch size. To the right are the average time it took to complete an entire training iteration and how long it took to complete the minimization step. You'll notice that the values are almost identical for each setup; I'm using Google Research's SEED RL, so I recorded both values since I'm not sure exactly what happens between iterations. Turns out it's not much. There's also a calculation of the time it takes to complete a single "step" (i.e., a single observation from a single environment), as well as the average cost per step. (A worked cost-per-step calculation appears after this list.)
  • Having trouble passing custom flags with AI Platform
    1 project | /r/googlecloud | 29 Jun 2021
    I'm trying to get Google Research's SEED project working with some tweaks specific to my use case. One of the changes is that I need to pass more custom flags than they do in the samples they provide in their setup.sh file (i.e., environment, agent, actors_per_worker, etc.). I've added flags.DEFINE_integer/float/string/etc. calls to the project files for my custom flags, but it's throwing the following error: FATAL Flags parsing error: Unknown command line flag 'num_actors_with_summaries'. This error is not being thrown for the custom flags they pass, only the ones I've added. For the life of me I can't figure out what they're doing differently than me. (A minimal flag-definition example appears after this list.)
  • New to Linux, trying to understand why a variable isn't getting assigned in an .sh file
    1 project | /r/linuxquestions | 20 Jun 2021
    I'm trying to get the SEED project by Google Research working. This is my first time doing anything with Linux, so I'm a bit lost in understanding why a specific line isn't working. The line in question is line 21 of this file. Line 22 outputs the following error: /../docker/push.sh: No such file or directory exists. I added a printf after line 21 as follows: printf "test: %s\n" $DIR. It outputs the following: test: .
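
For the 17 Dec 2022 question about seed_rl-style libraries: the core of SEED's architecture is that actors only step environments, while a central learner batches observations and runs all policy inference on the accelerator. Below is a minimal single-machine sketch of that pattern; it uses plain Python queues and threads instead of SEED's actual gRPC streaming, and every name in it is illustrative rather than SEED's real API.

    # Sketch of SEED-style centralized inference (illustrative only; the real
    # seed_rl streams observations to the learner over gRPC and uses TF2).
    import queue
    import threading

    import numpy as np

    NUM_ACTORS = 4
    BATCH = NUM_ACTORS  # inference batch size; SEED batches across actors

    obs_queue = queue.Queue()  # actors -> learner: (actor_id, observation)
    act_queues = [queue.Queue() for _ in range(NUM_ACTORS)]  # learner -> actor

    def policy(batched_obs):
        # Stand-in for one batched forward pass on the accelerator.
        return np.random.randint(0, 4, size=len(batched_obs))

    def actor(actor_id, steps=100):
        obs = np.zeros(8, dtype=np.float32)  # stand-in for env.reset()
        for _ in range(steps):
            obs_queue.put((actor_id, obs))       # ship observation to learner
            action = act_queues[actor_id].get()  # block until learner replies
            obs = obs + action                   # stand-in for env.step(action)

    def learner(total_steps=100 * NUM_ACTORS):
        done = 0
        while done < total_steps:
            ids, batch = [], []
            for _ in range(BATCH):  # gather one inference batch
                actor_id, obs = obs_queue.get()
                ids.append(actor_id)
                batch.append(obs)
            actions = policy(np.stack(batch))  # single batched inference call
            for actor_id, action in zip(ids, actions):
                act_queues[actor_id].put(action)
            done += BATCH

    threads = [threading.Thread(target=actor, args=(i,)) for i in range(NUM_ACTORS)]
    threads.append(threading.Thread(target=learner))
    for t in threads:
        t.start()
    for t in threads:
        t.join()

The point of the pattern is that the model lives in exactly one place, so actors stay tiny and CPU-only; whether any maintained library reproduces it faithfully is precisely the open question in the post above.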
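
For the 17 Jul 2021 debugging question: the reward scheme described there (reward only on position close, scaled by realized profit, with a forced close at end of day) could be sketched roughly as follows. The function and argument names are hypothetical, not taken from the poster's code.

    def step_reward(position_size, entry_price, price, agent_closes, end_of_day):
        """Hypothetical sketch of the reward scheme described in the post."""
        # The environment force-closes any remaining position at end of day.
        closing = agent_closes or (end_of_day and position_size != 0)
        if not closing or position_size == 0:
            return 0.0  # open positions earn nothing until they are closed
        # Realized profit, positive or negative. A large unprofitable position
        # that gets force-closed at end of day naturally produces a larger
        # negative reward than a normal close.
        return position_size * (price - entry_price)

A sparse, end-loaded reward like this is a common source of credit-assignment trouble, which is worth ruling out before suspecting the training loop itself.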
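
For the two AI Platform efficiency posts from July 2021: with the poster's definitions ("step" = one frame from one environment, "iteration" = one minimization over a batch), cost per step follows from simple arithmetic. A worked example, assuming each batch element is a fresh unroll; the wall-time and price numbers are placeholders, not the poster's measurements.

    # Back-of-the-envelope cost-per-step accounting. Assumes one training batch
    # consumes batch_size fresh unrolls of unroll_length environment frames.
    batch_size = 256         # samples per minimization step (from the post)
    unroll_length = 240      # sequence length per sample (from the post)
    iteration_seconds = 1.5  # placeholder: measured wall time per iteration
    cost_per_hour = 4.50     # placeholder: machine + accelerator price, USD

    frames_per_iteration = batch_size * unroll_length  # 61,440 env steps
    seconds_per_frame = iteration_seconds / frames_per_iteration
    cost_per_million_steps = cost_per_hour / 3600 * seconds_per_frame * 1e6

    print(f"frames per iteration: {frames_per_iteration}")
    print(f"cost per 1M env steps: ${cost_per_million_steps:.2f}")

Comparing accelerators on cost per million environment steps, rather than on iteration time alone, is usually what reveals whether a TPU is actually paying for itself.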
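
For the 29 Jun 2021 flags question: with abseil, a flag only exists if the module containing its flags.DEFINE_* call has been imported by the binary that parses argv, and in a multi-process setup like SEED's the launcher scripts must also forward the flag to the right binary; "Unknown command line flag" usually means one of those two links is missing. Here is a minimal self-contained example of the define/parse pattern; the flag name is the poster's, the rest is illustrative.

    # Minimal absl flags example. The DEFINE_* call must run (i.e. its module
    # must be imported) before app.run() parses argv; otherwise abseil fails
    # with: FATAL Flags parsing error: Unknown command line flag '...'.
    from absl import app
    from absl import flags

    FLAGS = flags.FLAGS

    flags.DEFINE_integer('num_actors_with_summaries', 1,
                         'Number of actors that write TF summaries.')

    def main(argv):
        del argv  # unused
        print('num_actors_with_summaries =', FLAGS.num_actors_with_summaries)

    if __name__ == '__main__':
        app.run(main)

Run it as python my_binary.py --num_actors_with_summaries=4. If only your new flags fail, check that the module defining them is imported by the specific entry point that crashes, and that setup.sh-style wrapper scripts pass the flag through instead of dropping it.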

Stats

Basic seed_rl repo stats
Mentions: 8
Stars: 760
Activity: 0.0
Last commit: over 1 year ago