tf2-bot-kicker VS seed_rl

Compare tf2-bot-kicker vs seed_rl and see what their differences are.

tf2-bot-kicker

A python program that kicks those nasty name-stealing bots in TF2 (by boyonk913)
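
For context, a generic sketch of how name-stealer auto-kickers of this kind typically work (illustrative only; not claimed to be this project's actual implementation): parse the player list from TF2's "status" console output, flag any name that appears more than once, since the bots impersonate real players by copying their names, and then issue a callvote kick against the impostor.

```python
# Illustrative sketch only -- not boyonk913/tf2-bot-kicker's actual code.
# Idea: read the `status` output that TF2 logs to console.log, then flag any
# player whose name duplicates another player's, since name-stealing bots
# impersonate legitimate players by copying their names exactly.

import re
from collections import Counter

# `status` lines look roughly like:  #  42 "Name"  [U:1:11111]  12:34  56  0 active
STATUS_LINE = re.compile(r'#\s+(\d+)\s+"(.+)"\s+(\[U:\d:\d+\])')

def find_name_stealers(status_output: str):
    """Return (userid, name) pairs whose name is shared with another player."""
    players = STATUS_LINE.findall(status_output)
    name_counts = Counter(name for _, name, _ in players)
    return [(uid, name) for uid, name, _ in players if name_counts[name] > 1]

sample = '''# 42 "RealPlayer"  [U:1:11111]  12:34  56  0 active
# 43 "RealPlayer"  [U:1:22222]  00:10  90  0 active
# 44 "SomeoneElse"  [U:1:33333]  05:00  70  0 active'''

for uid, name in find_name_stealers(sample):
    # A real script would issue `callvote kick <userid>` in the game console here.
    print(f'would votekick #{uid} ("{name}")')
```

The same duplicate-name check generalizes to a configurable list of known bot names, which is what "any others you tell it about" refers to in the posts below.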

seed_rl

SEED RL: Scalable and Efficient Deep-RL with Accelerated Central Inference. Implements IMPALA and R2D2 algorithms in TF2 with SEED's architecture. (by google-research)
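
The "Accelerated Central Inference" part of that description is the key architectural idea: actors only step environments and ship every observation to the learner, which runs the policy in large batches on the accelerator and streams actions back. A rough, runnable sketch of the concept (class and function names, the toy linear policy, and all shapes are made up; this is not SEED's actual API):

```python
# Illustrative sketch of SEED-style central inference (made-up names, not
# SEED's actual API): actors only produce observations; the central learner
# batches them, runs the policy in one forward pass on the accelerator, and
# hands an action back to each actor.

import numpy as np

class CentralInferenceServer:
    def __init__(self, obs_dim, num_actions, batch_size):
        self.policy = np.random.randn(obs_dim, num_actions) * 0.01  # toy policy
        self.batch_size = batch_size
        self.pending = []  # (actor_id, observation) waiting to be batched

    def request_action(self, actor_id, observation):
        """In real SEED this is an asynchronous gRPC call from each actor."""
        self.pending.append((actor_id, observation))
        if len(self.pending) < self.batch_size:
            return None  # wait until a full batch has accumulated
        actor_ids, batch = zip(*self.pending)
        self.pending = []
        logits = np.stack(batch) @ self.policy    # one batched forward pass
        return dict(zip(actor_ids, logits.argmax(axis=-1)))

# Actors hold no copy of the policy; they only step environments.
server = CentralInferenceServer(obs_dim=4, num_actions=2, batch_size=8)
for actor_id in range(8):
    actions = server.request_action(actor_id, np.random.randn(4))
print(actions)  # actions for all eight queued actors, computed together
```

The trade-off is that the model lives in one place, so inference is batched on the accelerator and actors stay cheap, at the cost of an inference round-trip on every environment step, which SEED amortizes by running many environments concurrently.
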
                tf2-bot-kicker                               seed_rl
Mentions        6                                            8
Stars           5                                            760
Growth          -                                            -
Activity        0.0                                          0.0
Latest commit   over 2 years ago                             over 1 year ago
Language        Python                                       Python
License         GNU General Public License v3.0 or later     Apache License 2.0
Mentions - the total number of mentions we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity - a relative number indicating how actively a project is being developed; recent commits carry more weight than older ones.
For example, an activity of 9.0 means a project is among the top 10% of the most actively developed projects we track.

tf2-bot-kicker

Posts with mentions or reviews of tf2-bot-kicker. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-06-03.
  • a bot that hunt other bots
    1 project | /r/tf2 | 3 Jun 2022
    I had the same idea, then came across a script that will auto-kick the name stealing bots (and any others you tell it about) - https://github.com/boyonk913/tf2-bot-kicker
  • Auto bot kicker script
    1 project | /r/tf2 | 3 Jun 2022
  • Auto-Anti bot software, why is no one using this?
    1 project | /r/tf2 | 1 May 2021
    Link to the download site: boyonkgit/tf2-bot-kicker: A python program that kicks those nasty name-stealing bots in TF2 (github.com)
  • At this point I don't think Valve cares anymore.
    1 project | /r/tf2 | 29 Apr 2021
    There's an auto-votekicking script, if everyone had it the bots would get kicked about as fast as they could join. I haven't tried personally because I'm lazy but here it is. GitHub - boyonkgit/tf2-bot-kicker: A python program that kicks those nasty name-stealing bots in TF2
  • I created a TF2 Bot Kicker! (open source)
    1 project | /r/truetf2 | 25 Apr 2021
    You can download the new version using the same link as always.
    2 projects | /r/tf2 | 25 Apr 2021
    All information can be found here: https://github.com/boyonkgit/tf2-bot-kicker#readme, but I'll quickly paste the tldr how it works below.

seed_rl

Posts with mentions or reviews of seed_rl. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-03-08.
  • Fast and hackable frameworks for RL research
    4 projects | /r/reinforcementlearning | 8 Mar 2023
    I'm tired of having my 200m frames of Atari take 5 days to run with dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but to the best of my knowledge doesn't work with replay buffers. I've been trying to get acme working but documentation is sparse and many of the features are broken.
  • [Q]Official seed_rl repo is archived.. any alternative seed_rl style drl repo??
    1 project | /r/reinforcementlearning | 17 Dec 2022
    Hey guys! I was fascinated by the concept of seed_rl when it first came out, because I believed it could accelerate training on a single local machine. But I found that the official repo was recently archived and is no longer maintained, so I'm looking for alternatives that let me do seed_rl-style distributed RL. Ray (or RLlib) is the most widely used DRL library, but it doesn't seem to use the seed_rl style. Can anyone recommend distributed RL libraries for this, or ones that are good for research and heavy code modification? Is RLlib worth using for single-machine training despite those cons? Thank you!!
  • V-MPO - what do you think
    2 projects | /r/reinforcementlearning | 20 Jun 2022
    You may have a look at the implementation from here. https://github.com/google-research/seed_rl
  • Need some help understanding what steps to take to debug a RL agent
    1 project | /r/learnmachinelearning | 17 Jul 2021
    For some context, this is an algo trading bot that's trained on intraday time series stock data. I'm using Google Research's SEED RL codebase with vtrace. The model has a sequence length of 240, and 30 features. Each iteration represents training on a batch of 256 samples, and there are 256 environments being sampled from at a time. A reward is applied when the agent closes a position, and the size of the reward is based on how much profit (positive or negative) was made. The agent is forced to close its remaining position at the end of each day, resulting in a larger negative reward than normal if it had a large and unprofitable position.
  • Strange results from training with Google Cloud TPUs, seem to be very inefficient?
    1 project | /r/learnmachinelearning | 15 Jul 2021
    I've been doing some tests to find the most efficient configuration for training using Google Cloud AI Platform. The results are here (note that "step" in this case represents a single sample/observation/frame from a single environment; iteration represents running the minimization function on a single batch). The results are a bit strange. I was under the assumption that training with TPUs would be one of the most efficient ways to train, but instead it's the least efficient by a wide margin. I'm using Google Research's SEED RL codebase, so I'm assuming there are no bugs in my code.
  • Strange training results: why is a batch size of 1 more efficient than larger batch sizes, despite using a GPU/TPU?
    1 project | /r/learnmachinelearning | 14 Jul 2021
    I'm currently doing some tests in preparation for my first real bit of training. I'm using Google Cloud AI Platform to train, and am trying to find the optimal machine setup. It's a work in progress, but here's a table I'm putting together to get a sense of the efficiency of each setup. On the left you'll see the accelerator type, ordered from least to most expensive. Here you'll also find the number of accelerators used, the cost per hour, and the batch size. To the right are the average time it took to complete an entire training iteration and how long it took to complete the minimization step. You'll notice that the values are almost identical for each setup; I'm using Google Research's SEED RL, so I thought to record both values since I'm not sure exactly everything that happens between iterations. Turns out it's not much. There's also a calculation of the time it takes to complete a single "step" (aka, a single observation from a single environment), as well as the average cost per step. (A worked example of this per-step arithmetic is sketched after this list.)
  • Having trouble passing custom flags with AI Platform
    1 project | /r/googlecloud | 29 Jun 2021
    I'm trying to get Google Research's SEED project working with some tweaks specific to my use case. One of the changes is that I need to pass more custom flags than they do in the samples they provide in their setup.sh file (i.e., environment, agent, actors_per_worker, etc.). I've added flags.DEFINE_integer/float/string/etc calls to the project files for my custom flags, but it's throwing the following error: FATAL Flags parsing error: Unknown command line flag 'num_actors_with_summaries'. This error is not being thrown for the custom flags they pass, only the ones I've added. For the life of me I can't figure out what it is they're doing differently than me. (A minimal absl flags example is sketched after this list.)
  • New to Linux, trying to understand why a variable isn't getting assigned in an .sh file
    1 project | /r/linuxquestions | 20 Jun 2021
    I'm trying to get the SEED project by Google Research working. This is my first time doing anything with Linux, so I'm a bit lost in understanding why a specific line isn't working. The line in question is line 21 of this file. Line 22 outputs the following error: /../docker/push.sh: No such file or directory exists. I added a printf after line 21 as follows: printf "test: %s\n" $DIR. It outputs the following: test: .
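
For the TPU/GPU efficiency posts above, the per-step metric they describe comes down to simple arithmetic. A worked sketch under stated assumptions (the price and timing values are illustrative, not the poster's measurements, and it assumes each batch element is an unroll of consecutive environment steps, as in SEED's IMPALA setup):

```python
# Worked example of the "cost per step" metric described in the posts above.
# All numbers are illustrative; a "step" is one observation from one
# environment, and one iteration is one minimization over one batch.

cost_per_hour = 2.50       # accelerator price in USD/hour (assumed)
iteration_seconds = 1.2    # measured wall time for one training iteration (assumed)
batch_size = 256           # samples per batch, as in the posts
unroll_length = 240        # environment steps per sample (assumed, SEED-style unrolls)

steps_per_iteration = batch_size * unroll_length
seconds_per_step = iteration_seconds / steps_per_iteration
cost_per_step = cost_per_hour / 3600 * seconds_per_step

print(f"{steps_per_iteration} steps/iteration")
print(f"{seconds_per_step * 1e6:.1f} microseconds per step")
print(f"${cost_per_step:.2e} per step")
```

Normalizing very different accelerators and batch sizes to cost per step is what makes the setups in those posts comparable at all.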
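
For the "Unknown command line flag" post: with absl, a flag is only recognized if the module that calls flags.DEFINE_* is imported by the binary that parses the command line, so passing a custom flag to an entry point that never imports its definition fails in exactly this way; in a multi-binary setup like SEED's separate actor and learner programs, defining the flag in only one of them is a common cause. A minimal, self-contained example (the flag name is reused from the post purely for illustration):

```python
# Minimal absl flags example. The flag below is only known to this program
# because this module defines it; passing --num_actors_with_summaries to a
# binary that never imports a module defining it makes absl exit with
# "FATAL Flags parsing error: Unknown command line flag ...".

from absl import app, flags

FLAGS = flags.FLAGS
flags.DEFINE_integer('num_actors_with_summaries', 1,
                     'Custom flag name reused from the post above for illustration.')

def main(argv):
    del argv  # unused
    print('num_actors_with_summaries =', FLAGS.num_actors_with_summaries)

if __name__ == '__main__':
    app.run(main)
```

Run it as: python example.py --num_actors_with_summaries=4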

What are some alternatives?

When comparing tf2-bot-kicker and seed_rl you can also consider the following projects:

tf2_bot_detector - Automatically detects and votekicks cheaters/bots in TF2 casual.

muzero-general - MuZero

caveclient - A useful client for Team Fortress 2, designed to make your TF2 experience better.

tianshou - An elegant PyTorch deep reinforcement learning library.

yolov4-custom-functions - A Wide Range of Custom Functions for YOLOv4, YOLOv4-tiny, YOLOv3, and YOLOv3-tiny Implemented in TensorFlow, TFLite, and TensorRT.

rl-baselines-zoo - A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.

Apache Impala - Apache Impala

machin - Reinforcement learning library(framework) designed for PyTorch, implements DQN, DDPG, A2C, PPO, SAC, MADDPG, A3C, APEX, IMPALA ...

pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).

tf2patcher - A patcher for TF2 that allows you to apply full-colored decals.

DI-engine - OpenDILab Decision AI Engine

CleanTF2plus - Clean TF2's sequel