tmrl vs stable-baselines3
| | tmrl | stable-baselines3 |
|---|---|---|
| Mentions | 11 | 46 |
| Stars | 422 | 7,894 |
| Growth | 10.0% | 5.2% |
| Activity | 6.3 | 8.2 |
| Latest Commit | 4 days ago | 5 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
tmrl
- Problem with Truncated Quantile Critics (TQC) and the n-step learning algorithm
Hi all! I'm implementing TQC with n-step learning in TrackMania (I forked the original repo from https://github.com/trackmania-rl/tmrl; my modified version is here: https://github.com/Pheoxis/AITrackmania/tree/main). It runs, but I'm fairly sure I implemented n-step learning incorrectly, and as a beginner I don't know what I did wrong. Here's my code before implementing the n-step algorithm: https://github.com/Pheoxis/AITrackmania/blob/main/tmrl/custom/custom_algorithms.py. If anyone could check what I did wrong, I would be very grateful. I will also attach some plots from my last training run and the outputs of the printed lines (print.txt); maybe they will help :) If you need any additional information, feel free to ask.
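For context, a minimal sketch of how n-step returns are typically computed from a sequence of stored transitions; the names (`transitions`, `gamma`, `n`) are illustrative and not taken from the linked repo:

```python
def n_step_return(transitions, gamma, n):
    """Compute the n-step return for transitions[0].

    `transitions` is a list of (obs, action, reward, next_obs, done)
    tuples in time order. The return accumulates up to n discounted
    rewards, stopping early if the episode terminates.
    """
    ret, discount = 0.0, 1.0
    for k in range(min(n, len(transitions))):
        _, _, reward, next_obs, done = transitions[k]
        ret += discount * reward
        discount *= gamma
        if done:
            # no bootstrapping past the end of an episode
            return ret, next_obs, True, discount
    # bootstrap from the state n steps ahead:
    # target = ret + discount * value(next_obs)
    return ret, next_obs, False, discount
```

One common pitfall worth checking: the bootstrap term must be discounted by `gamma ** n` (returned here as `discount`), not by a single `gamma`, otherwise the targets are silently biased.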
- Training an unbeatable AI in Trackmania [video]
- Can you beat trackmania AI?
- Python RL Environments on Windows
I don't know if it fits your needs: https://github.com/trackmania-rl/tmrl
- New to reinforcement learning
Hi, if you are going to train a deep RL algorithm on a real robot and you are a beginner, I suggest you try out tmrl. It lets you run a readily available algorithm (Soft Actor-Critic) in real time on an actual video game (TrackMania), as a real-world-like proxy for all the concerns you will encounter on a real robot, and then fairly easily develop your own robot-learning pipeline from there. The repo has a huge tutorial exactly for this purpose.
- AI Learns Mario Kart Wii (Rainbow DQN)
I see, and how did you handle the simulator and dynamics? Did you "step" the game, or did you capture screenshots at a constant time interval in real time? I am asking because tmrl uses the second option in TrackMania, which makes the approach generalizable to all video games including Mario Kart, but so far we have had no such success training CNNs with the Soft Actor-Critic family. Our setting is a bit harder because it uses continuous inputs including the gas and brake (I suppose you always send maximum gas?), and we don't use punishments for collisions or tricks of that kind. Still, if it works that well in your setting, I think it should work similarly in ours.
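As an illustration of the second option, a rough sketch of capturing screenshots at a constant time interval with the `mss` library (an assumption for this example; tmrl's actual capture code is different):

```python
import time
import numpy as np
from mss import mss  # third-party screen-capture library, assumed installed

CAPTURE_PERIOD = 0.05  # 20 Hz between observations; illustrative value

with mss() as screen:
    monitor = screen.monitors[1]  # primary monitor
    next_tick = time.monotonic()
    while True:
        next_tick += CAPTURE_PERIOD
        frame = np.asarray(screen.grab(monitor))  # BGRA frame as a NumPy array
        # ... feed `frame` to the policy and send controls to the game ...
        # sleep until the next tick so observations stay evenly spaced
        time.sleep(max(0.0, next_tick - time.monotonic()))
```

Because the game keeps running while the policy computes its action, this is a real-time environment rather than a stepped simulator, which is exactly the setting rtgym is designed to handle.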
- Have you used any good DRL library?
I am very disappointed these guys don't cite tmrl :D
- osu!
If you just want to make a Gym environment, focus on rtgym. You will want a way of retrieving observations, say raw images; you can do this with pywin32, as done here. You will also want to grab a reward signal, which will probably be the most challenging part, because you will have to compute it from screenshots since you don't have access to the game internals (in fact you could, because the game is open source, but I assume you don't want to go down that path). If there is something like a score counter at a fixed position, I suggest you capture it and read the numbers individually with the 1-NN algorithm (this is done in the unsupported "TrackMania Nations Forever" versions of the tmrl environment, by the way, if you need help doing that; see the sketch below). Then you will need to input controls. I see that osu! is controlled with the mouse and keyboard; there are several solutions for that: pywin32 I think, pyautogui, keyboard, and probably others. Once you have these basic building blocks, it will be fairly straightforward to use the rtgym API to build your real-time Gym environment.
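For the score-reading step, a minimal 1-NN digit reader in plain NumPy; the template file and crop coordinates are assumptions you would build yourself from osu! screenshots:

```python
import numpy as np

# templates[d] is a grayscale reference image of digit d, cropped once by
# hand from a screenshot; all crops must share the same height and width.
templates = np.load("digit_templates.npy")  # hypothetical file, shape (10, H, W)

def read_digit(crop: np.ndarray) -> int:
    """Classify one digit crop as its nearest template (1-NN, L2 distance)."""
    distances = np.linalg.norm(
        templates.reshape(10, -1) - crop.reshape(1, -1), axis=1
    )
    return int(np.argmin(distances))

def read_score(frame: np.ndarray, digit_boxes) -> int:
    """Read a multi-digit counter; digit_boxes are (top, left, h, w) tuples."""
    digits = [read_digit(frame[t:t + h, l:l + w]) for (t, l, h, w) in digit_boxes]
    return int("".join(map(str, digits)))
```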
- Can a replay buffer remove "done"?
Sometimes it is more than okay; it may even be necessary. For instance, in tmrl we do exactly that, because we are in a partially observable environment where we cannot tell whether the next state will be terminal, and where what we actually try to optimize is an infinite sum of discounted rewards.
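Concretely, the difference shows up in the temporal-difference target; a small sketch with generic names, not tmrl's actual code:

```python
def td_target_episodic(reward, done, next_q, gamma=0.99):
    # standard episodic target: no bootstrapping at terminal states
    return reward + gamma * (1.0 - done) * next_q

def td_target_infinite_horizon(reward, next_q, gamma=0.99):
    # "done" removed: always bootstrap, matching an infinite sum of
    # discounted rewards in a partially observable setting
    return reward + gamma * next_q
```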
- [P] DeepForSpeed: A self-driving car in Need For Speed Most Wanted with just a single ConvNet to play (inspired by nvidia)
Cool project! Shameless self-advertising here, but you can use vgamepad to control the game with a virtual gamepad instead of key presses, which enables analog policies. We do this in TrackMania :)
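For reference, a minimal vgamepad sketch mapping a continuous action to analog controls (the action values are illustrative):

```python
import vgamepad as vg

gamepad = vg.VX360Gamepad()  # virtual Xbox 360 controller

# Example analog action: steer half-left, 80% throttle, no brake.
gamepad.left_joystick_float(x_value_float=-0.5, y_value_float=0.0)  # steering
gamepad.right_trigger_float(value_float=0.8)                        # gas
gamepad.left_trigger_float(value_float=0.0)                         # brake
gamepad.update()  # send the report to the virtual device
```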
stable-baselines3
- Sim-to-real RL pipeline for open-source wheeled bipeds
The latest release (v3.0.0) of Upkie's software brings a functional sim-to-real reinforcement learning pipeline based on Stable Baselines3, with standard sim-to-real tricks. The pipeline trains on the Gymnasium environments distributed in upkie.envs (setup: pip install upkie) and is implemented in the PPO balancer. Here is a policy running on an Upkie: [video]
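As a rough sketch of what training with Stable Baselines3 looks like in such a pipeline (the registration call and environment ID below are placeholders, not necessarily what upkie.envs actually provides):

```python
import gymnasium as gym
import upkie.envs  # assumed to provide the Upkie Gymnasium environments
from stable_baselines3 import PPO

upkie.envs.register()  # assumption: check the upkie docs for the real call
env = gym.make("UpkieBalancing-v0")  # hypothetical ID; see upkie.envs

model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)
model.save("ppo_balancer")
```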
- [P] PettingZoo 1.24.0 has been released (including Stable-Baselines3 tutorials)
PettingZoo 1.24.0 is now live! This release includes Python 3.11 support, updated Chess and Hanabi environment versions, and many bugfixes, documentation updates and testing expansions. We are also very excited to announce 3 tutorials using Stable-Baselines3, and a full training script using CleanRL with TensorBoard and WandB.
- [Question] Why are there so few algorithms implemented in SB3?
I am wondering why there are so few algorithms in Stable Baselines3 (SB3, https://github.com/DLR-RM/stable-baselines3/tree/master). I was expecting algorithms like ICM, HIRO, DIAYN, ... Why are there no model-based, skill-chaining, or hierarchical-RL algorithms implemented there?
- Stable baselines! Where my people at?
Discord is more focused, and they have a page for people who want to contribute: https://github.com/DLR-RM/stable-baselines3/blob/master/CONTRIBUTING.md
- SB3 - NotImplementedError: Box([-1. -1. -8.], [1. 1. 8.], (3,), <class 'numpy.float32'>) observation space is not supported
I traced this error to the ReplayBuffer imported from `SB3`. This is the problem function:
- Exporting an A2C model created with stable-baselines3 to PyTorch
- Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
Have you ever wanted to use dm-control with stable-baselines3? Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs highly difficult and has resulted in a fractured ecosystem.
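As an example of what Shimmy enables, a short sketch of training SB3 on a dm-control task; the environment ID follows Shimmy's `dm_control/<domain>-<task>` naming, and the flattening wrapper is an assumption since SB3's MlpPolicy expects a flat Box observation rather than dm-control's dict:

```python
import gymnasium as gym
import shimmy  # noqa: F401 -- assumed to register the dm_control IDs with Gymnasium
from stable_baselines3 import SAC

# dm-control returns dict observations; flatten them into a single Box.
env = gym.wrappers.FlattenObservation(gym.make("dm_control/walker-walk-v0"))

model = SAC("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)
```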
- Stable-Baselines3 v1.8 Release
Changelog: https://github.com/DLR-RM/stable-baselines3/releases/tag/v1.8.0
- [P] Reinforcement learning evolutionary hyperparameter optimization - 10x speed up
Great project! One question, though: is there any reason why you are not using existing RL implementations, such as Stable Baselines, instead of creating your own?
- Is stable-baselines3 compatible with gymnasium/gymnasium-robotics?
What are some alternatives?
drqv2 - DrQ-v2: Improved Data-Augmented Reinforcement Learning
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
tm-dashboard - Dashboard for Trackmania displaying a bunch of vehicle information on screen.
stable-baselines - A fork of OpenAI Baselines, implementations of reinforcement learning algorithms
wandb - 🔥 A tool for visualizing and tracking your machine learning experiments. This repo contains the CLI and Python API.
Pytorch - Tensors and Dynamic neural networks in Python with strong GPU acceleration
softlearning - Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm.
cleanrl - High-quality single file implementation of Deep Reinforcement Learning algorithms with research-friendly features (PPO, DQN, C51, DDPG, TD3, SAC, PPG)
acme - A library of reinforcement learning components and agents
tianshou - An elegant PyTorch deep reinforcement learning library.
vgamepad - Virtual XBox360 and DualShock4 gamepads in python
Super-mario-bros-PPO-pytorch - Proximal Policy Optimization (PPO) algorithm for Super Mario Bros