tmrl

Reinforcement Learning for real-time applications - host of the TrackMania Roborace League (by trackmania-rl)

tmrl reviews and mentions

Posts with mentions or reviews of tmrl. The last one was on 2023-12-09.
  • Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
    4 projects | /r/reinforcementlearning | 9 Dec 2023
    Hi all! I'm implementing TQC with n-step learning in TrackMania (I forked the original repo from here: https://github.com/trackmania-rl/tmrl; my modified version is here: https://github.com/Pheoxis/AITrackmania/tree/main). It compiles, but I'm fairly sure my n-step learning implementation is incorrect, and as a beginner I don't know what I did wrong. Here's my code before implementing the n-step algorithm: https://github.com/Pheoxis/AITrackmania/blob/main/tmrl/custom/custom_algorithms.py. If anyone could check what I did wrong, I would be very grateful. I will also attach some plots from my last training run and the outputs of the printed lines (print.txt), in case they help :) If you need any additional information, feel free to ask.
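    For readers unfamiliar with the idea, here is a minimal, framework-agnostic sketch of an n-step return; the function name and how it would plug into the forked TQC code are an illustration, not the poster's implementation:

    ```python
    def n_step_return(rewards, dones, gamma, n):
        """Collapse up to n steps of a trajectory slice into one transition.

        rewards, dones: per-step rewards and terminal flags, starting at the
        transition being stored. Returns the discounted n-step reward, the
        discount to apply to the bootstrapped value, and the number of steps
        actually used (shorter if the episode ended early).
        """
        g, discount, steps = 0.0, 1.0, 0
        for r, d in zip(rewards[:n], dones[:n]):
            g += discount * r
            discount *= gamma
            steps += 1
            if d:  # episode ended early: nothing to bootstrap from
                discount = 0.0
                break
        # critic target: g + discount * Q(s_{t+steps}, a'), e.g. computed
        # from the truncated quantiles in TQC
        return g, discount, steps
    ```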
  • Training an unbeatable AI in Trackmania [video]
    1 project | news.ycombinator.com | 7 Oct 2023
  • Can you beat trackmania AI?
    1 project | /r/TrackMania | 17 Apr 2023
  • Python RL Environments on Windows
    1 project | /r/reinforcementlearning | 8 Mar 2023
    I don't know if it fits your needs: https://github.com/trackmania-rl/tmrl
  • New to reinforcement learning.
    3 projects | /r/reinforcementlearning | 7 Nov 2022
    Hi, if you are going to train a deep RL algorithm on a real robot and you are a beginner, I suggest you try out tmrl. It lets you run a readily available algorithm (Soft Actor-Critic) in real time on an actual video game (TrackMania), as a real-world-like proxy for all the concerns you will encounter on a real robot, and then to rather easily develop your own robot-learning pipeline from there. The repo has a huge tutorial exactly for this purpose.
  • AI Learns Mario Kart Wii (Rainbow DQN)
    1 project | /r/reinforcementlearning | 17 Jul 2022
    I see, and how did you handle the simulator and dynamics? Did you "step" the game, or did you capture screenshots at a constant time interval in real time? I ask because tmrl uses the second option in TrackMania, which makes the approach generalizable to all video games, including Mario Kart, but so far we have had no such success training CNNs with the Soft Actor-Critic family. Our setting is a bit harder because it uses continuous inputs, including gas and brake (I suppose you always send full gas?), and we don't use punishments for collisions or tricks of that kind, but still, if it works that well in your setting, I think it should work similarly in ours.
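    As an illustration of the "constant time interval" option described above, here is a minimal sketch of a real-time capture loop; grab_frame, send_action and policy are hypothetical placeholders for your own screen-capture, input-injection and inference code, not tmrl functions:

    ```python
    import time

    TIME_STEP = 0.05  # 20 Hz; an arbitrary choice for this sketch

    def real_time_loop(grab_frame, send_action, policy, n_steps):
        """Run a policy against a game that keeps running in real time:
        instead of "stepping" the simulator, capture a screenshot at a
        (nominally) constant interval and send the next action."""
        next_tick = time.monotonic()
        for _ in range(n_steps):
            obs = grab_frame()
            send_action(policy(obs))
            next_tick += TIME_STEP
            remaining = next_tick - time.monotonic()
            if remaining > 0:
                time.sleep(remaining)
            # if remaining <= 0, the loop overran its time step and the
            # constant-interval assumption is violated; worth logging.
    ```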
  • Have you used any good DRL library?
    5 projects | /r/reinforcementlearning | 7 Jun 2022
    I am very disappointed these guys don't cite tmrl :D
  • osu!
    1 project | /r/reinforcementlearning | 1 May 2022
    If you just want to make a Gym environment, focus on rtgym. You will want a way of retrieving observations, say raw images, which you can do with pywin32 as done here. You will also want to grab a reward signal, which will probably be the most challenging part, because you will have to compute it from screenshots: you don't have access to the game internals (in fact you could, because the game is open source, but I assume you don't want to go down that path). If there is something like a score counter at a fixed position, I suggest you capture it and read the digits individually with the 1-NN algorithm (this is done in the unsupported "TrackMania Nations Forever" version of the tmrl environment, by the way, if you need help doing that). Then you will need to input controls. I see that osu! is controlled with the mouse and keyboard; there are several solutions for that: pywin32 I think, pyautogui, keyboard, and probably others. Once you have these basic building blocks, it will be fairly straightforward to use the rtgym API to build your real-time Gym environment.
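    As an illustration of the 1-NN digit-reading idea mentioned above, here is a minimal sketch; the function, array layout, and the assumption that digits have already been cropped are for illustration only, not the actual tmrl TMNF code:

    ```python
    import numpy as np

    def read_score(digit_crops, templates, labels):
        """Read a number by classifying each cropped digit image with
        1-nearest-neighbor against a few labeled template images.

        digit_crops: list of HxW uint8 arrays, one per digit position.
        templates:   (N, H, W) array of reference digit crops.
        labels:      (N,) array giving the digit shown by each template.
        """
        flat = templates.reshape(len(templates), -1).astype(np.float32)
        digits = []
        for crop in digit_crops:
            v = crop.reshape(-1).astype(np.float32)
            nearest = np.argmin(np.linalg.norm(flat - v, axis=1))
            digits.append(int(labels[nearest]))
        return int("".join(map(str, digits)))
    ```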
  • Can a replay buffer remove "done"?
    2 projects | /r/reinforcementlearning | 1 Apr 2022
    Sometimes it is more than okay; it may even be necessary. For instance, in tmrl we do exactly that, because we are in a partially observable environment where we cannot tell whether the next state will be terminal, and because what we actually try to optimize is an infinite sum of discounted rewards.
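    To make the distinction concrete, here is a small sketch contrasting the usual episodic TD target with the "done"-free target described above (plain NumPy for illustration, not tmrl's actual code):

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    r = rng.normal(size=32)             # batch of rewards from the buffer
    q_next = rng.normal(size=32)        # critic estimate at the next state
    done = rng.integers(0, 2, size=32)  # terminal flags (if stored at all)
    gamma = 0.99

    # Usual episodic TD target: stop bootstrapping at terminal states.
    target_episodic = r + gamma * (1.0 - done) * q_next

    # Target without "done": always bootstrap, matching an infinite sum of
    # discounted rewards when terminality cannot be observed reliably.
    target_infinite = r + gamma * q_next
    ```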
  • [P] DeepForSpeed: A self driving car in Need For Speed Most Wanted with just a single ConvNet to play ( inspired by nvidia )
    4 projects | /r/MachineLearning | 19 Mar 2022
    Cool project. Shameless self-advertising here, but you can use vgamepad to control the game with a virtual gamepad instead of key presses, which enables analog policies. We do this in TrackMania :)
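    For reference, here is a minimal sketch of driving an emulated Xbox 360 controller with vgamepad (Windows-only); the steer/gas/brake mapping is an assumption for illustration, not tmrl's actual action mapping:

    ```python
    import vgamepad as vg  # requires the ViGEmBus driver on Windows

    gamepad = vg.VX360Gamepad()  # emulated Xbox 360 controller

    def send_analog_action(steer, gas, brake):
        """Map a policy's continuous outputs in [-1, 1] to analog inputs."""
        gamepad.left_joystick_float(x_value_float=steer, y_value_float=0.0)
        gamepad.right_trigger_float(value_float=max(gas, 0.0))   # throttle
        gamepad.left_trigger_float(value_float=max(brake, 0.0))  # brake
        gamepad.update()  # push the new state to the virtual device

    send_analog_action(steer=0.3, gas=1.0, brake=0.0)
    ```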

Stats

Basic tmrl repo stats
Mentions: 11
Stars: 422
Activity: 6.3
Last commit: 2 days ago

trackmania-rl/tmrl is an open-source project licensed under the MIT License, which is an OSI-approved license.

The primary programming language of tmrl is Python.
