softlearning VS AITrackmania

Compare softlearning vs AITrackmania and see what are their differences.

softlearning

Softlearning is a reinforcement learning framework for training maximum entropy policies in continuous domains. Includes the official implementation of the Soft Actor-Critic algorithm. (by rail-berkeley)
|                | softlearning | AITrackmania |
|----------------|--------------|--------------|
| Mentions       | 4            | 1            |
| Stars          | 1,168        | 2            |
| Stars growth   | 0.9%         | -            |
| Activity       | 0.0          | 8.3          |
| Latest commit  | 6 months ago | 5 months ago |
| Language       | Python       | Python       |
| License        | GNU General Public License v3.0 or later | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.

softlearning

Posts with mentions or reviews of softlearning. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.

AITrackmania

Posts with mentions or reviews of AITrackmania. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2023-12-09.
  • Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
    4 projects | /r/reinforcementlearning | 9 Dec 2023
    Hi all! I'm implementing TQC with n-step learning in Trackmania (I forked the original repo from here: https://github.com/trackmania-rl/tmrl; my modified version is here: https://github.com/Pheoxis/AITrackmania/tree/main). It compiles, but I'm fairly sure I implemented n-step learning incorrectly, and as a beginner I don't know what I did wrong. Here's my code before implementing the n-step algorithm: https://github.com/Pheoxis/AITrackmania/blob/main/tmrl/custom/custom_algorithms.py. If anyone could check what I did wrong, I would be very grateful. I'll also attach some plots from my last training run and the output from printed lines (print.txt); maybe it will help :) If you need any additional information, feel free to ask.
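For context on what the poster is attempting: an n-step critic target sums the next n rewards and only then bootstraps from the value estimate, and the sum must be cut short at episode boundaries. Below is a minimal, self-contained sketch of that computation; it is not code from either repository, and the names (`n_step_targets`, `next_values`) are illustrative.

```python
def n_step_targets(rewards, dones, next_values, gamma=0.99, n=3):
    """Compute n-step return targets for a batch of consecutive transitions.

    For each index t:
        G_t = r_t + gamma * r_{t+1} + ... + gamma^(n-1) * r_{t+n-1}
              + gamma^n * V(s_{t+n})
    truncating the sum at episode boundaries (done flags), in which case
    no bootstrap term is added.

    rewards[k]     -- reward received at step k
    dones[k]       -- True if the episode terminated at step k
    next_values[k] -- critic's value estimate for the state after step k
    """
    T = len(rewards)
    targets = [0.0] * T
    for t in range(T):
        g, discount = 0.0, 1.0
        terminated = False
        last = t
        for k in range(t, min(t + n, T)):
            g += discount * rewards[k]
            last = k
            if dones[k]:
                terminated = True  # episode ended: drop the bootstrap
                break
            discount *= gamma
        if not terminated:
            # no terminal within n steps: bootstrap from the critic
            g += discount * next_values[last]
        targets[t] = g
    return targets
```

With `n=1` this reduces to the standard one-step TD target, which is a useful sanity check. Two frequent bugs in n-step implementations are exactly the ones this sketch guards against: forgetting to drop the bootstrap term when a `done` falls inside the n-step window, and discounting the bootstrap by `gamma` instead of `gamma**n`.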

What are some alternatives?

When comparing softlearning and AITrackmania you can also consider the following projects:

deep-RL-trading - playing idealized trading games with deep reinforcement learning

stable-baselines3-contrib - Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code

Note - Machine learning library that makes parallel and distributed training easy to implement. The Note.neuralnetwork.tf package includes Llama2, Llama3, Gemma, CLIP, ViT, ConvNeXt, BEiT, Swin Transformer, Segformer, etc.; models built with Note are compatible with TensorFlow and can be trained with TensorFlow.

tmrl - Reinforcement Learning for real-time applications - host of the TrackMania Roborace League

rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.

LiDAR-Guide - LiDAR Guide

trax - Trax — Deep Learning with Clear Code and Speed

awesome-deep-trading - List of awesome resources for machine learning-based algorithmic trading

senza - Experiments with drone control and reinforcement learning.