softlearning vs AITrackmania

| | softlearning | AITrackmania |
|---|---|---|
| Mentions | 4 | 1 |
| Stars | 1,168 | 2 |
| Growth | 0.9% | - |
| Activity | 0.0 | 8.3 |
| Latest commit | 6 months ago | 5 months ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
softlearning
- Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
  See https://github.com/rail-berkeley/softlearning/issues/60
- Infinite Horizon problem with SAC and custom environment
  Relevant code found at https://github.com/rail-berkeley/softlearning, along with all the code implementations.
- SAC: Enforcing Action Bounds formula derivation
  Code for https://arxiv.org/abs/1812.05905 is at https://github.com/rail-berkeley/softlearning
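The derivation asked about above is the tanh change-of-variables correction from the SAC paper (arXiv:1812.05905, Appendix C): with a = tanh(u) and u drawn from a Gaussian, log π(a|s) = log N(u; μ, σ) − Σᵢ log(1 − tanh(uᵢ)²). A minimal NumPy sketch (function name and epsilon clamp are illustrative, not from the softlearning codebase):

```python
import numpy as np

def squashed_log_prob(mean, std, rng, eps=1e-6):
    """Sample a tanh-squashed Gaussian action and return its log-probability.

    Implements the SAC action-bound correction:
      log pi(a|s) = log N(u; mean, std) - sum_i log(1 - tanh(u_i)^2)
    where a = tanh(u) and u is the pre-squash Gaussian sample.
    """
    mean = np.asarray(mean, dtype=float)
    std = np.asarray(std, dtype=float)
    u = rng.normal(mean, std)                 # pre-squash Gaussian sample
    a = np.tanh(u)                            # action bounded to (-1, 1)
    # log-density of u under the diagonal Gaussian
    log_prob = np.sum(-0.5 * ((u - mean) / std) ** 2
                      - np.log(std) - 0.5 * np.log(2.0 * np.pi))
    # Jacobian correction for the tanh squashing (eps avoids log(0))
    log_prob -= np.sum(np.log(1.0 - a ** 2 + eps))
    return a, log_prob
```

The correction term is just the log-determinant of the Jacobian of tanh; without it the entropy term in the SAC objective is computed against the wrong density.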
- DDPG not solving MountainCarContinuous
  See the related SAC issue (https://github.com/rail-berkeley/softlearning/issues/76); suggested fix: use large OU noise, or another type of exploration such as gSDE.
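The "large OU noise" suggestion refers to an Ornstein-Uhlenbeck process, whose temporally correlated samples help a DDPG agent build up the momentum MountainCarContinuous requires. A minimal sketch (the parameter values here, including the large `sigma`, are illustrative assumptions, not taken from the linked issue):

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise.

    dx = theta * (mu - x) * dt + sigma * sqrt(dt) * N(0, 1)
    Successive samples are correlated, unlike i.i.d. Gaussian noise.
    """
    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.5, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.state = np.full(dim, mu, dtype=float)

    def reset(self):
        """Reset the process to its mean at the start of an episode."""
        self.state[:] = self.mu

    def sample(self):
        dx = (self.theta * (self.mu - self.state) * self.dt
              + self.sigma * np.sqrt(self.dt)
              * self.rng.normal(size=self.state.shape))
        self.state = self.state + dx
        return self.state
```

During rollout, the noise is added to the deterministic policy's action each step (`action = policy(obs) + noise.sample()`) and the process is reset between episodes.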
AITrackmania
- Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
  Hi all! I'm implementing TQC with n-step learning in Trackmania (I forked the original repo from https://github.com/trackmania-rl/tmrl; my modified version is at https://github.com/Pheoxis/AITrackmania/tree/main). It compiles, but I'm fairly sure my n-step learning implementation is incorrect, and as a beginner I can't tell what I did wrong. Here's my code before implementing the n-step algorithm: https://github.com/Pheoxis/AITrackmania/blob/main/tmrl/custom/custom_algorithms.py. If anyone could check what I did wrong, I would be very grateful. I'll also attach some plots from my last training run and the printed output (print.txt), in case that helps. If you need any additional information, feel free to ask.
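For reference, the core of n-step learning is collapsing n consecutive transitions into one: accumulate discounted rewards, then bootstrap from the n-th next state with discount γⁿ rather than γ. A minimal sketch (function name and tuple layout are illustrative assumptions, not from the poster's code):

```python
def make_n_step(transitions, n, gamma):
    """Collapse the first n transitions (obs, act, reward, next_obs, done)
    into a single n-step transition.

    Returns (obs, act, R, next_obs, done, discount) where
    R = sum_k gamma^k * r_k and discount = gamma^m, with m the number of
    steps actually accumulated (m < n if the episode terminates early).
    The TD target is then: R + discount * (1 - done) * Q(next_obs, a').
    """
    obs, act = transitions[0][0], transitions[0][1]
    next_obs, done = transitions[0][3], transitions[0][4]
    R, discount = 0.0, 1.0
    for (_, _, r, nxt, d) in transitions[:n]:
        R += discount * r
        discount *= gamma
        next_obs, done = nxt, d
        if d:  # episode ended before n steps: stop accumulating
            break
    return obs, act, R, next_obs, done, discount
```

A common bug in n-step implementations is continuing to bootstrap with γ instead of γⁿ, or not truncating the sum when the episode terminates mid-window; both silently bias the critic targets.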
What are some alternatives?
deep-RL-trading - playing idealized trading games with deep reinforcement learning
stable-baselines3-contrib - Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code
Note - A machine learning library that makes parallel and distributed training easy to implement. The Note.neuralnetwork.tf package includes Llama2, Llama3, Gemma, CLIP, ViT, ConvNeXt, BEiT, Swin Transformer, Segformer, etc.; models built with Note are compatible with TensorFlow and can be trained with TensorFlow.
tmrl - Reinforcement Learning for real-time applications - host of the TrackMania Roborace League
rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
LiDAR-Guide - LiDAR Guide
trax - Trax — Deep Learning with Clear Code and Speed
awesome-deep-trading - List of awesome resources for machine learning-based algorithmic trading
senza - Experiments with drone control and reinforcement learning.