| | softlearning | stable-baselines3-contrib |
|---|---|---|
| Mentions | 4 | 6 |
| Stars | 1,166 | 431 |
| Growth | 2.4% | 4.9% |
| Activity | 0.0 | 6.7 |
| Last commit | 6 months ago | 10 days ago |
| Language | Python | Python |
| License | GNU General Public License v3.0 or later | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
softlearning
-
Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
# see https://github.com/rail-berkeley/softlearning/issues/60
-
Infinite Horizon problem with SAC and custom environment
Found relevant code at https://github.com/rail-berkeley/softlearning
-
SAC: Enforcing Action Bounds formula derivation
Code for https://arxiv.org/abs/1812.05905 found: https://github.com/rail-berkeley/softlearning
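As a quick orientation on the derivation the post asks about (not a substitute for the appendix of the linked paper): SAC samples an unbounded Gaussian action $u \sim \mu(\cdot \mid s)$ and squashes it with $a = \tanh(u)$ to enforce the action bounds. The change-of-variables formula then corrects the log-density:

```latex
\log \pi(a \mid s) = \log \mu(u \mid s) - \sum_{i=1}^{D} \log\!\left(1 - \tanh^{2}(u_i)\right)
```

The correction term is the log-determinant of the Jacobian of the elementwise $\tanh$, which is diagonal with entries $1 - \tanh^{2}(u_i)$.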
-
DDPG not solving MountainCarContinuous
You may want to read about a similar issue with SAC (https://github.com/rail-berkeley/softlearning/issues/76). The solution: use large OU noise, or use another type of exploration such as gSDE.
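The "large OU noise" fix suggested above refers to the Ornstein-Uhlenbeck process commonly used for DDPG exploration. A minimal sketch (the class name and default parameters here are illustrative, not from either library):

```python
import numpy as np

class OUNoise:
    """Ornstein-Uhlenbeck process: temporally correlated exploration noise.
    A larger sigma widens exploration, which is the suggested fix for DDPG
    getting stuck on MountainCarContinuous."""

    def __init__(self, dim, mu=0.0, theta=0.15, sigma=0.5, dt=1e-2, seed=0):
        self.mu, self.theta, self.sigma, self.dt = mu, theta, sigma, dt
        self.rng = np.random.default_rng(seed)
        self.x = np.full(dim, mu, dtype=np.float64)

    def reset(self):
        # Restart the process at its mean at the beginning of each episode.
        self.x[:] = self.mu

    def sample(self):
        # x_{t+1} = x_t + theta*(mu - x_t)*dt + sigma*sqrt(dt)*N(0, I)
        self.x += (self.theta * (self.mu - self.x) * self.dt
                   + self.sigma * np.sqrt(self.dt)
                   * self.rng.standard_normal(self.x.shape))
        return self.x.copy()
```

The noise is added to the deterministic policy's action at each step; because consecutive samples are correlated, the agent pushes in one direction long enough to build momentum.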
stable-baselines3-contrib
-
Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
# https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/blob/master/sb3_contrib/tqc/tqc.py :
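For context on the n-step learning half of the question: the n-step target replaces the one-step bootstrap with a discounted sum of the next n rewards plus a bootstrap value. A minimal sketch (the function name is illustrative, not from sb3_contrib):

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """n-step return: G_t = sum_{k=0}^{n-1} gamma^k * r_{t+k}
    + gamma^n * V(s_{t+n}), computed by folding backwards."""
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

For example, `n_step_return([1.0, 1.0, 1.0], 10.0, gamma=0.5)` evaluates to `3.0` (i.e. `1 + 0.5 + 0.25 + 0.125 * 10`).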
-
Understanding Action Masking in RLlib
Here's a theoretical overview and an implementation of action masking for PPO.
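The core idea behind action masking, independent of any particular framework: before the softmax, invalid actions' logits are pushed to a large negative value so they receive near-zero probability and are never sampled. A minimal NumPy sketch (the function name is illustrative):

```python
import numpy as np

def masked_softmax(logits, mask):
    """Action masking: assign invalid actions a large negative logit so
    softmax gives them (near-)zero probability.
    `mask` holds 1 for valid actions and 0 for invalid ones."""
    masked = np.where(mask.astype(bool), logits, -1e8)
    z = masked - masked.max()  # shift for numerical stability
    e = np.exp(z)
    return e / e.sum()
```

Usage: `masked_softmax(np.array([1.0, 2.0, 0.5]), np.array([1, 0, 1]))` returns a distribution where the masked second action has essentially zero probability, so sampling can never select it.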
-
PPO rollout buffer for turn-based two-player game with varying turn lengths
Simplified version of rollout collection (adapted from ppo_mask.py line 282):
-
GitHub Copilot: your AI pair programmer
Transformers (GPT-3) aren't quite _supervised_, but they do require valid samples.
Agree 100% with RL being the path forward. You probably have already seen ( https://venturebeat.com/2021/06/09/deepmind-says-reinforceme... ). Personally I'm really stoked for this https://github.com/Stable-Baselines-Team/stable-baselines3-c... , which will make it a lot easier for rubes like me to use RL.
-
[P] Stable-Baselines3 v1.0 - Reliable implementations of RL algorithms
But as we already have vanilla DQN and QR-DQN (in our contrib repo: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ) I think it is already a good start for off-policy discrete action algorithms. (QR-DQN is usually competitive vs DQN+extensions)
What are some alternatives?
deep-RL-trading - playing idealized trading games with deep reinforcement learning
muzero-general - MuZero
Note - A machine learning library for easily implementing parallel and distributed training. The Note.neuralnetwork.tf package includes Llama2, Llama3, Gemma, CLIP, ViT, ConvNeXt, BEiT, Swin Transformer, Segformer, etc.; models built with Note are compatible with TensorFlow and can be trained with TensorFlow.
TabNine - AI Code Completions
tmrl - Reinforcement Learning for real-time applications - host of the TrackMania Roborace League
stable-baselines3-c
rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
copilot-cli - The AWS Copilot CLI is a tool for developers to build, release and operate production ready containerized applications on AWS App Runner or Amazon ECS on AWS Fargate.
LiDAR-Guide - LiDAR Guide
trax - Trax — Deep Learning with Clear Code and Speed
dreamerv2 - Mastering Atari with Discrete World Models