holodeck vs stable-baselines3-contrib
| | holodeck | stable-baselines3-contrib |
|---|---|---|
| Mentions | 1 | 6 |
| Stars | 564 | 422 |
| Growth | 0.0% | 6.7% |
| Activity | 0.0 | 6.6 |
| Latest commit | about 2 years ago | 18 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars a project has on GitHub. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
holodeck
- [P] Doing a clone of Rocket League for AI experiments. Trained an agent to air dribble the ball.
Tangentially related, but people interested in game engines for RL should check out Holodeck built on Unreal https://github.com/byu-pccl/holodeck
stable-baselines3-contrib
- Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm.
See sb3_contrib/tqc/tqc.py: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/blob/master/sb3_contrib/tqc/tqc.py
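The thread asks how to add n-step returns to TQC, which, like SAC, bootstraps its critic target from a single step in sb3-contrib. As a rough illustration of the idea (not sb3-contrib code; names are ours), the n-step target folds the next n rewards into the bootstrap value:

```python
def n_step_return(rewards, bootstrap_value, gamma=0.99):
    """G = r_0 + gamma*r_1 + ... + gamma^(n-1)*r_(n-1) + gamma^n * V(s_n),
    computed backwards over the n collected rewards."""
    g = bootstrap_value
    for r in reversed(rewards):
        g = r + gamma * g
    return g
```

Swapping the 1-step target for this return is the usual way n-step learning is bolted onto an off-policy critic; the off-policyness of the intermediate actions is typically ignored for small n.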
- Understanding Action Masking in RLlib
Here's a theoretical overview and an implementation of action masking for PPO.
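The core trick in action masking (roughly what sb3-contrib's MaskablePPO does under the hood) is to push the logits of invalid actions to negative infinity so they receive zero probability after the softmax. A minimal NumPy sketch, not the library's implementation:

```python
import numpy as np

def masked_softmax(logits, mask):
    """Zero out invalid actions by forcing their logits to -inf before softmax."""
    masked = np.where(mask, logits, -np.inf)
    z = masked - masked.max()  # subtract max for numerical stability
    e = np.exp(z)              # exp(-inf) == 0, so masked actions drop out
    return e / e.sum()

probs = masked_softmax(np.array([1.0, 2.0, 3.0]), np.array([True, False, True]))
```

Because the gradient of a zero-probability action is also zero, the policy never receives a learning signal for moves it was never allowed to take.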
- PPO rollout buffer for turn-based two-player game with varying turn lengths
Simplified version of rollout collection (adapted from ppo_mask.py line 282):
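A minimal sketch of the per-player bookkeeping such a rollout needs (illustrative only, not the ppo_mask.py code): each player's transitions go into their own buffer, and reward keeps accruing to a player's most recent action until that player moves again, which handles turns of varying length.

```python
from collections import defaultdict

class TwoPlayerRollout:
    """Buffer transitions per player; reward accrues to a player's
    latest action until their next turn."""

    def __init__(self):
        self.steps = defaultdict(list)  # player -> list of [obs, action, reward]
        self.pending = {}               # player -> index of step awaiting reward

    def add(self, player, obs, action):
        self.steps[player].append([obs, action, 0.0])
        self.pending[player] = len(self.steps[player]) - 1

    def reward(self, player, r):
        if player in self.pending:
            self.steps[player][self.pending[player]][2] += r
```

At episode end each player's buffer is a standard single-agent trajectory, so advantages and returns can be computed per player as usual.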
- GitHub Copilot: your AI pair programmer
Transformers (GPT-3) aren't quite _supervised_, but they do require valid samples.
Agree 100% with RL being the path forward. You probably have already seen ( https://venturebeat.com/2021/06/09/deepmind-says-reinforceme... ). Personally I'm really stoked for this https://github.com/Stable-Baselines-Team/stable-baselines3-c... , which will make it a lot easier for rubes like me to use RL.
- [P] Stable-Baselines3 v1.0 - Reliable implementations of RL algorithms
But as we already have vanilla DQN and QR-DQN (in our contrib repo: https://github.com/Stable-Baselines-Team/stable-baselines3-contrib ) I think it is already a good start for off-policy discrete action algorithms. (QR-DQN is usually competitive vs DQN+extensions)
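For context on QR-DQN: instead of a single Q-value it regresses N quantiles of the return distribution, trained with the quantile Huber loss. An illustrative NumPy version, simplified to a scalar target (the real loss is computed pairwise over target quantiles):

```python
import numpy as np

def quantile_huber_loss(pred_quantiles, target, kappa=1.0):
    """Asymmetric Huber loss weighted by |tau - 1{u < 0}|, as in QR-DQN."""
    n = len(pred_quantiles)
    taus = (np.arange(n) + 0.5) / n  # quantile midpoints tau_i
    u = target - pred_quantiles      # TD error per quantile
    huber = np.where(np.abs(u) <= kappa,
                     0.5 * u ** 2,
                     kappa * (np.abs(u) - 0.5 * kappa))
    return np.mean(np.abs(taus - (u < 0)) * huber)
```

The asymmetric weight penalizes overestimation for low quantiles and underestimation for high ones, which is what pushes each output toward its assigned quantile of the return distribution.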
What are some alternatives?
MATLAB-Simulink-Challenge-Project-Hub - This MATLAB and Simulink Challenge Project Hub contains a list of research and design project ideas. These projects will help you gain practical experience and insight into technology trends and industry directions.
muzero-general - MuZero
habitat-api - A modular high-level library to train embodied AI agents across a variety of tasks, environments, and simulators. [Moved to: https://github.com/facebookresearch/habitat-lab]
TabNine - AI Code Completions
habitat-lab - A modular high-level library to train embodied AI agents across a variety of tasks and environments.
stable-baselines3-c
ue4-docker - Windows and Linux containers for Unreal Engine 4
copilot-cli - The AWS Copilot CLI is a tool for developers to build, release and operate production ready containerized applications on AWS App Runner or Amazon ECS on AWS Fargate.
ml-agents - The Unity Machine Learning Agents Toolkit (ML-Agents) is an open-source project that enables games and simulations to serve as environments for training intelligent agents using deep reinforcement learning and imitation learning.
rl-baselines3-zoo - A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
Autonomous-Ai-drone-scripts - State of the art autonomous navigation scripts using Ai, Computer Vision, Lidar and GPS to control an arducopter based quad copter.
dreamerv2 - Mastering Atari with Discrete World Models