Top 20 Python rl Projects
-
tianshou
Project mention: Is it better to not use the Target Update Frequency in Double DQN or depends on the application? | /r/reinforcementlearning | 2023-07-05
The tianshou implementation I found at https://github.com/thu-ml/tianshou/blob/master/tianshou/policy/modelfree/dqn.py is DQN by default.
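The thread above is about when a separate target network helps. As background (a minimal pure-Python sketch, not tianshou's code), the target-update-frequency hyperparameter controls how often the lagging target "network" is hard-synced to the online one:

```python
# Minimal sketch of the periodic ("hard") target-network sync that a
# target-update-frequency hyperparameter controls in DQN-style agents.
# Pure-Python stand-in: "networks" are just parameter dicts.

def hard_update(target_params, online_params):
    """Copy the online parameters into the target network."""
    target_params.update(online_params)

def train_loop(steps, target_update_freq):
    online = {"w": 0.0}
    target = dict(online)
    syncs = []
    for step in range(1, steps + 1):
        online["w"] += 1.0  # stand-in for a gradient update
        if step % target_update_freq == 0:
            hard_update(target, online)
            syncs.append(step)
    return target["w"], syncs

final_target_w, sync_steps = train_loop(steps=10, target_update_freq=4)
print(final_target_w, sync_steps)  # target lags online: synced at steps 4 and 8
```

Between syncs the target stays frozen, which stabilizes the bootstrapped Q-targets; Double DQN changes which network selects vs. evaluates the action, but the sync schedule sketched here is the same idea.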
-
muzero-general
Project mention: Open source rules engine for Magic: The Gathering | news.ycombinator.com | 2023-12-14
I went looking for MuZero implementations in order to see how, exactly, they interact with the game space. Based on this one, which had the most stars in the muzero topic, it appears that it needs to be able to discern legal next steps from the current game state https://github.com/werner-duvaud/muzero-general/blob/master/...
So, I guess one could MuZero it for the cards Forge has implemented, but I believe it's a bit of a chicken-and-egg problem with a "free text" game like M:TG -- in order to train, one would need to know the legal steps for any random game state, but in order to have legal steps, one would need to be able to read and interpret English rules and card text.
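The key requirement the commenter found is that the agent must be told which moves are legal in the current state. A hypothetical sketch of why (illustrative names, not muzero-general's API): the network's policy over a fixed action space is masked to legal actions before a move is picked.

```python
# Why MuZero-style agents need legal moves from the game state: the prior
# distribution over a fixed action space is masked to legal actions only,
# then renormalized, before search/action selection.

def masked_policy(priors, legal_actions):
    """Zero out illegal actions and renormalize the prior distribution."""
    masked = [p if a in legal_actions else 0.0 for a, p in enumerate(priors)]
    total = sum(masked)
    if total == 0.0:
        # degenerate case: fall back to uniform over the legal moves
        return [1.0 / len(legal_actions) if a in legal_actions else 0.0
                for a in range(len(priors))]
    return [p / total for p in masked]

priors = [0.5, 0.3, 0.2]  # network output over 3 possible actions
legal = {0, 2}            # action 1 is illegal in this game state
print(masked_policy(priors, legal))  # action 1 gets probability 0.0
```

For M:TG, producing that `legal` set for an arbitrary state is exactly the part that requires a full rules engine.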
-
rl-baselines3-zoo
A training framework for Stable Baselines3 reinforcement learning agents, with hyperparameter optimization and pre-trained agents included.
Project mention: Can't solve MountainCar-v0 with A2C algorithm (stable-baselines3) | /r/reinforcementlearning | 2023-06-27
I'm trying to solve the MountainCar-v0 environment from gymnasium with the A2C algorithm and the agent doesn't find a solution. I checked this, so I added import stable_baselines3.common.sb2_compat.rmsprop_tf_like as RMSpropTFLike. I also checked rl-baselines3-zoo for the hyperparameter tuning. So my code is:
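(The poster's code snippet is not included in the excerpt.) As background on why MountainCar-v0 is hard for untuned A2C: the reward is -1 every step, so returns barely differentiate trajectories until the goal is ever reached. A minimal pure-Python sketch of the return/advantage computation at A2C's core (not Stable-Baselines3's implementation):

```python
# A2C's core learning signal: advantage_t = (discounted return_t) - V(s_t).
# With MountainCar-v0's constant -1 reward, all returns look alike until
# the car first reaches the goal, which is why exploration/tuning matters.

def discounted_returns(rewards, gamma=0.99):
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + gamma * g
        returns.append(g)
    return list(reversed(returns))

def advantages(rewards, values, gamma=0.99):
    return [g - v for g, v in zip(discounted_returns(rewards, gamma), values)]

rewards = [-1.0, -1.0, -1.0]   # typical MountainCar step rewards
values = [-2.5, -1.8, -0.9]    # stand-in critic estimates
print(advantages(rewards, values, gamma=1.0))  # [-0.5, -0.2, -0.1]
```

The zoo's tuned hyperparameters for this environment exist precisely because the default settings rarely stumble onto the goal.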
-
rl-baselines-zoo
A collection of 100+ pre-trained RL agents using Stable Baselines, training and hyperparameter optimization included.
-
Papers-in-100-Lines-of-Code
Project mention: How do I run this code from Papers in 100 lines of code? | /r/NeuralRadianceFields | 2023-09-22
I wanted to try some code written by Maxime Vandegar https://github.com/MaximeVandegar/Papers-in-100-Lines-of-Code/tree/main/KiloNeRF_Speeding_up_Neural_Radiance_Fields_with_Thousands_of_Tiny_MLPs
-
neptune-client
Project mention: Show HN: A gallery of dev tool marketing examples | news.ycombinator.com | 2023-10-07
Hi, I am Jakub. I run marketing at a dev tool startup https://neptune.ai/ and I share learnings on dev tool marketing on my blog https://www.developermarkepear.com/.
Whenever I'd start a new marketing project, I found myself going over a list of 20+ companies I knew had done something well, to "copy-paste" their approach as a baseline (think Tailscale, DigitalOcean, Vercel, Algolia, CircleCI, Supabase, PostHog, Auth0).
So for the past year and a half, I've been screenshotting examples of how companies that are good at dev marketing do things like pricing, landing page design, ads, videos, and blog conversion ideas. And for each example I added a note as to why I thought it was good.
Now it is ~140 examples organized by tags, so you can browse all of them or get examples for a particular topic.
Hope it is helpful to some dev tool founders and marketers in here.
wdyt?
Also, I am always looking for new companies/marketing ideas to add to this, so if you’d like to share good examples I’d really appreciate it.
-
stable-baselines3-contrib
Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code
Project mention: Problem with Truncated Quantile Critics (TQC) and n-step learning algorithm. | /r/reinforcementlearning | 2023-12-09
# https://github.com/Stable-Baselines-Team/stable-baselines3-contrib/blob/master/sb3_contrib/tqc/tqc.py :
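(The poster's snippet is cut off in the excerpt.) As background on what the thread is trying to add to TQC: an n-step method bootstraps from the value estimate n steps ahead instead of one. A minimal illustrative sketch (not sb3-contrib's implementation):

```python
# Sketch of an n-step TD target: R_t = sum_k gamma^k * r_{t+k}
#                                      + gamma^n * V(s_{t+n}),
# where n is implied by len(rewards). Folding from the back applies
# one discount factor per step automatically.

def n_step_target(rewards, bootstrap_value, gamma=0.99):
    target = bootstrap_value
    for r in reversed(rewards):
        target = r + gamma * target
    return target

# 3-step target with gamma = 0.5 and V(s_{t+3}) = 8.0:
print(n_step_target([1.0, 2.0, 4.0], bootstrap_value=8.0, gamma=0.5))
# = 1 + 0.5*(2 + 0.5*(4 + 0.5*8)) = 4.0
```

With `len(rewards) == 1` this reduces to the ordinary one-step TD target that TQC uses by default.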
-
skrl
Modular reinforcement learning library (on PyTorch and JAX) with support for NVIDIA Isaac Gym, Isaac Orbit, and Omniverse Isaac Gym
-
learning-to-drive-in-5-minutes
Implementation of a reinforcement learning approach that makes a car learn to drive smoothly in minutes
-
stable-baselines
Mirror of Stable-Baselines: a fork of OpenAI Baselines, implementations of reinforcement learning algorithms (by Stable-Baselines-Team)
-
pytorch-learn-reinforcement-learning
A collection of various RL algorithms, such as policy gradients, DQN, and PPO. The goal of this repo is to be a go-to resource for learning about RL: how to visualize, debug, and solve RL problems. I've additionally included playground.py for learning more about OpenAI Gym, etc.
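For the DQN-style algorithms collected in repos like this one, the standard exploration rule is epsilon-greedy action selection. A minimal sketch (illustrative names, not this repo's code):

```python
# Epsilon-greedy: with probability epsilon take a uniformly random action
# (explore), otherwise take the action with the highest Q-value (exploit).
import random

def epsilon_greedy(q_values, epsilon, rng=random):
    if rng.random() < epsilon:
        return rng.randrange(len(q_values))
    return max(range(len(q_values)), key=lambda a: q_values[a])

q = [0.1, 0.9, 0.4]
print(epsilon_greedy(q, epsilon=0.0))  # 1: pure exploitation picks the argmax
```

In practice epsilon is annealed from ~1.0 toward a small floor over training, so the agent explores early and exploits late.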
-
Stochastic-muzero
PyTorch implementation of Stochastic MuZero for Gym environments. This algorithm supports a wide range of action and observation spaces, both discrete and continuous.
-
Note
Machine learning library that makes parallel and distributed training easy to implement. The Note.neuralnetwork.tf package includes Llama2, CLIP, ViT, ConvNeXt, SwiftFormer, etc.; models built with Note are compatible with TensorFlow and can be trained with TensorFlow. (by NoteDance)
-
-
Muzero-unplugged
PyTorch implementation of MuZero Unplugged for Gym environments. This algorithm supports a wide range of action and observation spaces, both discrete and continuous.
https://github.com/DHDev0/Muzero-unplugged
Gym is now Gymnasium, and it has support for additional environments like MuJoCo:
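(The code that followed this comment is cut off in the excerpt.) One practical consequence of the Gym-to-Gymnasium migration is the step API: `step()` returns a 5-tuple `(obs, reward, terminated, truncated, info)` instead of classic Gym's 4-tuple with a single `done`. A toy environment following that call shape (not a real Gymnasium env, just the convention):

```python
# Gymnasium-style step() convention: terminated means the task itself
# ended (goal/failure); truncated means an external cutoff (time limit).

class ToyCountdownEnv:
    def __init__(self, horizon=3):
        self.horizon = horizon

    def reset(self, seed=None):
        self.t = 0
        return self.t, {}                    # (observation, info)

    def step(self, action):
        self.t += 1
        terminated = self.t >= self.horizon  # task finished
        truncated = False                    # no time-limit cutoff here
        return self.t, -1.0, terminated, truncated, {}

env = ToyCountdownEnv()
obs, info = env.reset()
done = False
while not done:
    obs, reward, terminated, truncated, info = env.step(0)
    done = terminated or truncated
print(obs)  # 3
```

Splitting `done` into `terminated`/`truncated` matters for RL code: bootstrapping the value of the next state is correct on truncation but not on true termination.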
-
Muzero
PyTorch implementation of MuZero for Gym environments. It supports any Discrete, Box, and Box2D configuration for the action space and observation space. (by DHDev0)
-
Python rl related posts
- Can't solve MountainCar-v0 with A2C algorithm (stable-baselines3)
- Stable-Baselines3 v2.0: Gymnasium Support
- Understanding Action Masking in RLlib
- Fast and hackable frameworks for RL research
- [Q]Official seed_rl repo is archived.. any alternative seed_rl style drl repo??
- RL review
- Agent trains great with PPO but terrible with SAC --> Advice for Hyperparameters
-
17 Apr 2024
Index
What are some of the best open-source rl projects in Python? This list will help you:
| # | Project | Stars |
|---|---|---|
| 1 | tianshou | 7,356 |
| 2 | muzero-general | 2,372 |
| 3 | rl-baselines3-zoo | 1,764 |
| 4 | rl-baselines-zoo | 1,106 |
| 5 | Papers-in-100-Lines-of-Code | 564 |
| 6 | neptune-client | 526 |
| 7 | stable-baselines3-contrib | 421 |
| 8 | skrl | 391 |
| 9 | learning-to-drive-in-5-minutes | 277 |
| 10 | stable-baselines | 277 |
| 11 | Pytorch-PCGrad | 265 |
| 12 | pytorch-learn-reinforcement-learning | 139 |
| 13 | DeepBeerInventory-RL | 72 |
| 14 | Stochastic-muzero | 40 |
| 15 | Note | 33 |
| 16 | robot-gym | 28 |
| 17 | RayEnvWrapper | 22 |
| 18 | Muzero-unplugged | 17 |
| 19 | Muzero | 14 |
| 20 | encode-attend-navigate-pytorch | 7 |