gym-md
MiniDungeons for OpenAI Gym (by ganyariya)
HighwayEnv
A minimalist environment for decision-making in autonomous driving (by Farama-Foundation)
| | gym-md | HighwayEnv |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 2 | 2,385 |
| Stars growth | - | 2.0% |
| Activity | 0.0 | 7.5 |
| Last commit | almost 2 years ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
The number of mentions indicates the total number of mentions that we've tracked plus the number of user-suggested alternatives.
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
gym-md
Posts with mentions or reviews of gym-md. We have used some of these posts to build our list of alternatives and similar projects.
- Publish Gym Environment as PIP Package?
  After more research I stumbled onto this repository: https://github.com/ganyariya/gym-md
HighwayEnv
Posts with mentions or reviews of HighwayEnv. We have used some of these posts to build our list of alternatives and similar projects. The last one was on 2022-02-19.
- Looking for a tutorial/blog post/codebase/anything that deals with highway-env (possibly the racetrack variant) with a DQN
  More or less what the title says. I have already tried this: https://github.com/Farama-Foundation/HighwayEnv/blob/master/scripts/sb3_racetracks_ppo.py, using a DQN from SB3 instead of the PPO, but the results weren't good. I'm open to any suggestion.
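For an attempt like the one described above, a minimal sketch of wiring Stable-Baselines3's DQN to highway-env might look as follows. The hyperparameters are illustrative assumptions, not tuned values, and the `DiscreteAction` config is an assumption needed because DQN requires a discrete action space while the racetrack variant uses continuous steering by default:

```python
# Hedged sketch: DQN on highway-env's racetrack variant with Stable-Baselines3.
# The values below are illustrative starting points, not a known-good recipe.
dqn_kwargs = {
    "policy": "MlpPolicy",
    "learning_rate": 5e-4,
    "buffer_size": 15_000,
    "learning_starts": 200,
    "batch_size": 32,
    "gamma": 0.9,                 # shorter horizon than the 0.99 default
    "train_freq": 1,
    "target_update_interval": 50,
    "exploration_fraction": 0.7,  # decay epsilon over most of training
}

def train(total_timesteps: int = 20_000):
    # Imports are kept local so the config above can be inspected without
    # gymnasium / highway-env / stable-baselines3 installed.
    import gymnasium as gym
    import highway_env  # noqa: F401  (importing registers the highway envs)
    from stable_baselines3 import DQN

    # Assumption: highway-env's DiscreteAction type discretizes the
    # continuous steering so DQN can be applied at all.
    env = gym.make("racetrack-v0",
                   config={"action": {"type": "DiscreteAction"}})
    model = DQN(env=env, verbose=1, **dqn_kwargs)
    model.learn(total_timesteps=total_timesteps)
    return model

if __name__ == "__main__":
    train()
```

If results stay poor, the usual levers are a longer exploration schedule, a larger replay buffer, and checking that the discretized action set is fine-grained enough to follow the track.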
- RecurrentPPO (SB3-contrib) learning for autonomous driving
  Hi everyone! I'm a complete newbie to DRL, so please forgive my lack of understanding of some things on here. I'm training a RecurrentPPO from SB3-contrib on E. Leurent's highway-env (https://github.com/eleurent/highway-env); I customized the action space to be more high-level. During training I get the desired behavioural outcome from the agent, but I noticed that some training metrics seem quite off with respect to the trends found online (especially the explained variance). I just wanted an opinion from some more experienced folks here! Can I fix this trend with hyperparameter tuning, or do I have to, e.g., modify the reward function? How can I improve the training? For any details, I'm always available. I'm sharing the TensorBoard plots obtained for RecurrentPPO.
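For reference on the metric mentioned above: Stable-Baselines3 logs explained variance as 1 − Var[returns − value predictions] / Var[returns]. Values near 1 mean the value function tracks the returns well; values near 0 or negative mean it predicts no better (or worse) than a constant, which often points at reward scaling or value-loss issues rather than the policy itself. A small self-contained sketch of the computation (the arrays are made-up numbers for illustration):

```python
import numpy as np

def explained_variance(y_pred: np.ndarray, y_true: np.ndarray) -> float:
    """1 - Var[y_true - y_pred] / Var[y_true].

    1.0 is a perfect fit; 0.0 means the predictions explain nothing beyond
    a constant; negative values are worse than predicting the mean.
    """
    var_y = np.var(y_true)
    if var_y == 0:
        return float("nan")  # undefined when the targets are constant
    return float(1 - np.var(y_true - y_pred) / var_y)

# Example with made-up returns and value predictions:
returns = np.array([1.0, 2.0, 3.0, 4.0])
values = np.array([1.1, 1.9, 3.2, 3.8])
print(explained_variance(values, returns))  # ~0.98, a close fit
```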
- Low Graphics Consuming Simulators for Self-Driving Cars
  This is an excellent one: https://github.com/eleurent/highway-env
What are some alternatives?
When comparing gym-md and HighwayEnv you can also consider the following projects:
MuJoCo_RL_UR5 - A MuJoCo/Gym environment for robot control using Reinforcement Learning. The task of agents in this environment is pixel-wise prediction of grasp success chances.
gym-pybullet-drones - PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control
deepdrive-zero - Top down 2D self-driving car simulator built for running experiments in minutes, not weeks
rocket-league-gym - A Gym-like environment for Reinforcement Learning in Rocket League
PythonRobotics - Python sample codes for robotics algorithms.
multirotor - Multicopter UAV simulation for control/RL experiments.