MuJoCo_RL_UR5 vs HighwayEnv
| | MuJoCo_RL_UR5 | HighwayEnv |
|---|---|---|
| Mentions | 1 | 3 |
| Stars | 303 | 2,361 |
| Growth | - | 1.9% |
| Activity | 0.0 | 7.5 |
| Latest commit | over 1 year ago | 8 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars: the number of stars that a project has on GitHub. Growth: month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
MuJoCo_RL_UR5
-
Simulating robotic arm for object manipulation
If you want to use MuJoCo, you can check this repository out.
HighwayEnv
-
Looking for a tutorial/blog post/codebase/anything that deals with highway-env (possibly the racetrack variant) with a DQN
More or less what the title says. I have already tried this script, https://github.com/Farama-Foundation/HighwayEnv/blob/master/scripts/sb3_racetracks_ppo.py, using a DQN from SB3 instead of the PPO, but the results weren't good. I'm open to any suggestions.
-
RecurrentPPO (SB3-contrib) learning for autonomous driving
Hi everyone! I'm a complete newbie to DRL, so please forgive my lack of understanding of some things here. I'm training RecurrentPPO from SB3-contrib on E. Leurent's highway-env [https://github.com/eleurent/highway-env] (I customized the actions to be more high-level). During training I get the desired behavioural outcome from the agent, but I noticed that some of the model's training metrics seem quite off compared to the trends found online (especially the explained variance). I just wanted an opinion from some more experienced folks here! Can I fix this trend with hyperparameter tuning, or do I have to, e.g., modify the reward function? How can I improve the training? I'm available for any details. I'm sharing the TensorBoard plots obtained for RecurrentPPO.
-
Low Graphics Consuming Simulators for Self-Driving Cars
This is an excellent one https://github.com/eleurent/highway-env
What are some alternatives?
IsaacGymEnvs - Isaac Gym Reinforcement Learning Environments
gym-pybullet-drones - PyBullet Gymnasium environments for single and multi-agent reinforcement learning of quadcopter control
Unity-Robotics-Hub - Central repository for tools, tutorials, resources, and documentation for robotics simulation in Unity.
deepdrive-zero - Top down 2D self-driving car simulator built for running experiments in minutes, not weeks
myosuite - MyoSuite is a collection of environments/tasks to be solved by musculoskeletal models simulated with the MuJoCo physics engine and wrapped in the OpenAI gym API.
gym-md - MiniDungeons for OpenAI Gym
robo-gym - An open source toolkit for Distributed Deep Reinforcement Learning on real and simulated robots.
PythonRobotics - Python sample codes for robotics algorithms.
Wave-Defense-Learning-Environment - A video game made with PyGame turned into an OpenAI Gym learning environment for reinforcement learning agents.
rocket-league-gym - A Gym-like environment for Reinforcement Learning in Rocket League
multirotor - Multicopter UAV simulation for control/RL experiments.