| | dm_control | myosuite |
|---|---|---|
| Mentions | 7 | 4 |
| Stars | 3,549 | 765 |
| Growth | 1.6% | 0.5% |
| Activity | 7.5 | 9.2 |
| Latest commit | 3 days ago | about 18 hours ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
dm_control
- Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
This includes single-agent Gymnasium wrappers for DM Control, DM Lab, Behavior Suite, Arcade Learning Environment, and OpenAI Gym V21 & V26. Multi-agent PettingZoo wrappers support DM Control Soccer, OpenSpiel, and Melting Pot. For more information, read the release notes.
Have you ever wanted to use dm-control with stable-baselines3? Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs very difficult and has resulted in a fractured ecosystem.
- Installing & Using MuJoCo 2.1.5 with OpenAI Gym
The DeepMind Control Suite is a good alternative to OpenAI Gym for continuous control tasks. It contains many of the environments present in Gym plus a few extra ones, and it also uses MuJoCo. I found the installation straightforward. Check out https://github.com/deepmind/dm_control
- Is there a way to get PPO-controlled agents to move a little more gracefully?
Do you know if this is implemented in code anywhere? I've been digging around in DeepMind's dm_control for the past few hours and I haven't found it. I'm not sure what I'm looking for either.
- [D] MuJoCo vs PyBullet? (esp. for custom environment)
If you're interested in using MuJoCo, I'd suggest checking out the dm_control package for Python bindings rather than interfacing with C++ directly. One current downside of MuJoCo is that you cannot dynamically add objects: the entire simulation is initialized and loaded from the MJCF/XML file.
- How to use MuJoCo from Python3
- Any beginner resources for RL in Robotics?
DeepMind's dm_control: https://github.com/deepmind/dm_control
myosuite
- MyoSuite: An embodied AI platform that unifies neural and motor intelligence
MyoSuite: A contact-rich simulation suite for musculoskeletal motor control
- Meta Researchers Introduce a New Embodied AI Platform, Called MyoSuite, That Applies Machine Learning (ML) to Biomechanical Control Problems by Unifying Motor and Neural Intelligence
- GitHub - facebookresearch/myosuite: MyoSuite is a collection of environments/tasks to be solved by musculoskeletal models simulated with the MuJoCo physics engine and wrapped in the OpenAI gym API.
What are some alternatives?
- gym - A toolkit for developing and comparing reinforcement learning algorithms.
- MuJoCo_RL_UR5 - A MuJoCo/Gym environment for robot control using reinforcement learning. The task of agents in this environment is pixel-wise prediction of grasp success chances.
- baselines - OpenAI Baselines: high-quality implementations of reinforcement learning algorithms.
- DI-engine - OpenDILab Decision AI Engine.
- IsaacGymEnvs - Isaac Gym reinforcement learning environments.
- Metaworld - Collections of robotics environments geared towards benchmarking multi-task and meta reinforcement learning.
- pytorch-a2c-ppo-acktr-gail - PyTorch implementations of Advantage Actor-Critic (A2C), Proximal Policy Optimization (PPO), ACKTR (a scalable trust-region method for deep RL using Kronecker-factored approximation), and Generative Adversarial Imitation Learning (GAIL).
- mujoco-py - MuJoCo is a physics engine for detailed, efficient rigid-body simulations with contacts; mujoco-py allows using MuJoCo from Python 3.
- Robotics Library (RL) - The Robotics Library (RL) is a self-contained C++ library for rigid body kinematics and dynamics, motion planning, and control.
- acme - A library of reinforcement learning components and agents.
- dreamerv2 - Mastering Atari with Discrete World Models.
- crafter - Benchmarking the Spectrum of Agent Capabilities.