acme vs dm_control
| | acme | dm_control |
|---|---|---|
| Mentions | 11 | 7 |
| Stars | 3,373 | 3,540 |
| Growth | 1.4% | 2.5% |
| Activity | 6.0 | 7.5 |
| Latest commit | 2 days ago | 6 days ago |
| Language | Python | Python |
| License | Apache License 2.0 | Apache License 2.0 |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
acme
- Fast and hackable frameworks for RL research
I'm tired of having my 200m frames of Atari take 5 days to run with dopamine, so I'm looking for another framework to use. I haven't been able to find one that's fast and hackable, preferably distributed or with vectorized environments. Anybody have suggestions? seed-rl seems promising but is archived (and in TF2). sample-factory seems super fast but to the best of my knowledge doesn't work with replay buffers. I've been trying to get acme working but documentation is sparse and many of the features are broken.
- How much of a MuJoCo simulation or real-life robot can you train on a 3090?
I'm training a few algorithms from DeepMind's Acme library on some MuJoCo models and I'm wondering how long this will take to train and what it's going to do to my electric bill. Is a 3090 or two enough to train something to keep its balance, or do a task, or do I need to wait for the 8090 to come out?
- Recommendations of framework/library for MARL
Recently dm-acme also added support for multi-agent environments. Acme: https://github.com/deepmind/acme
- Have you used any good DRL library?
- Is there a way to get PPO controlled agents to move a little more gracefully?
- Worthwhile to convert custom env to be dm_env compatible?
Can anyone speak to their experience using acme (https://github.com/deepmind/acme) and by extension dm_env (https://github.com/deepmind/dm_env)? I'm wondering if it would be worthwhile for me to invest the time into converting my custom environment (which loosely follows the standard RL setup) over to this format.
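For what it's worth, the dm_env interface is small: `reset`, `step`, and a pair of spec methods. Below is a minimal conversion sketch; the state, dynamics, reward, and episode limit are hypothetical placeholders, not anything from the post.

```python
# A minimal sketch of a dm_env-compatible environment. Everything
# about the simulation itself here is a made-up placeholder.
import numpy as np
import dm_env
from dm_env import specs


class MyDmEnv(dm_env.Environment):
    """Exposes a custom simulation through the dm_env.Environment API."""

    def __init__(self):
        self._state = np.zeros(4, dtype=np.float32)  # hypothetical state
        self._step_count = 0

    def reset(self) -> dm_env.TimeStep:
        self._state = np.zeros(4, dtype=np.float32)
        self._step_count = 0
        return dm_env.restart(self._state)  # TimeStep with StepType.FIRST

    def step(self, action) -> dm_env.TimeStep:
        # Replace with your real dynamics and reward.
        self._state = self._state + np.asarray(action, dtype=np.float32)
        self._step_count += 1
        reward = -float(np.linalg.norm(self._state))
        if self._step_count >= 100:  # hypothetical episode limit
            return dm_env.termination(reward, self._state)
        return dm_env.transition(reward, self._state)

    def observation_spec(self) -> specs.Array:
        return specs.Array(shape=(4,), dtype=np.float32, name="observation")

    def action_spec(self) -> specs.BoundedArray:
        return specs.BoundedArray(
            shape=(4,), dtype=np.float32, minimum=-1.0, maximum=1.0,
            name="action")
```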
- [D] Physics and Reinforcement Learning - Discussion of DeepMind's work
acme/acme/agents/tf/mpo at master · deepmind/acme · GitHub
- Applied resources in Pytorch?
- deepmind acme compatible with windows?
After installing it in a clean env, I tried to run the example provided for solving the gym cartpole env: https://github.com/deepmind/acme/blob/master/examples/control/run_d4pg_gym.py
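A likely culprit on Windows: Acme's Reverb and Launchpad dependencies only publish Linux wheels, so the examples are normally run on Linux or under WSL. For reference, the environment-construction pattern the gym examples follow looks roughly like this sketch (the task name is illustrative):

```python
# A sketch of the pattern Acme's gym examples use: wrap a Gym env so it
# exposes the dm_env interface Acme expects. Note that Acme's Reverb and
# Launchpad dependencies ship Linux-only wheels, so this normally runs
# on Linux or WSL rather than native Windows.
import gym
from acme import specs, wrappers


def make_environment(task: str = "MountainCarContinuous-v0"):
    env = gym.make(task)                        # task name is illustrative
    env = wrappers.GymWrapper(env)              # Gym API -> dm_env API
    env = wrappers.SinglePrecisionWrapper(env)  # cast to float32
    return env


env = make_environment()
env_spec = specs.make_environment_spec(env)  # specs the agent is built from
print(env_spec.actions)
```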
- Spec for RL agent implementation?
Acme has a slightly different one: https://github.com/deepmind/acme which includes specs for agents, buffers etc. It is very general. You can see their component description here: https://github.com/deepmind/acme/blob/master/docs/components.md
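To make that concrete, Acme's `EnvironmentSpec` is a named tuple of dm_env specs that agents and replay tables are constructed from. A sketch, with made-up shapes for illustration:

```python
# A sketch of Acme's EnvironmentSpec: a named tuple of dm_env specs.
# The concrete shapes and bounds below are invented for illustration.
import numpy as np
from acme.specs import EnvironmentSpec
from dm_env import specs

env_spec = EnvironmentSpec(
    observations=specs.Array(shape=(4,), dtype=np.float32, name="obs"),
    actions=specs.BoundedArray(shape=(1,), dtype=np.float32,
                               minimum=-1.0, maximum=1.0, name="action"),
    rewards=specs.Array(shape=(), dtype=np.float32, name="reward"),
    discounts=specs.BoundedArray(shape=(), dtype=np.float32,
                                 minimum=0.0, maximum=1.0, name="discount"),
)
# Equivalently, acme.specs.make_environment_spec(env) derives this
# from a live dm_env environment.
```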
dm_control
- Shimmy 1.0: Gymnasium & PettingZoo bindings for popular external RL environments
This includes single-agent Gymnasium wrappers for DM Control, DM Lab, Behavior Suite, Arcade Learning Environment, OpenAI Gym V21 & V26. Multi-agent PettingZoo wrappers support DM Control Soccer, OpenSpiel and Melting Pot. For more information, read the release notes here:
Have you ever wanted to use dm-control with stable-baselines3? Within reinforcement learning (RL), a number of APIs are used to implement environments, with limited ability to convert between them. This makes training agents across different APIs highly difficult, and has resulted in a fractured ecosystem.
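A sketch of the workflow the release describes: with `shimmy[dm-control]` installed, DM Control tasks are exposed as Gymnasium environments (ids of the form `dm_control/{domain}-{task}-v0`) that recent stable-baselines3 versions can consume. The specific task id and hyperparameters below are illustrative.

```python
# A hedged sketch: train SB3's PPO on a DM Control task via Shimmy.
# DM Control observations are dicts, hence the multi-input policy.
import shimmy  # noqa: F401  -- makes the dm_control/* ids available
import gymnasium as gym
from stable_baselines3 import PPO

env = gym.make("dm_control/cartpole-balance-v0")
model = PPO("MultiInputPolicy", env, verbose=1)
model.learn(total_timesteps=10_000)  # illustrative budget
```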
- Installing & Using MuJoCo 2.1.5 with OpenAI Gym
DeepMind Control Suite is a good alternative to OpenAI Gym for continuous control tasks. It contains many of the environments present in Gym and also a few extra ones. DeepMind Control Suite also uses MuJoCo. I found the installation to be straightforward. Check out https://github.com/deepmind/dm_control
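For a sense of what that looks like, here is a minimal sketch of the Control Suite workflow: load a named domain/task pair and step it through the dm_env API with random actions.

```python
# A minimal sketch of loading and stepping a Control Suite task.
import numpy as np
from dm_control import suite

env = suite.load(domain_name="cartpole", task_name="swingup")
action_spec = env.action_spec()

time_step = env.reset()
while not time_step.last():
    # Sample uniformly from the bounded action spec.
    action = np.random.uniform(action_spec.minimum, action_spec.maximum,
                               size=action_spec.shape)
    time_step = env.step(action)
```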
- Is there a way to get PPO controlled agents to move a little more gracefully?
Do you know if this is implemented in code anywhere? I've been digging around in DeepMind's dm_control for the past few hours and I haven't found it. I'm not sure what I'm looking for either.
- [D] MuJoCo vs PyBullet? (esp. for custom environment)
If you're interested in using MuJoCo, I'd suggest checking out the dm_control package for Python bindings rather than interfacing with C++ directly. I think one downside to MuJoCo currently is that you cannot dynamically add objects, and the entire simulation is initialized and loaded according to the MJCF / XML file.
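A sketch illustrating that point: in dm_control's Python bindings the whole model is declared up front in MJCF/XML and compiled once into a `Physics` object; bodies cannot be added afterwards. The model below is a trivial made-up example.

```python
# The full model is defined in MJCF/XML and compiled into a Physics
# instance; objects cannot be added to it after compilation.
from dm_control import mujoco

BOX_XML = """
<mujoco>
  <worldbody>
    <body name="box" pos="0 0 1">
      <joint type="free"/>
      <geom type="box" size="0.1 0.1 0.1"/>
    </body>
  </worldbody>
</mujoco>
"""

physics = mujoco.Physics.from_xml_string(BOX_XML)
physics.step()                          # advance one timestep
print(physics.named.data.xpos["box"])   # world position of the body
```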
- How to use MuJoCo from Python3
- Any beginner resources for RL in Robotics?
DeepMind's dm control: https://github.com/deepmind/dm_control
What are some alternatives?
dm_env - A Python interface for reinforcement learning environments
gym - A toolkit for developing and comparing reinforcement learning algorithms.
Mava - 🦁 A research-friendly codebase for fast experimentation of multi-agent reinforcement learning in JAX
baselines - OpenAI Baselines: high-quality implementations of reinforcement learning algorithms
MPO - Pytorch implementation of "Maximum a Posteriori Policy Optimization" with Retrace for Discrete gym environments
IsaacGymEnvs - Isaac Gym Reinforcement Learning Environments
tonic - Tonic RL library
pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).
mujoco-py - MuJoCo is a physics engine for detailed, efficient rigid body simulations with contacts. mujoco-py allows using MuJoCo from Python 3.
Robotics Library (RL) - The Robotics Library (RL) is a self-contained C++ library for rigid body kinematics and dynamics, motion planning, and control.