dreamerv2 VS dm_control

Compare dreamerv2 and dm_control and see how they differ.

                 dreamerv2         dm_control
Mentions         4                 7
Stars            853               3,540
Growth           -                 2.5%
Activity         0.0               7.5
Last commit      over 1 year ago   1 day ago
Language         Python            Python
License          MIT License       Apache License 2.0
Mentions - the total number of mentions we have tracked, plus the number of user-suggested alternatives.
Stars - the number of GitHub stars a project has. Growth - month-over-month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is among the top 10% of the most actively developed projects we track.

dreamerv2

Posts with mentions or reviews of dreamerv2. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2021-11-26.

dm_control

Posts with mentions or reviews of dm_control. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2023-04-25.

What are some alternatives?

When comparing dreamerv2 and dm_control, you can also consider the following projects:

dreamerv3 - Mastering Diverse Domains through World Models

gym - A toolkit for developing and comparing reinforcement learning algorithms.

dreamer - Dream to Control: Learning Behaviors by Latent Imagination

baselines - OpenAI Baselines: high-quality implementations of reinforcement learning algorithms

panda-gym - Set of robotic environments based on PyBullet physics engine and gymnasium.

IsaacGymEnvs - Isaac Gym Reinforcement Learning Environments

stable-baselines3-contrib - Contrib package for Stable-Baselines3 - Experimental reinforcement learning (RL) code

pytorch-a2c-ppo-acktr-gail - PyTorch implementation of Advantage Actor Critic (A2C), Proximal Policy Optimization (PPO), Scalable trust-region method for deep reinforcement learning using Kronecker-factored approximation (ACKTR) and Generative Adversarial Imitation Learning (GAIL).

planet - Learning Latent Dynamics for Planning from Pixels

mujoco-py - MuJoCo is a physics engine for detailed, efficient rigid body simulations with contacts. mujoco-py allows using MuJoCo from Python 3.

orion - Asynchronous Distributed Hyperparameter Optimization.

Robotics Library (RL) - The Robotics Library (RL) is a self-contained C++ library for rigid body kinematics and dynamics, motion planning, and control.