multiagent-particle-envs
maddpg
| | multiagent-particle-envs | maddpg |
|---|---|---|
| Mentions | 6 | 2 |
| Stars | 2,188 | 1,521 |
| Growth | 3.6% | 4.2% |
| Activity | 0.0 | 0.0 |
| Latest commit | 20 days ago | 27 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
multiagent-particle-envs
- Why is Q-learning always presented in such a math-heavy fashion? I just spent an hour dissecting this formula with a student -- only to strongly suspect there is a typo. Are there any good Q-Learning tutorials out there that *explain* the math instead of dropping it from the sky?
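For reference, the standard tabular Q-learning update (stated from the textbook definition, since the formula from the thread isn't quoted here) is:

$$Q(s_t, a_t) \leftarrow Q(s_t, a_t) + \alpha \left[ r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a') - Q(s_t, a_t) \right]$$

Read it as: move the current estimate a step of size $\alpha$ toward the one-step target $r_{t+1} + \gamma \max_{a'} Q(s_{t+1}, a')$; the bracketed quantity is the temporal-difference error, and a sign or index typo inside it is the usual culprit.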
- Ideal size of the visual observation
Hi, I am using the MPE (https://github.com/openai/multiagent-particle-envs) and I'm planning to use a visual observation. I was wondering what size it should be. I assume that if it is too large while the agents are only a few, I am wasting lots of compute for nothing, and the noise also increases. But how do I find the best size? 60x60x3, for example?
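For anyone landing here with the same question, a minimal sketch of downsampling a rendered frame to a candidate size, assuming the env can return an RGB array (the exact render call varies across MPE versions, so the commented lines are illustrative):

```python
import numpy as np
import cv2  # pip install opencv-python

def downsample(frame: np.ndarray, size=(60, 60)) -> np.ndarray:
    """Shrink an HxWx3 RGB frame to the given (width, height).
    INTER_AREA averages source pixels, which limits aliasing noise
    when shrinking."""
    return cv2.resize(frame, size, interpolation=cv2.INTER_AREA)

# frame = env.render(mode="rgb_array")  # illustrative; depends on MPE version
# obs = downsample(frame)               # obs.shape == (60, 60, 3)
```

A common way to pick the size empirically is to sweep a few resolutions (e.g. 32, 60, 84 pixels per side) and keep the smallest one at which the agents' learning curves stop degrading.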
- Why does MADDPG use action log prob for Q (Critic) instead of sampled action?
Code for https://arxiv.org/abs/1706.02275 can be found at https://github.com/openai/multiagent-particle-envs
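A hedged guess at the answer rather than a confirmed reading of the repo: with discrete actions, feeding the critic a relaxed (soft) sample of the policy distribution instead of a hard one-hot sample keeps the actor loss differentiable, so the policy gradient can flow through the critic. A minimal Gumbel-softmax sketch of that idea:

```python
import torch
import torch.nn.functional as F

def gumbel_softmax_sample(logits: torch.Tensor, tau: float = 1.0) -> torch.Tensor:
    """Differentiable 'soft' draw from a categorical distribution.
    Unlike argmax sampling, gradients flow back into the logits."""
    gumbel = -torch.log(-torch.log(torch.rand_like(logits) + 1e-20) + 1e-20)
    return F.softmax((logits + gumbel) / tau, dim=-1)

logits = torch.randn(4, 5, requires_grad=True)   # batch of 4, 5 discrete actions
soft_action = gumbel_softmax_sample(logits)      # what the critic would consume
hard_action = soft_action.argmax(dim=-1)         # what the environment executes
soft_action.sum().backward()                     # gradients reach the logits
```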
- I get an incredibly long error simply for trying to install an older version of NumPy
```shell
git clone https://github.com/openai/multiagent-particle-envs
cd multiagent-particle-envs/
python -m venv maddpg
echo maddpg/ >> .gitignore   # the venv directory is maddpg/, not env/
.\maddpg\Scripts\activate
pip install gym==0.10.5
pip install numpy==1.14.5
```
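A guess at the cause rather than a confirmed diagnosis: numpy==1.14.5 predates current Python releases and publishes no prebuilt wheels for them, so pip falls back to compiling it from source and fails with a very long build error. Creating the environment with an interpreter from that era, for example `conda create -n maddpg python=3.6` followed by `conda activate maddpg` before the `pip install` lines above, typically avoids the source build.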
- Not as impressive as the natural simulator I made with visual programming languages (a.k.a. the future), but this is a good try for someone learning programming
If you wanted to make it an RL env that others could train in, it might be a nicer-looking version of https://github.com/openai/multiagent-particle-envs
maddpg
- How is the backward pass performed in the MADDPG algorithm from MARL?
I'm using the MADDPG algorithm from https://github.com/openai/maddpg/blob/master/maddpg/trainer/maddpg.py. I understand the forward pass for both the actor and critic networks, but not how the two networks are updated. At lines 188 and 191 the authors compute the critic loss and the actor loss; can anyone explain how the critic and actor networks are then updated from those losses?

Also, as far as I can tell, when the number of agents increases from 3 to 6 in simple spread, the computation time for the Q loss and P loss at lines 188 and 191 grows super-linearly. I assume this is because both losses use the Q values, and the input dimension of the Q network grows linearly with the number of agents. It would be great if anyone could help me understand this backpropagation phase better, and why the computation time grows super-linearly.

I also put a timer on the Q-loss and P-loss computations over 60,000 episodes with the simple spread policy (3 agents, 3 landmarks, 0 adversaries). Thanks in advance for the help!

| | 3 agents | 6 agents |
|---|---|---|
| Q loss | 74.31 sec | 243.31 sec (~3x) |
| P loss | 114.86 sec | 321.76 sec (~3x) |
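Not the repo's TensorFlow code, but a minimal PyTorch sketch of what the two updates conceptually do; all shapes, dimensions, and names below are illustrative stand-ins, not the repo's API:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

n_agents, obs_dim, act_dim = 3, 18, 5  # illustrative sizes

# Centralized critic: sees every agent's observation and action.
def make_critic():
    return nn.Sequential(nn.Linear(n_agents * (obs_dim + act_dim), 64),
                         nn.ReLU(), nn.Linear(64, 1))

critic, critic_target = make_critic(), make_critic()
actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(),
                      nn.Linear(64, act_dim), nn.Tanh())  # agent i's policy
critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-2)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-2)

B, gamma, i = 32, 0.95, 0                     # batch size, discount, agent index
obs = torch.randn(B, n_agents, obs_dim)       # fake replay-buffer batch
act = torch.randn(B, n_agents, act_dim)
rew = torch.randn(B, 1)
next_obs = torch.randn(B, n_agents, obs_dim)
next_act = torch.randn(B, n_agents, act_dim)  # target policies' next actions

def critic_in(o, a):
    """Concatenate all agents' observations and actions into one vector."""
    return torch.cat([o.flatten(1), a.flatten(1)], dim=1)

# Q loss (critic update): TD regression toward a frozen target network.
with torch.no_grad():
    y = rew + gamma * critic_target(critic_in(next_obs, next_act))
q_loss = F.mse_loss(critic(critic_in(obs, act)), y)
critic_opt.zero_grad()
q_loss.backward()   # gradients reach only the critic's parameters
critic_opt.step()

# P loss (actor update): replace agent i's action with its current policy
# output and ascend Q; gradients flow *through* the critic into the actor.
act_pi = act.clone()
act_pi[:, i] = actor(obs[:, i])
p_loss = -critic(critic_in(obs, act_pi)).mean()
actor_opt.zero_grad()
p_loss.backward()
actor_opt.step()
```

On the timing question, one hedged observation: the centralized critic's input width grows linearly with the number of agents, and in the reference setup each agent has its own critic, so a full training pass performs n such updates on inputs of width O(n); that combination alone pushes total cost toward quadratic in the agent count (an inference from the architecture, not a profiled result).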
- How do I get my multi-agents to be more collaborative?
Another thing: I don't use only one centralized critic; I use one for each agent (they are all centralized). You could use parameter sharing for agents of the same type if you want. A good start would be to look at how MADDPG works in an implementation (original, tf2, pytorch-1, pytorch-2); then you can see how the actor and the critic are trained and adapt the ideas to your MA-PPO implementation.
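A minimal sketch of the parameter-sharing idea mentioned above, in PyTorch with invented agent types and dimensions: same-type agents hold a reference to one shared critic module, so every update to any of them trains the same weights.

```python
import torch.nn as nn

# Hypothetical roster: two predators share a critic, the prey keeps its own.
agent_types = ["predator", "predator", "prey"]
in_dim = 69  # centralized critic input (all obs + all actions); illustrative

critic_by_type = {
    t: nn.Sequential(nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, 1))
    for t in set(agent_types)
}
critics = [critic_by_type[t] for t in agent_types]  # per-agent view

assert critics[0] is critics[1]      # predators: one shared parameter set
assert critics[0] is not critics[2]  # prey: separate parameters
```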
What are some alternatives?
rpg_timelens - Repository relating to the CVPR21 paper TimeLens: Event-based Video Frame Interpolation
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
ALAE - [CVPR2020] Adversarial Latent Autoencoders
pymarl - Python Multi-Agent Reinforcement Learning framework
qlib - Qlib is an AI-oriented quantitative investment platform that aims to realize the potential, empower research, and create value using AI technologies in quantitative investment, from exploring ideas to implementing productions. Qlib supports diverse machine learning modeling paradigms, including supervised learning, market dynamics modeling, and RL.
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
transferlearning - Transfer learning / domain adaptation / domain generalization / multi-task learning etc. Papers, code, datasets, applications, tutorials. (迁移学习: transfer learning)