maddpg vs transferlearning

| | maddpg | transferlearning |
|---|---|---|
| Mentions | 2 | 1 |
| Stars | 1,524 | 12,867 |
| Growth | 1.8% | - |
| Activity | 0.0 | 7.8 |
| Last commit | about 1 month ago | 7 days ago |
| Language | Python | Python |
| License | MIT License | MIT License |
Stars - the number of stars that a project has on GitHub. Growth - month over month growth in stars.
Activity is a relative number indicating how actively a project is being developed. Recent commits have higher weight than older ones.
For example, an activity of 9.0 indicates that a project is amongst the top 10% of the most actively developed projects that we are tracking.
maddpg
**How is the backward pass performed in the MADDPG algorithm from MARL?**
I'm using the MADDPG implementation from https://github.com/openai/maddpg/blob/master/maddpg/trainer/maddpg.py. I understand the forward pass for both the actor and critic networks, but not how the two networks are updated. At lines 188 and 191 the authors compute the critic loss and the actor loss, but can anyone explain how the critic and actor networks are then updated from those losses?

Also, as far as I can tell, when the number of agents increases from 3 to 6 in the simple spread scenario, the computation time for the Q loss and P loss at lines 188 and 191 grows super-linearly. I'm assuming this is because both losses use the Q values, and the input dimension used to compute the Q values grows linearly with the number of agents. It would be great if someone could help me understand this backpropagation phase better, and why the computation time grows super-linearly.

I also put a time counter on the Q loss and P loss computations over 60,000 episodes of simple spread (3 agents, 3 landmarks, 0 adversaries):

**Q loss:** 3 agents 74.31 sec; 6 agents 243.31 sec (~3x)

**P loss:** 3 agents 114.86 sec; 6 agents 321.76 sec (~3x)

Thanks for the help in advance!
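To make the two updates concrete, here is a minimal PyTorch sketch of what conceptually happens at those two lines (the repo itself uses TensorFlow, and every tensor name and dimension below is an illustrative assumption, not the repo's code): the critic is regressed toward a TD target, and the actor is updated by backpropagating through the critic.

```python
# Hedged sketch of MADDPG's critic ("Q loss") and actor ("P loss") updates.
# The original repo is TensorFlow; all names and dimensions here are
# illustrative assumptions, not the repo's actual variables.
import torch
import torch.nn as nn

n_agents, obs_dim, act_dim = 3, 8, 2

# Centralized critic: sees observations and actions of ALL agents,
# so its input dimension grows linearly with n_agents.
critic = nn.Sequential(
    nn.Linear(n_agents * (obs_dim + act_dim), 64), nn.ReLU(), nn.Linear(64, 1))
target_critic = nn.Sequential(
    nn.Linear(n_agents * (obs_dim + act_dim), 64), nn.ReLU(), nn.Linear(64, 1))
target_critic.load_state_dict(critic.state_dict())

# Decentralized actor for one agent: sees only its own observation.
actor = nn.Sequential(
    nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, act_dim), nn.Tanh())

critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)

# Fake minibatch standing in for replay-buffer samples.
B, gamma = 32, 0.95
obs_all = torch.randn(B, n_agents, obs_dim)
act_all = torch.randn(B, n_agents, act_dim)
rew = torch.randn(B, 1)
next_obs_all = torch.randn(B, n_agents, obs_dim)
next_act_all = torch.randn(B, n_agents, act_dim)  # from target actors in the real algorithm

# ---- Critic update (the "Q loss") ----
# TD target from the target critic on the next joint observation/action.
with torch.no_grad():
    target_q = rew + gamma * target_critic(
        torch.cat([next_obs_all.flatten(1), next_act_all.flatten(1)], dim=1))
q = critic(torch.cat([obs_all.flatten(1), act_all.flatten(1)], dim=1))
q_loss = ((q - target_q) ** 2).mean()
critic_opt.zero_grad()
q_loss.backward()   # backprop through the critic only
critic_opt.step()

# ---- Actor update for agent 0 (the "P loss") ----
# Re-insert agent 0's *current* policy action into the joint action, then
# ascend the critic: gradients flow through the critic into the actor.
act_joint = act_all.clone()
act_joint[:, 0, :] = actor(obs_all[:, 0, :])
p_loss = -critic(torch.cat([obs_all.flatten(1), act_joint.flatten(1)], dim=1)).mean()
actor_opt.zero_grad()
p_loss.backward()   # critic weights also receive grads, but only actor_opt steps
actor_opt.step()
```

Note that the centralized critic's input dimension is `n_agents * (obs_dim + act_dim)`, and each of the N agents trains its own centralized critic, so the total per-update work plausibly grows roughly quadratically in N; that would be broadly consistent with the roughly 3x slowdown measured when going from 3 to 6 agents.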
**How to get my multi-agents more collaborative?**
Another thing is that I don't use only one centralized critic; I'm using one for each agent (they are all centralized). You could use parameter sharing for the agents of the same type if you want. A great start would be to look at how MADDPG works in an implementation (original, TF2, PyTorch-1, PyTorch-2); then you can see how the actor and the critic are trained, and adapt the ideas to your MA-PPO implementation.
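For concreteness, here is a minimal PyTorch sketch of that layout, one centralized critic per agent with parameter sharing between agents of the same type; the class name, dimensions, and the two agent types are illustrative assumptions, not code from any of the linked implementations.

```python
# Hedged sketch: per-agent centralized critics, with weights shared between
# agents of the same type. All names are illustrative assumptions.
import torch
import torch.nn as nn

class CentralizedCritic(nn.Module):
    """Critic conditioned on the joint observation and joint action."""
    def __init__(self, n_agents: int, obs_dim: int, act_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_agents * (obs_dim + act_dim), 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, joint_obs: torch.Tensor, joint_act: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([joint_obs, joint_act], dim=-1))

n_agents, obs_dim, act_dim = 4, 8, 2
agent_types = ["pred", "pred", "prey", "prey"]  # assumed two agent types

# Parameter sharing: one critic object per *type*, reused by every agent of
# that type; agents of different types keep separate weights.
critic_by_type = {t: CentralizedCritic(n_agents, obs_dim, act_dim)
                  for t in set(agent_types)}
critics = [critic_by_type[t] for t in agent_types]

joint_obs = torch.randn(1, n_agents * obs_dim)
joint_act = torch.randn(1, n_agents * act_dim)
for i, critic in enumerate(critics):
    print(f"agent {i} ({agent_types[i]}): Q = {critic(joint_obs, joint_act).item():.3f}")
```

Each critic is still centralized (it sees the joint observation and action), so this trades extra memory and compute for per-agent value estimates, while the type-level sharing keeps the parameter count from growing with every agent.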
transferlearning
**[D] Medium Article: Adaptive Learning for Time Series Forecasting**
The source is available at https://github.com/jindongwang/transferlearning. I'll also publish a post on how to code the model for time series.
What are some alternatives?
Ray - Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.
zshot - Zero- and few-shot named entity & relationship recognition
pymarl - Python Multi-Agent Reinforcement Learning framework
stackoverflow-better-stats - Better statistics about Stack Overflow's 2023 Developer Survey
multiagent-particle-envs - Code for a multi-agent particle environment used in the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"
PaddleHelix - Bio-computing platform featuring large-scale representation learning and multi-task deep learning (“螺旋桨”, a bio-computing toolkit)
gpt-2 - Code for the paper "Language Models are Unsupervised Multitask Learners"
awesome-artificial-intelligence-research - A curated list of Artificial Intelligence (AI) research that tracks cutting-edge trends in AI research, including recommender systems, computer vision, machine learning, etc.
TS-TCC - [IJCAI-21] "Time-Series Representation Learning via Temporal and Contextual Contrasting"
Transfer-Learning-Library - Transfer Learning Library for Domain Adaptation, Task Adaptation, and Domain Generalization
Efficient-VDVAE - Official Pytorch and JAX implementation of "Efficient-VDVAE: Less is more"
FSL-Mate - FSL-Mate: A collection of resources for few-shot learning (FSL).