maddpg

Code for the MADDPG algorithm from the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments" (by OpenAI)

Maddpg Alternatives

Similar projects and alternatives to maddpg based on common topics and language

  • Ray

    43 maddpg VS Ray

    Ray is a unified framework for scaling AI and Python applications. Ray consists of a core distributed runtime and a set of AI Libraries for accelerating ML workloads.

  • pymarl

    3 maddpg VS pymarl

    Python Multi-Agent Reinforcement Learning framework

  • multiagent-particle-envs

    Code for a multi-agent particle environment used in the paper "Multi-Agent Actor-Critic for Mixed Cooperative-Competitive Environments"

  • gpt-2

    64 maddpg VS gpt-2

    Code for the paper "Language Models are Unsupervised Multitask Learners"

  • transferlearning

Transfer learning / domain adaptation / domain generalization / multi-task learning, etc. Papers, code, datasets, applications, and tutorials.

  • ChatPaper

    2 maddpg VS ChatPaper

Use ChatGPT to summarize arXiv papers. Accelerates the whole research workflow: full-paper summarization, professional translation, polishing, review writing, and reviewer-response drafting, all with ChatGPT.

NOTE: The number of mentions on this list counts mentions in common posts plus user-suggested alternatives; a higher number therefore suggests a more relevant maddpg alternative or greater similarity.

maddpg reviews and mentions

Posts with mentions or reviews of maddpg. We have used some of these posts to build our list of alternatives and similar projects. The most recent was on 2022-10-05.
  • How is the backward pass performed in MADDPG algorithm from MARL
    1 project | dev.to | 5 Oct 2022
    I'm using the MADDPG implementation from https://github.com/openai/maddpg/blob/master/maddpg/trainer/maddpg.py. I understand the forward pass for both the actor and critic networks, but not how they are updated. At lines 188 and 191 the authors compute the critic loss and the actor loss; can anyone explain how the critic and actor networks are then updated? Also, as far as I can tell, when the number of agents increases from 3 to 6 on simple spread, the computation time for the Q loss and P loss at lines 188 and 191 grows super-linearly. I assume this is because both losses use the Q values, and the input dimension for computing the Q values grows linearly with the number of agents. It would be great if anyone could help me understand this backpropagation phase better and explain why the computation time grows super-linearly. I also timed the Q-loss and P-loss computations over 60,000 episodes of simple spread (3 agents, 3 landmarks, 0 adversaries). Thanks in advance for the help!
    Q loss: 3 agents 74.31 s, 6 agents 243.31 s (~3x)
    P loss: 3 agents 114.86 s, 6 agents 321.76 s (~3x)
    (A minimal sketch of the update step appears after this list.)
  • How to get my multi-agents more collaborative?
    3 projects | /r/reinforcementlearning | 15 Feb 2021
    Another thing: I don't use only one centralized critic; I use one per agent (all of them centralized). You could use parameter sharing for the critics of agents of the same type if you want. A great start would be to look at how MADDPG works in an implementation (original, tf2, pytorch-1, pytorch-2); then you can see how the actor and the critic are trained and adapt those ideas to your MA-PPO implementation.
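For readers wondering, like the first post above, how the Q loss and P loss actually turn into network updates: below is a minimal PyTorch-style sketch of the MADDPG update for a single agent. It is not the repo's TensorFlow code; the network containers (`actors`, `critics`, and their target copies), the optimizers, and the `batch` layout are hypothetical stand-ins. The structure mirrors the critic and actor losses computed around lines 188 and 191 of `maddpg/trainer/maddpg.py`, and, as the second post describes, it uses one centralized critic per agent.

```python
import torch
import torch.nn.functional as F

def maddpg_update(i, actors, critics, target_actors, target_critics,
                  actor_opts, critic_opts, batch, gamma=0.95):
    """One MADDPG update for agent i (hypothetical container layout).

    `batch` holds per-agent lists of tensors, each with a leading batch dim.
    """
    obs, acts, rews, next_obs, dones = batch

    # Critic update (the "Q loss"): the centralized critic conditions on
    # every agent's observation and action, so its input width grows
    # linearly with the number of agents.
    with torch.no_grad():
        next_acts = [ta(o) for ta, o in zip(target_actors, next_obs)]
        q_next = target_critics[i](torch.cat(next_obs + next_acts, dim=-1))
        q_target = rews[i] + gamma * (1.0 - dones[i]) * q_next
    q = critics[i](torch.cat(obs + acts, dim=-1))
    q_loss = F.mse_loss(q, q_target)
    critic_opts[i].zero_grad()
    q_loss.backward()            # gradients flow into critic i's weights only
    critic_opts[i].step()

    # Actor update (the "P loss"): re-evaluate agent i's own action with its
    # current policy so gradients can flow through the critic into the actor.
    acts_pi = list(acts)
    acts_pi[i] = actors[i](obs[i])
    p_loss = -critics[i](torch.cat(obs + acts_pi, dim=-1)).mean()
    actor_opts[i].zero_grad()
    p_loss.backward()            # backpropagates through critic i into actor i
    actor_opts[i].step()
```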
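This structure also suggests one reason the timings in the first post grow super-linearly: each centralized critic's input grows linearly with the number of agents N, and the update loop runs once per agent, so the total per-step cost scales at least as N times the per-critic cost. That is consistent with roughly 3x the time for 2x the agents.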

Stats

Basic maddpg repo stats
Mentions: 2
Stars: 1,524
Activity: 0.0
Last commit: about 1 month ago

openai/maddpg is an open source project licensed under the MIT License, which is an OSI-approved license.

The primary programming language of maddpg is Python.

