{"forum": "B1gZV1HYvS", "submission_url": "https://openreview.net/forum?id=B1gZV1HYvS", "submission_content": {"authorids": ["minghuanliu@sjtu.edu.cn", "mingak@sjtu.edu.cn", "wnzhang@sjtu.edu.cn", "zhuangyuzheng@huawei.com", "w.j@huawei.com", "liuwulong@huawei.com", "yyu@apex.sjtu.edu.cn"], "title": "Multi-Agent Interactions Modeling with Correlated Policies", "authors": ["Minghuan Liu", "Ming Zhou", "Weinan Zhang", "Yuzheng Zhuang", "Jun Wang", "Wulong Liu", "Yong Yu"], "pdf": "/pdf/beceb7094a1f342270ab228d4e4b63d666a385cb.pdf", "TL;DR": "Modeling complex multi-agent interactions under multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents\u2019 policies. ", "abstract": "In multi-agent systems, complex interacting behaviors arise due to the high correlations among agents. However, previous work on modeling multi-agent interactions from demonstrations is primarily constrained by assuming the independence among policies and their reward structures. \nIn this paper, we cast the multi-agent interactions modeling problem into a multi-agent imitation learning framework with explicit modeling of correlated policies by approximating opponents\u2019 policies, which can recover agents' policies that can regenerate similar interactions. Consequently, we develop a Decentralized Adversarial Imitation Learning algorithm with Correlated policies (CoDAIL), which allows for decentralized training and execution. Various experiments demonstrate that CoDAIL can better regenerate complex interactions close to the demonstrators and outperforms state-of-the-art multi-agent imitation learning methods. Our code is available at \\url{https://github.com/apexrl/CoDAIL}.", "code": "https://github.com/apexrl/CoDAIL", "keywords": ["Multi-agent reinforcement learning", "Imitation learning"], "paperhash": "liu|multiagent_interactions_modeling_with_correlated_policies", "_bibtex": "@inproceedings{\nLiu2020Multi-Agent,\ntitle={Multi-Agent Interactions Modeling with Correlated Policies},\nauthor={Minghuan Liu and Ming Zhou and Weinan Zhang and Yuzheng Zhuang and Jun Wang and Wulong Liu and Yong Yu},\nbooktitle={International Conference on Learning Representations},\nyear={2020},\nurl={https://openreview.net/forum?id=B1gZV1HYvS}\n}", "full_presentation_video": "", "original_pdf": "/attachment/7a4d6e773e8d60398a8586f82e72dc69fdeebc05.pdf", "appendix": "", "poster": "", "spotlight_video": "", "slides": ""}, "submission_cdate": 1569439528852, "submission_tcdate": 1569439528852, "submission_tmdate": 1583912034297, "submission_ddate": null, "review_id": ["Hyeeki6nKB", "B1gXtGEb5S", "rJezMXKS5r"], "review_url": ["https://openreview.net/forum?id=B1gZV1HYvS&noteId=Hyeeki6nKB", "https://openreview.net/forum?id=B1gZV1HYvS&noteId=B1gXtGEb5S", "https://openreview.net/forum?id=B1gZV1HYvS&noteId=rJezMXKS5r"], "review_cdate": [1571769047982, 1572057723200, 1572340489943], "review_tcdate": [1571769047982, 1572057723200, 1572340489943], "review_tmdate": [1574266091235, 1573787152171, 1572972441966], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper1643/AnonReviewer2"], ["ICLR.cc/2020/Conference/Paper1643/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper1643/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1gZV1HYvS", "B1gZV1HYvS", "B1gZV1HYvS"], "review_content": [{"experience_assessment": "I have published one or two papers in this area.", "rating": 
"8: Accept", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "title": "Official Blind Review #2", "review": "In this work, a multi-agent imitation learning algorithm with opponent modeling is proposed, where each agent considers other agents\u2019 expected actions in advance and uses them to generate their own actions. Assuming each agent can observe other agents\u2019 actions, which is a reasonable assumption in MARL problems, a decentralized algorithm called CoDAIL is proposed. For each iteration of CoDAIL, (1) each agent trains opponent models (other agents\u2019 policies) by minimizing either MSE loss (continuous actions) or CE loss (discrete actions), (2) samples actions from those opponent models, (3) updates individual rewards (discriminators) and critics and (4) updates policies with multi-agent extention of ACKTR (which is used in MA-GAIL and MA-AIRL as well).\n\nThe experiments in the submission show that there is a significant gain relative to baselines (MA-GAIL and MA-AIRL) in OpenAI Multiagent Particle Environments (MPE) in terms of (true) reward differences and KL divergence between agents\u2019 and experts\u2019 state distributions.\n\nI think the empirical contribution of this work is clear to be accepted, but I give Weak Accept due to the following comments:\n\n- I think there\u2019s a similarity between Theorem 6 in MA-GAIL paper and Proposition 1 in the submission. I hope the difference between Proposition 1 and Theorem 6 to be clarified. \n\n- Proposition 2 seems to me redundant because it\u2019s neither important for theoretical analysis in 3.3 nor for the experiments. I believe a few sentences are enough to describe why authors choose \\alpha=1 (or equivalent explanations).\n\n- The authors suppose fully observable Markov Games in the paper, but it makes me confused when I consider the experiments in the submission. For example in Cooperative Navigation, each agent\u2019s observation includes (1) position vector relative to agents and landmarks and (2) their own velocities (which cannot be observed by other agents directly). Since authors argue CoDAIL is a decentralized algorithm, I think agents are not allowed to use others\u2019 observation for opponent modeling, but it seems that agents fully utilize others\u2019 observations. I hope it to be clarified and if that\u2019s the case, I wonder if we can regard CoDAIL as a decentralized method. \n\nI\u2019m willing to increase my score if my questions are clearly answered. ", "review_assessment:_checking_correctness_of_derivations_and_theory": "I carefully checked the derivations and theory."}, {"experience_assessment": "I have published in this field for several years.", "rating": "6: Weak Accept", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "title": "Official Blind Review #3", "review": "The authors propose a decentralized adversarial imitation learning algorithm with correlated policies, which recovers each agent\u2019s policy through approximating opponents action using opponent modeling. Extensive experimental results showed that the proposed framework, CoDAIL, better fits scenarios with correlated multi-agent policies.\n\nGenerally, the paper follows the idea of GAIL and MAGAIL. 
Differing from the previous works, the paper introduces \\epsilon-Nash equilibrium as the solution to multi-agent imitation learning in Markov games. It shows that using the concept of \\epsilon-Nash equilibrium as constraints is consistent and equivalent to adding the difference of the causal entropy of the expert policy and the causal entropy of a possible policy in RL procedure. It makes sense. \n\nBelow, I have a few concerns to the current status of the paper.\n\n1.\tThe authors propose \\epsilon-Nash equilibrium to model the convergent state in multi-agent scenarios, however, in section 3.1 the objective function of MA-RL (Equation 5) is still the discounted causal entropy of policy, the same as that of MA-GAIL paper. It is unclear how the \\epsilon-NE is considered in modeling MA-RL problem.\n\n2.\tRather than assuming conditional independence of actions from different agents, the authors considered that the joint policy as a correlated policy conditioned on state and all opponents\u2019 actions. With the new assumption, the paper re-defines the occupancy measure and introduces an approach to approximate the unobservable opponents\u2019 policies, in order to access opponents\u2019 actions. However, in the section 3.2 when discussing the opponents modeling, the paper did not clearly explain how the joint opponent function \\sigma^{(i)} is designed. The description \\sigma^{(i)} is confusing.\n\n3.\tTypos: in equation 14 \u201ci\u201d or \u201c-i\u201d; appendix algorithm 1 line 3 \u201cpi\u201d or \u201c\\pi\u201d. \n", "review_assessment:_checking_correctness_of_derivations_and_theory": "I carefully checked the derivations and theory."}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #1", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper proposes to model interactions in a multi-agent system by considering correlated policies. In order to do so, the work modifies the GAIL framework to derive a learning objective. Similar to GAIL, the discriminator distinguishes between state, action, next state sequences but crucially the actions here are considered for all agents.\n\nThe paper is a natural extension of GAIL/MA-GAIL. I have two major points that need to be addressed.\n\n1. The exposition and significance of some of the theoretical results is unclear.\n- The non-correlated and correlated eqns in 2nd and 3rd line in eq. 8 are not equivalent in general, yet connected via an equality.\n In particular, Proposition 2 considers an importance weighting procedure to reweight state, action, next state triplets. It is unclear how this resolves the shortcomings of pi_E^{-1} being inaccessible. Prop 2 shifts from pi_E^{-1} to pi^{-1} and hence, the expectations in Prop 2 and Eq. 11 are not equivalent. \n- More importantly, how are the importance weights estimated in Eq. 12? The numerator requires pi_E^{-1}, which is not accessible. If the numerator and denominator are estimated separately, it becomes a chicken-and-egg problem since the denominator is itself intended to be an imitating the expert policy appearing in the numerator?\n\n2. 
Missing related work\nThere is a huge body of missing work in multi-agent interactions modeling and generative modeling. [1, 2] consider modeling of agent interactions via imitation learning and a principled evaluation framework of generalization in the Markov games setting. By sharing parameters, they are also able to model correlations across agent policies and have strong results on generalization to cooperation/competition with unseen agents with similar policies (which wouldn't have been possible if correlations were not modeled). Similarly, [3, 4] are other similar works which consider modeling of other agent interactions/diverse behaviors via imitation style approaches. Finally, the idea of correcting for the mismatch in state, action, next state triplets in Proposition 2 has been considered for model-based off-policy evaluation in [5]. They proposed a likelihood-free method to estimate importance weights, which seems might be necessary for this task as well (re: qs. on how are importance weights estimated?).\n\nRe:experiments. Results look good and convincing for most parts. I don't see much value of the qualitative evaluation in Figure 1. If the KL divergence is low, we can expect the marginals to be better estimated. Trying out various levels of generalization as proposed in [2] would significantly strengthen the paper.\n\nTypos\nsec 2.1 Transition dynamics should have range in R+\nProof of Prop 2. \\mu instead of u\n\nReferences:\n[1] Learning Policy Representations in Multiagent Systems. ICML 2018.\n[2] Evaluating Generalization in Multiagent Systems using Agent-Interaction Graphs. AAMAS 2018.\n[3] Machine Theory of Mind. ICML 2018.\n[4] Robust imitation of diverse behaviors. NeurIPS 2017.\n[5] Bias Correction of Learned Generative Models using Likelihood-free Importance Weighting. NeurIPS 2019."}], "comment_id": ["r1ggNn9Zir", "Hyx3Roc-sB", "HkgZL35bsS", "ryxN9TcbiH", "HylafpcWir", "Hkgn995bjr"], "comment_cdate": [1573133351617, 1573133268212, 1573133384960, 1573133707661, 1573133588666, 1573132948204], "comment_tcdate": [1573133351617, 1573133268212, 1573133384960, 1573133707661, 1573133588666, 1573132948204], "comment_tmdate": [1573396883393, 1573396651324, 1573314951330, 1573141120055, 1573140978673, 1573139738653], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper1643/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1643/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1643/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1643/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1643/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1643/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "** Response to \"Missing related work.\" (2/3)", "comment": "\nQ3: About \"missing work in multi-agent interactions\"\nResponse: \nThanks for the helpful suggestions. We have included some of those works of interactions modeling along with other opponent modeling papers to make it more clear in our latest version. In fact, we've read most of these works, yet we did not include them as they aim to address different problems. 
\n\nAs we have formulated the problem of modeling multi-agent interactions from demonstrations as an imitation learning problem, we pay more attention to multi-agent imitation learning works as our comparable methods and the most related ones.\n\nBelow we discuss each paper you mentioned in detail to clarify the differences between them and ours. Such discussions are also added to the related work of our latest version.\n\n1 - [1] is the long paper of [2], which is an appealing work for modeling the among-agents interaction relationships as policy representations. Their problem setting has several important different points against us.\n\n1.a - First, we focus on different tasks. They aim to learn the **representations function** of agent policies \"based on their interactions\", that is, to learn a \"policies feature abstraction\" with the latent relationships among agents rather than imitating their policies from demonstrations to regenerate similar interacted data with correlated policies. Their learned policy embedding function is able to characterize agent behaviors and can be used in kinds downstream tasks, which all take the policy embeddings as a core part, making it tough for us to try those generalization tasks since we only recover agents' policies.\n\n1.b - Second, we consider different \"comprehension\" about interactions among agents. We care about the distribution of the overall interacted data sampled from correlated policies and how we can regenerating similar interacted data instead of analyzing the latent relationships among agents. Specifically, [1,2] regard interactions as the episodes that contain only k (in the paper they use 2 agents), which constructs an agent-interaction graph. That is, they focus on the latent relationships among agents.\n\n1.c - Third, in [1,2], imitation learning is just a tool or technique to lean the policy embedding, which, by contrast, is the entire problem that we focus on.\n\n1.d - Last but not least, parameter sharing is different from \"correlated policy\". Parameter sharing treats each agent as an independent individual to generalize the single-agent learning method in a multi-agent setting, which does not, in essence, consider the property of Markov Games and complicated \"reasoning\" policy. On the contrary, \"correlated policy\" means that each agent can infer about the others which explicitly considers opponents' policy in their decisionmaking process. See more details in [7,8,9]. In our setting, we want to model interactions considering such correlated policy structures, which is our motivation.\n\n2 - The diverse behaviors of single-agent shown in [3] are different from the correlated interactions in a multi-agent setting. The main difference is that in single-agent setting one does not have to reason about the others, thus the generated trajectories are only related with the agent's own policy, which could be influenced by all agents in a multi-agent setting, and that's why the generated trajectories of all agents can be viewed as \"interactions\".\n\n3 - [4] is a good work to model such a reasoning-like policy of agents, but they focus on MARL settings that interact with environments and learn policies with reward signals instead of an imitation learning setting that learning from pure demonstrations without reward signals (our task). Imitation learning is also just a technique to make use of the past trajectories of other agents. 
However, in future work, we can extend our work with their \"theory of mind\" policy structures to model complicated interactions.\n\n4 - [5] cannot exactly solve the important weight problem. See details in our response to Q2.\n\nReferences:\n[7] Probabilistic recursive reasoning for multi-Agent reinforcement learning. Y Wen, Y Yang, R Luo, J Wang, W Pan. ICLR 2019.\n[8] A regularized opponent model with maximum entropy objective. Z Tian, Y Wen, Z Gong, F Punakkath, S Zou, J Wang. IJCAI 2019.\n[9] Opponent modeling in multi-agent systems. D Carmel, S Markovitch. IJCAI 1995."}, {"title": "** Response to Mathematics Details (1/3)", "comment": "We sincerely thank the reviewer for the constructive comments.\n\nQ1: About \"non-correlated and correlated eqs in 2nd and 3rd line in eq.8 are not equivalent yet connected via equality.\"\nResponse:\nWe are sorry for the confusion. By that we mean the joint policy can be decomposed into two different assumptions (either 2nd or 3rd line), we have revised the expression of Eq. (8) in our latest version.\n\nQ2: About \"importance weight\".\nResponse:\nThe main challenge to estimating the weight exactly is to estimate the (s, a) distributions of demonstrators' trajectories. Notice that the demonstrations are always insufficient for a low-variance estimation and it costs much to update such density estimations during training. In fact, we did have tried with KDE (kernel density estimation) to compute an \"exact\" importance weight but the results were not good. Thus we refer to [6] for a simple solution and in our paper, we have presented that \"we fix $\\alpha = 1$ in our implementation, and as the experimental results have shown, it has no significant influences on performance. Besides, a similar approach can be found in Kostrikov et al. (2018).\" in the paragraph below Eq. (12).\n\nReference:\n[6] Discriminator-Actor-Critic: Addressing Sample Inefficiency and Reward Bias in Adversarial Imitation Learning. I Kostrikov, KK Agrawal, D Dwibedi, S Levine. ICLR 2019."}, {"title": "** Response to Experiments (3/3)", "comment": "\n\nQ4. About not \"much value of the qualitative evaluation in Figure 1\"\nResponse:\nFigure 1 is a visualization of the distribution of trajectories of learned methods. As we can see in Figure 1(b) & 1(e), trajectories with similar KL-Divergence to the demonstrator trajectories do not necessarily have similar distribution patterns. This can be more clear in Figure 4(b) & 4(e). We show that our methods successfully generate *distribution-similar* trajectories against demonstrators more than just *KL-Divergence-better* methods.\n\nQ5. About \"various levels of generalization\"\nResponse:\nWe agree with your suggestion that it is better to consider more different-level evaluations. However, as shown in response 1.a of Q3, it is hard to straightly extend in [1,2]'s experimental settings for different tasks. And the major difficulty is that we learn the policy directly with no such a module as \"policy embeddings\" to achieve those downstream tasks."}, {"title": "Response", "comment": "We sincerely thank you for your comprehensive comments on our paper and we carefully answer each of your questions as below.\n\nQ1. About Proposition 1\nResponse: \nTheorem 6 of MA-GAIL paper and Proposition 1 of our paper appear to be similar though, they are technically different. 
MA-GAIL starts from a NE solution concept, while our deviation focuses on the objective of each agent, which makes the Proposition 1 in our paper more general and can be easily extended to different kinds of policy structures. Thus, only when every agent cares only the state without considering the others (independent, non-correlated policy structure), Proposition 1 of our paper collapses into Theorem 6 of MA-GAIL paper, where we feel free to add the objective of each agent in Proposition 1 as the total objective in Theorem 6.\nBesides, due to the \"strict\" NE setting, MA-GAIL must ignore the entropy item, which is not required in our $\\epsilon$-NE setting. Thus, MA-GAIL can be regarded as a special case of our CoDAIL. \n\nQ2. About Proposition 2\nResponse: \nProposition 2 is a kind of simple and redundant. However, we feel it will make the paper more clear, because this is not a normally used importance sampling ratio of *occupancy measures* and one may be confused about the importance weight that whether it should be the ratio of two policies instead of two occupancy measures. Here we list it as an extra proposition to emphasize the technique of occupancy measure importance sampling. \n\nQ3. About the State/Observation Setting\nResponse: \nYes. This does exist and it also makes us confused when we read MA-GAIL and MA-AIRL papers, which care less about the partially observable property of Particle World environments. However, we show our understanding to interpret this problem as below.\n\n(1) First, either they or we do not consider to describe a PO setting because all of us wish to simplify the methodology and concentrate on the imitation learning architecture without caring about the partially observable settings.\n\n(2) Second, in most single-agent RL tasks, the normal inputs of agent policies are observations instead of states, e.g. the raw pixels of Atari games. However, in deep reinforcement learning (DRL), we can always map those observations into a low-dimensional latent state representation achieved by low-level layers of deep neural networks to achieve the function of state inference from observations in POMDPs, thus we usually care less about the observation/state in normal DRL tasks.\n\n(3) Thus we think that not only the previously mentioned works but many other MARL works who take Particle World as a MARL benchmark all stand at this point since the observations of Particle World contain comprehensive information to infer the latent states.\n\n(4) Since the other two works mainly conduct experiments on these Particle environments, at least we need to show the performance against baseline methods on Particle environments."}, {"title": "Response", "comment": "We truly appreciate your helpful feedback. \n\nQ1. About \"$\\epsilon$-NE\"\nResponse:\nIt is worth noting that we have shown our theoretical analysis in Sec 3.3 that the objective of our method in our derivation essentially equivalent to reaching an $\\epsilon$-NE. However, this does not mean that we must begin with an $\\epsilon$-NE solution: as shown in Eq. (6), we start by letting each agent achieve its own inverse RL objective to learn the expert policy $\\pi$. \n\nTo clarify the difference with MA-GAIL, our method theoretically shows a different perspective with MA-GAIL, which starts from a NE solution and a corresponding dual problem with ignoring the entropy term and does not solve the proposed dual problem. 
On the contrary, we show the multi-agent generative imitation learning problem (or multi-agent inverse reinforcement learning problem) can be seen to reach an $\\epsilon$-NE solution concept, without limited in independent non-correlated policies and overlooking the entropy item. Thus, MA-GAIL can be regarded as a special case of our CoDAIL when ignoring the policy correlation among agents and the entropy item. \n\nQ2. About Opponents Modeling\nResponse: \nWe are sorry for the unclarity. In fact, the learning process of the joint opponent function $\\sigma^{(i)}$ follows a normal way of opponent modeling. \n\n(1) Specifically, we construct a function $\\sigma^{(i)}(a^{(-i)} | s): \\mathcal{S} \\times \\mathcal{A}^{(1)} \\times \\cdots \\times \\mathcal{A}^{(i-1)} \\times \\mathcal{A}^{(i+1)} \\times \\cdots \\times \\mathcal{A}^{(N)}\\rightarrow {[0, 1]}^{N-1}$, as the approximation of opponents for each agent $i$. \n\n(2) Appendix B shows that in implementation: \"Specifically for opponents models, we utilize a multi-head-structure network, where each head predicts each opponent's action separately, and we get the overall opponents joint action $a^{(-i)}$ by concatenating all actions.\".\n\n(3) As Reviewer #2 says and shown in Eq. (17), opponent models are trained by minimizing either MSE loss (continuous actions) or CE loss (discrete actions).\n\nWe have revised this part for clarity in the latest version of our paper. \n\nIn sum, we think that at the current stage we have thoroughly answered your proposed questions, and according to your helpful suggestions, we have revised related parts for clarity in the latest version of our paper. Thus we sincerely wish you can re-consider and improve your rating for this work."}, {"title": "Overall Response - Motivations & Contributions", "comment": "We thank all reviewers for the valuable comments on improving the quality of this work and we would like to clarify our motivation and contributions:\n \n1 - Motivation: In the real world, agents make decisions by constantly predicting and reasoning correlated intelligent agents' behaviors. We model such behaviors as correlated policy structure. Like in a driving scenario, a human driver makes decisions based on predicting and inducing the surrounding conditions that consisted of varies traffic participants; in a soccer game, a player would reason the next move of both his teammates and opponents before kick/moving decision.\nIn this paper, we aim to model the interactions among agents, by which we seek to perform high-fidelity simulation of the multi-agent environment with regenerating similar trajectories by imitating their correlated policies from demonstration data. However, traditional imitation methods such as GAIL, MA-GAIL and MA-AIRL lack the ability to model interactions from demonstrations sampled from these correlated policies.\n\n2 - Contributions: \n(1) We consider regenerating interacted trajectory data with recovered correlated policies, which is expected to follow a similar distribution with that from experts. \n(2) We firstly propose to consider the influence of opponents in multi-agent imitation learning, in result showing the ability to learn from experts with correlated policies. With opponents modeling, our proposed framework CoDAIL gains the properties of decentralized-training and decentralized-execution. 
\n(3) We show a different perspective that the multi-agent generative imitation learning problem (or multi-agent inverse reinforcement learning problem) can be seen to converge to an $\\epsilon$-NE solution concept. Under our theoretical architecture, we start from the max-entropy inverse reinforcement learning objective of each agent while MA-GAIL paper derives from a NE solution and a corresponding dual problem. In result, MA-GAIL can be regarded as a special case of our CoDAIL when ignoring the policy correlation among agents. \n\nAccording to your constructive comments, we have revised the equation symbols and discussions, fixed the typos and added more related works from different areas in our latest version paper, by which we think most confusions have been removed."}], "comment_replyto": ["rJezMXKS5r", "rJezMXKS5r", "rJezMXKS5r", "Hyeeki6nKB", "B1gXtGEb5S", "B1gZV1HYvS"], "comment_url": ["https://openreview.net/forum?id=B1gZV1HYvS&noteId=r1ggNn9Zir", "https://openreview.net/forum?id=B1gZV1HYvS&noteId=Hyx3Roc-sB", "https://openreview.net/forum?id=B1gZV1HYvS&noteId=HkgZL35bsS", "https://openreview.net/forum?id=B1gZV1HYvS&noteId=ryxN9TcbiH", "https://openreview.net/forum?id=B1gZV1HYvS&noteId=HylafpcWir", "https://openreview.net/forum?id=B1gZV1HYvS&noteId=Hkgn995bjr"], "meta_review_cdate": 1576798728643, "meta_review_tcdate": 1576798728643, "meta_review_tmdate": 1576800907898, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The paper proposes an extension to the popular Generative Adversarial Imitation Learning framework that considers multi-agent settings with \"correlated policies\", i.e., where agents' actions influence each other. The proposed approach learns opponent models to consider possible opponent actions during learning. Several questions were raised during the review phase, including clarifying questions about key components of the proposed approach and theoretical contributions, as well as concerns about related work. These were addressed by the authors and the reviewers are satisfied that the resulting paper provides a valuable contribution. I encourage the authors to continue to use the reviewers' feedback to improve the clarity of their manuscript in time for the camera ready submission.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1gZV1HYvS&noteId=hlMH9OLx7"], "decision": "Accept (Poster)"}