{"forum": "B1VWtsA5tQ", "submission_url": "https://openreview.net/forum?id=B1VWtsA5tQ", "submission_content": {"title": "PPO-CMA: Proximal Policy Optimization with Covariance Matrix Adaptation", "abstract": "Proximal Policy Optimization (PPO) is a highly popular model-free reinforcement learning (RL) approach. However, in continuous state and actions spaces and a Gaussian policy -- common in computer animation and robotics -- PPO is prone to getting stuck in local optima. In this paper, we observe a tendency of PPO to prematurely shrink the exploration variance, which naturally leads to slow progress. Motivated by this, we borrow ideas from CMA-ES, a black-box optimization method designed for intelligent adaptive Gaussian exploration, to derive PPO-CMA, a novel proximal policy optimization approach that expands the exploration variance on objective function slopes and only shrinks the variance when close to the optimum. This is implemented by using separate neural networks for policy mean and variance and training the mean and variance in separate passes. Our experiments demonstrate a clear improvement over vanilla PPO in many difficult OpenAI Gym MuJoCo tasks.", "keywords": ["Continuous Control", "Reinforcement Learning", "Policy Optimization", "Policy Gradient", "Evolution Strategies", "CMA-ES", "PPO"], "authorids": ["perttu.hamalainen@aalto.fi", "amin.babadi@aalto.fi", "xiaoxiao.ma@aalto.fi", "jaakko.lehtinen@aalto.fi"], "authors": ["Perttu H\u00e4m\u00e4l\u00e4inen", "Amin Babadi", "Xiaoxiao Ma", "Jaakko Lehtinen"], "TL;DR": "We propose a new continuous control reinforcement learning method with a variance adaptation strategy inspired by the Covariance Matrix Adaptation Evolution Strategy (CMA-ES) optimization method", "pdf": "/pdf/1983aa91c43a6950500d99d246d3e67d64f4a684.pdf", "paperhash": "h\u00e4m\u00e4l\u00e4inen|ppocma_proximal_policy_optimization_with_covariance_matrix_adaptation", "_bibtex": "@misc{\nh\u00e4m\u00e4l\u00e4inen2019ppocma,\ntitle={{PPO}-{CMA}: Proximal Policy Optimization with Covariance Matrix Adaptation},\nauthor={Perttu H\u00e4m\u00e4l\u00e4inen and Amin Babadi and Xiaoxiao Ma and Jaakko Lehtinen},\nyear={2019},\nurl={https://openreview.net/forum?id=B1VWtsA5tQ},\n}"}, "submission_cdate": 1538087801486, "submission_tcdate": 1538087801486, "submission_tmdate": 1545355396856, "submission_ddate": null, "review_id": ["rkeNAG762Q", "r1lt40uc3m", "BkxLPtYt3Q"], "review_url": ["https://openreview.net/forum?id=B1VWtsA5tQ¬eId=rkeNAG762Q", "https://openreview.net/forum?id=B1VWtsA5tQ¬eId=r1lt40uc3m", "https://openreview.net/forum?id=B1VWtsA5tQ¬eId=BkxLPtYt3Q"], "review_cdate": [1541382859631, 1541209649382, 1541146974266], "review_tcdate": [1541382859631, 1541209649382, 1541146974266], "review_tmdate": [1541534009358, 1541534009017, 1541534008818], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1VWtsA5tQ", "B1VWtsA5tQ", "B1VWtsA5tQ"], "review_content": [{"title": "Review", "review": "This paper proposes an improvement of the PPO algorithm inspired by some components of the CMA-ES black-box optimization method. The authors evaluate the proposed method on a few Mujoco domains and compare it with PPO method using simpler exploration strategies. 
The results show that PPO-CMA is less likely to get stuck in local optima, especially in the Humanoid and Swimmer environments. \n\nMajor comments:\n\nThe reason that CMA-ES discards the worst batch of the solutions is that it cannot utilize the quality of the solutions, i.e., it treats every solution equally. But PPO/TRPO can surely be aware of the value of the advantage, and thus can learn to move away from the bad area. The motivation for removing the bad samples is thus not sound, as the model cannot be aware of the areas containing the bad samples and may repeatedly explore the bad area. \n\nPlease be aware that CMA-ES can get stuck in local optima as well; there is no general convergence guarantee for CMA-ES.\n\n\n\nDetailed comments:\n\n- On page 4, it is claimed that \"actions with a negative $A^\pi$ may cause instability, especially when one considers training for several epochs at each iteration using the same data\", and this is demonstrated with Figure 2. This is not rigorous. If you simply set all negative advantage values to zero and compute the gradient, the method is similar to just using half the step size in policy gradient (see the sketch at the end of these comments). I speculate that if you halve the step size in the \"Policy Gradient\" setting, the results will be similar to the \"Policy Gradient (only positive advantages)\" setting. Furthermore, unlike the importance sampling technique, pruning all negative advantages will lose much **useful** information for improving the policy. So I think this is maybe not a perfect way to avoid instability, although it works in the experiments.\n\n\n- A variety of techniques have been proposed to improve exploration based on derivative-free optimization methods. But in my opinion, the way you combine PPO with CMA-ES to improve exploration is not so reasonable. Beyond the fact that the advantage function changes when the policy is updated (which is mentioned in \"D LIMITATIONS\"), I consider that you do not make good use of the exploration feature of CMA-ES. The main reason that CMA-ES can explore better comes from the randomness of parameter generation (line 2 in Algorithm 2), so it can generate more diverse policies than a derivative-based approach. However, in PPO-CMA, you just replace this with the sampling of policy actions, which does not significantly benefit exploration. It would be more suitable to say that you \"design a novel way to optimize a Gaussian policy with separate networks for its mean and variance, inspired by the CMA-ES method\" rather than that the paper \"provides a new link between RL and ES approaches to policy optimization\" (page 10).\n\n- In the experiments, some things are still not clear:\n\n1. In Figure 5, I notice that the PPO algorithm you implemented improves and then drops quickly in Humanoid-v2 and InvertedDoublePendulum-v2, which looks like it is due to too large a step size. Have you tried reducing it? Or are there other reasons for this phenomenon?\n\n2. What is the purpose of the larger budget? You choose a bigger iteration budget than the original PPO implementation.\n\n3. What the experiments have observed may not be due to the clipping of negative reward, but could be due to the scaling down of the reward. Please try reward normalization.
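To make the step-size point concrete, here is a minimal toy sketch (my own illustration, not the paper's code): a one-dimensional Gaussian policy with fixed variance and arbitrary toy advantages, comparing the standard advantage-weighted gradient estimator with the variant that zeroes negative advantages.

import numpy as np

rng = np.random.default_rng(0)
mu, sigma = 0.0, 1.0
actions = rng.normal(mu, sigma, size=10000)      # actions sampled from the current policy
advantages = 0.2 - np.abs(actions - 0.5)         # toy advantage estimates, mixed signs

grad_log_pi = (actions - mu) / sigma**2          # d/dmu log N(a | mu, sigma)

g_all = np.mean(advantages * grad_log_pi)                    # standard estimator
g_pos = np.mean(np.maximum(advantages, 0.0) * grad_log_pi)   # negative advantages zeroed
print(g_all, g_pos)  # whether g_pos is roughly a scaled-down g_all depends on the advantage landscape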
", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Very interesting paper on Covariance Matrix Adaptation used to solve exploration/exploitation trade-offs in PPO", "review": "This is in my view a strong contribution to the field of policy gradient methods for RL in the context of continuous control. The method the authors propose is dedicated to solving the premature convergence issue in PPO through the learning of a variance control policy. The authors employ CMA-ES, which is usually used for adaptive Gaussian exploration. The method is simple and yet provides good results on several benchmarks when compared to PPO.\n\nOne key insight developed in the paper consists in employing the advantage function as a means of filtering out samples that are associated with poorer rewards. Namely, negative advantage values imply that the corresponding samples are filtered out. Although with standard PPO such a filtering of samples leads to a premature shrinkage of the variance of the policy, CMA-ES increases the variance to enable exploration.\n\nA key technical point is concerned with the learning of the policy variance, which is cleverly done, BEFORE updating the policy mean, by exploiting a window of historical rewards over H iterations. This enables an elegant and computationally cheap means of changing the variance for a specific state.\n\nSeveral experiments confirm that this method may be effective on different tasks when compared to PPO. Before concluding, the authors carefully relate their work to prior research and delineate some limitations.\n\nStrengths:\n o) The paper is well written.\n o) The method introduced in the paper to learn how to explore is elegant, simple and seems robust.\n o) The paper combines an educational analysis of a trivial example with more realistic examples, which helps the reader understand both the phenomenon that helps the learning and its practical impact.\n\nWeaknesses:\n o) The experiments focus a lot on MuJoCo-1M. Although this task is compelling and difficult, more variety in the experiments could help single out other applications where PPO-CMA helps find better control policies.\n\n", "rating": "9: Top 15% of accepted papers, strong accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Algorithm description is unclear", "review": "I have to say that this paper is not well organized. It describes the advantage function and CMA-ES, but it does not describe PPO and PPO-CMA very well. I went through the paper twice, but I couldn't really get how the policy variance is adapted. Though the title of Section 4 is \"PPO-CMA\", only the first paragraph is devoted to describing it and the other parts are a brief introduction to CMA.\n\nThe problem of variance adaptation is not unique to PPO. E.g., (Sehnke et al., Neural Networks 2009) is motivated by this issue; they end up directly updating the policy parameters with an evolution-strategy-like algorithm. In this line, the algorithm of (Miyamae et al., NIPS 2010) is similar to CMA-ES.
The authors might want to compare PPO-CMA with these algorithms as baselines.", "rating": "4: Ok but not good enough - rejection", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}], "comment_id": ["rye4JF-7JE", "HyehoGBf1N", "Hye2WFpZ0X", "BJllBd6WCm", "HJx-PPa-C7"], "comment_cdate": [1543866588300, 1543815844449, 1542736131901, 1542735928180, 1542735705054], "comment_tcdate": [1543866588300, 1543815844449, 1542736131901, 1542735928180, 1542735705054], "comment_tmdate": [1543866588300, 1543815844449, 1542793598450, 1542793580107, 1542735705054], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper422/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper422/AnonReviewer2", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper422/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper422/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper422/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Exploration in PPO and PPO-CMA", "comment": "A note on anonymity: please note that the commenter who started this thread is not an author and is not connected to this work. \n\n\u201cThe advantage of PPO-CMA may vanish when controlling the exploration of PPO better.\u201d\n\nIn PPO, the tools for controlling exploration are limited; PPO-CMA contributes a novel exploration approach. We stay true to the PPO core idea of keeping the updated policy in the proximity of the old policy, while allowing CMA-ES-style exploration where the exploration variance grows in the progress direction, as illustrated in Figure 10. This can enable larger updates in subsequent iterations, as illustrated in Figure 1. \n\nIn the original PPO, exploration is primarily adjusted by the epsilon parameter and the entropy loss weight. Larger epsilon values allow larger policy updates, but in practice epsilon needs to be small to avoid instability similar to that of Policy Gradient in Figures 2 and 10. A larger entropy loss weight makes the algorithm prefer a larger exploration variance, but too large values can easily cause worse results, as shown in Figures 8 and 9. This makes fine-tuning the parameter tedious. As we discuss in the paper, it is also possible to design a predetermined variance annealing scheme for a specific task, as was done in the humanoid case of the original PPO paper. However, this is likely to require time-consuming trial-and-error iteration. Ideally, one would like to be able to use the same approach for all tasks.\n \nWe make no claims of PPO-CMA being the ultimate or only solution to improving exploration; in future work, it should be possible to combine PPO-CMA with other approaches such as intrinsic motivation. \n "}, {"title": "Irrelevant Notes", "comment": "The comment about the original CMA-ES was to point out that \"the motivation for removing the bad samples is thus not sound\" in this paper. This comment was then confirmed by the authors and helped improve the performance. \n\nAs for CMA-ES, I didn't say \"any variant of CMA-ES\". As a heuristic search algorithm, it is certainly easy to incorporate more heuristics into CMA-ES. The question, then, is: are the heuristics provably better? Unfortunately, there is no answer.\n\nTherefore, the motivation is still not solid to me. I don't see a real difference between the exploration principles of CMA-ES and PPO. The advantage of PPO-CMA may vanish when controlling the exploration of PPO better."}
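For reference, the two PPO exploration knobs mentioned in the author response above -- the clip parameter epsilon and the entropy loss weight -- enter the standard clipped surrogate objective. A minimal textbook-form sketch (a generic formulation assumed here, not tied to either implementation discussed in this thread):

import torch

def ppo_objective(logp_new, logp_old, advantages, entropy, epsilon=0.2, beta=0.01):
    # ratio of new to old action probabilities, pi_new(a|s) / pi_old(a|s)
    ratio = torch.exp(logp_new - logp_old)
    # epsilon bounds how far the update can move the policy (proximity)
    clipped = torch.clamp(ratio, 1.0 - epsilon, 1.0 + epsilon)
    surrogate = torch.min(ratio * advantages, clipped * advantages)
    # beta weights the entropy bonus; larger beta favors a larger exploration variance
    return (surrogate + beta * entropy).mean()

Larger epsilon permits bigger per-iteration policy changes, and larger beta pushes the policy toward higher entropy, matching the trade-offs described above.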
, {"title": "Revisions made in response to the review", "comment": "Thanks for the review! We have now submitted a revised version of the paper. Below, we respond to the specific issues raised in the review.\n\n\nRelated work (Sehnke et al., Miyamae et al.):\n\n- Sehnke et al. and Miyamae et al. optimize by directly sampling in the space of neural network weights (neuroevolution). In contrast, PPO-CMA, like almost all RL methods, samples in the space of agent actions, which is of much smaller dimension than the space of network weights (just dozens even for humanoids), and is therefore much more amenable to optimization by sampling. PPO-CMA improves the sampling of actions in this class of algorithms. \n\n\nClarity of algorithm description:\n\n- We have revised the bullet point summary of PPO-CMA in the beginning of Section 4. It should now more directly highlight how CMA-ES features are implemented or approximated in PPO-CMA.\n\n- We have added Appendix D, which provides further visualization of the differences between algorithms.\n\n\n\n"}, {"title": "Revisions made in response to the review", "comment": "Thanks for the review! We have now submitted a revised version of the paper. Below, we respond to the specific issues raised in the review.\n\n\nDiscarding negative advantages:\n\n- We discovered a trivial modification that allows utilization of negative-advantage actions through a local linearity assumption, explained in Section 4.3. The results have been added to Figure 5. The performance is considerably better in the humanoid case, and similar in the other tests.\n\n\nCMA-ES has no convergence guarantee:\n\n- We now clarify this in the 2nd paragraph of Section 3.3.\n\n\nIsn't using only positive advantages the same as halving the step size in Policy Gradient?\n\n- No. The new Appendix D and Figure 10 clarify the difference between positive and negative advantages. The figure shows that even though each minibatch gradient step only causes a tiny update, Policy Gradient with negative advantages diverges after sufficiently many steps.\n\n\n\n\"In PPO-CMA, you just replace this with the sampling of policy actions, which does not significantly benefit exploration\": \n\n- Both CMA-ES and PPO-CMA explore similarly, i.e., parameter vectors or actions are sampled from a Gaussian distribution. The difference is that in PPO-CMA, the Gaussian is conditioned on the agent state. This is implemented using the policy network that outputs the state-dependent mean and variance, which are used to sample the actions. We contribute by showing how CMA-ES can be approximated with this type of neural network parameterization of the action-space exploration Gaussian. \n\n- Figure 1 visualizes how this can lead to better exploration than PPO. \n\n- The added visualization in Figure 10 (Appendix D) clarifies how the policy Gaussian adapts through the minibatch updates and how PPO-CMA differs from PPO and Policy Gradient.\n\n- A limitation of our work is that we only use a diagonal covariance, which corresponds to the sep-CMA-ES variant instead of full CMA-ES. We now clarify this in the limitations section.
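For concreteness, a minimal illustrative sketch of this parameterization (not our actual implementation; the layer sizes, activations, and dimensions below are arbitrary placeholders): a state-conditioned diagonal Gaussian whose mean and variance are produced by separate networks, with actions sampled from it.

import torch
import torch.nn as nn

class GaussianPolicy(nn.Module):
    # Separate networks for the state-dependent mean and the diagonal (log) variance.
    def __init__(self, state_dim, action_dim, hidden=64):
        super().__init__()
        self.mean_net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                      nn.Linear(hidden, action_dim))
        self.log_var_net = nn.Sequential(nn.Linear(state_dim, hidden), nn.Tanh(),
                                         nn.Linear(hidden, action_dim))

    def forward(self, state):
        mean = self.mean_net(state)
        std = torch.exp(0.5 * self.log_var_net(state))  # diagonal covariance only (cf. sep-CMA-ES)
        return torch.distributions.Normal(mean, std)

policy = GaussianPolicy(state_dim=8, action_dim=2)      # dimensions are placeholders
action = policy(torch.zeros(1, 8)).sample()             # exploration by sampling actions
# The variance and mean networks are trained in separate passes (variance first);
# the corresponding losses are omitted from this sketch.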
\n\n\n\nA new link between RL and ES approaches?\n\n- We now clarify this in the conclusion (first bullet point).\n\n\n\nPPO instability in Humanoid-v2 and InvertedDoublePendulum?\n\n- Although our implementation uses the same epsilon parameter as the original PPO, the original PPO implementation has multiple other regularizers that also play a role. These are discussed in Section B.1, which we have revised for clarity. \n\n- One explanation for the instability is that there is no guarantee that overshoots like those visualized for vanilla policy gradient in Appendix D are not possible in PPO when close to convergence, in particular for tasks that are very sensitive to small policy changes. Both the Humanoid and InvertedDoublePendulum are such tasks. \n\n- The instability is one of the reasons why we also include the OpenAI baseline PPO results using their implementation and default hyperparameters. This should allow a fair comparison. \n\n\n\nLarge iteration simulation budget (N)?\n\n- With our implementations, a large N seems to work better for both PPO and PPO-CMA. With a small N, the updates become very noisy. Although this could be counteracted by additional regularization, as discussed in Section B.1, we preferred to keep our implementations as simple as possible.\n\n\n\n\nReward normalization?\n\n- We use advantage normalization based on the standard deviation.\n\n- We initially also tried reward normalization, but this produced slightly worse results. \n\n"}, {"title": "Revised paper", "comment": "We thank all the reviewers for their comments! We've now uploaded a revised pdf. A summary of revisions is included below. \n\nNOTE: We will also respond to the reviews separately to answer questions and clarify how the paper revisions address the critique.\n\n\nREVISION SUMMARY:\n\n- Revised the bullet point algorithm summary in the beginning of Section 4. It should now better explain how CMA-ES features are implemented or approximated by PPO-CMA.\n\n- Clarified the conclusions, in particular how PPO-CMA provides a new connection between RL and ES approaches to policy optimization.\n\n- Added Appendix D to better visualize the differences between the algorithms and the different effects of positive and negative advantages.\n\n- Added Subsection 4.3 on PPO-CMA-m (a trivial modification that allows utilizing negative-advantage actions), in response to the comments of R2. PPO-CMA-m results are now included in Figure 5. The results are considerably better than PPO-CMA in Humanoid-v2, and similar in the other tests.\n\n\n\n\n"}], "comment_replyto": ["HyehoGBf1N", "HkxIgoUlpm", "BkxLPtYt3Q", "rkeNAG762Q", "B1VWtsA5tQ"], "comment_url": ["https://openreview.net/forum?id=B1VWtsA5tQ&noteId=rye4JF-7JE", "https://openreview.net/forum?id=B1VWtsA5tQ&noteId=HyehoGBf1N", "https://openreview.net/forum?id=B1VWtsA5tQ&noteId=Hye2WFpZ0X", "https://openreview.net/forum?id=B1VWtsA5tQ&noteId=BJllBd6WCm", "https://openreview.net/forum?id=B1VWtsA5tQ&noteId=HJx-PPa-C7"], "meta_review_cdate": 1544742783193, "meta_review_tcdate": 1544742783193, "meta_review_tmdate": 1545354514975, "meta_review_ddate ": null, "meta_review_title": "Improvement needed", "meta_review_metareview": "This paper proposes to improve exploration in the PPO algorithm by applying CMA-ES. The major concerns are: the paper's editing can be improved; the choice of baselines used in the paper may not be reasonable; and there are flaws in the comparisons with SOTA.
It is also not quite clear why CMA can improve exploration; further justification is required. Overall, this paper cannot be published in its current form.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper422/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1VWtsA5tQ&noteId=S1xDKvwxgE"], "decision": "Reject"}