{"forum": "B1grSREtDH", "submission_url": "https://openreview.net/forum?id=B1grSREtDH", "submission_content": {"title": "Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts", "authors": ["Gilwoo Lee", "Brian Hou", "Sanjiban Choudhury", "Siddhartha S. Srinivasa"], "authorids": ["gilwoo@cs.uw.edu", "bhou@cs.uw.edu", "sanjibac@cs.uw.edu", "siddh@cs.uw.edu"], "keywords": ["Bayesian Residual Reinforcement Learning", "Residual Reinforcement Learning", "Bayes Policy Optimization"], "TL;DR": "We propose a scalable Bayesian Reinforcement Learning algorithm that learns a Bayesian correction over an ensemble of clairvoyant experts to solve problems with complex latent rewards and dynamics.", "abstract": "Informed and robust decision making in the face of uncertainty is critical for robots that perform physical tasks alongside people. We formulate this as a Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs). While Bayes-optimality is theoretically the gold standard, existing algorithms do not scale well to continuous state and action spaces. We propose a scalable solution that builds on the following insight: in the absence of uncertainty, each latent MDP is easier to solve. We split the challenge into two simpler components. First, we obtain an ensemble of clairvoyant experts and fuse their advice to compute a baseline policy. Second, we train a Bayesian residual policy to improve upon the ensemble's recommendation and learn to reduce uncertainty. Our algorithm, Bayesian Residual Policy Optimization (BRPO), imports the scalability of policy gradient methods as well as the initialization from prior models. BRPO significantly improves the ensemble of experts and drastically outperforms existing adaptive RL methods.", "pdf": "/pdf/a35222bc94e14dc8182f65c205e9510d93db2814.pdf", "paperhash": "lee|bayesian_residual_policy_optimization_scalable_bayesian_reinforcement_learning_with_clairvoyant_experts", "original_pdf": "/attachment/294dd2384baf58f10569c8f0dc85011730c7aef2.pdf", "_bibtex": "@misc{\nlee2020bayesian,\ntitle={Bayesian Residual Policy Optimization: Scalable Bayesian Reinforcement Learning with Clairvoyant Experts},\nauthor={Gilwoo Lee and Brian Hou and Sanjiban Choudhury and Siddhartha S. 
Srinivasa},\nyear={2020},\nurl={https://openreview.net/forum?id=B1grSREtDH}\n}"}, "submission_cdate": 1569439292717, "submission_tcdate": 1569439292717, "submission_tmdate": 1577168277273, "submission_ddate": null, "review_id": ["BylbDBJaYH", "BygdF6XLqH", "HJxPLs_qcr"], "review_url": ["https://openreview.net/forum?id=B1grSREtDH&noteId=BylbDBJaYH", "https://openreview.net/forum?id=B1grSREtDH&noteId=BygdF6XLqH", "https://openreview.net/forum?id=B1grSREtDH&noteId=HJxPLs_qcr"], "review_cdate": [1571775833303, 1572384128416, 1572666191316], "review_tcdate": [1571775833303, 1572384128416, 1572666191316], "review_tmdate": [1574543618936, 1572972511655, 1572972511611], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper1109/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper1109/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper1109/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1grSREtDH", "B1grSREtDH", "B1grSREtDH"], "review_content": [{"experience_assessment": "I have published one or two papers in this area.", "rating": "3: Weak Reject", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "title": "Official Blind Review #1", "review": "This paper considers the Bayesian Reinforcement Learning problem over latent Markov Decision Processes (MDPs). The authors consider making decisions with experts, where each expert performs well under some latent MDPs. An ensemble of experts is constructed, and then a Bayesian residual policy is learned to balance the exploration-exploitation tradeoff. Experiments on Maze and Door show the advantages of residual policy learning over some baselines.\n\n1. The Bayesian Reinforcement Learning problem this work considers is important. However, using experts immediately makes the problem much easier. The original Bayesian Reinforcement Learning problem is then reduced to making decisions with experts. Under this setting, there is much existing work on the exploration-exploitation tradeoff (OFU, Thompson Sampling) with theoretical guarantees. I did not see why using this residual policy learning (although, as mentioned, residual/boosting is useful in other settings) is reasonable here. There is no theoretical support showing that residual learning enjoys guaranteed performance. The motivation for introducing this heuristic is not clear.\n\n2. The comparisons with UPMLE and BPO do not seem convincing. Both BPO and UPMLE do not use experts, and the ensemble of experts outperforms them as shown in the experiments. And the ensemble baseline here is kind of weak (why sense with probability 0.5 at each timestep?). Always sensing with probability 0.5 does not make sense (exploration should decrease as uncertainty is reduced). Other exploration methods should be compared to empirically show the advantages/necessity of residual policy learning.\n\nOverall, I consider the proposed BRPO a simple extension of BPO, with a heuristic of learning an ensemble policy to make decisions. BRPO lacks theoretical support, and it is not clear why residual policy learning is necessary here and what exactly the advantage is over other exploration methods. Comparison with a simple baseline like exploration with constant probability is not enough to justify the proposed method.\n\n=====Update=====\nThanks for the rebuttal. The comparison with PSRL improves the paper. 
However, I still think this paper needs more improvement as follows.\nTheorem 1 looks hasty to me. The batch policy optimization algorithm is going to solve n_{sample} MDPs, which are generated from P_0. But neither Eq. (6) nor Theorem 1 contains information about P_0, implying that P_0 has no impact, which is questionable (a uniform P_0 that can generate different MDPs and a deterministic P_0 that can generate only one MDP should be very different). I suggest the authors do a more detailed analysis.\nOn the other hand, I wondered whether this special \"residual action\" heuristic has any guarantees in RL. Can decomposing the action into a_r + a_e provide us with a better exploration method (than others like PSRL, OFU, ...)? Since this is the main idea of this paper as an extension of BPO, I think this point is important. The experiments show that it can work in some cases, but I do not see an explanation (the \"residual learning\" paragraph is high-level and I do not get an insight from it).", "review_assessment:_checking_correctness_of_derivations_and_theory": "I carefully checked the derivations and theory."}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #3", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A", "review": "In this paper, the authors motivate and propose a learning algorithm, called Bayesian Residual Policy Optimization (BRPO), for Bayesian reinforcement learning problems. Experiment results are demonstrated in Section 5.\n\nThe paper is well written in general, and the proposed algorithm is also interesting. However, I think the paper suffers from the following limitations:\n\n1) This paper does not have any theoretical analysis or justification. It would be much better if the authors could rigorously prove the advantages of BRPO under some simplifying assumptions.\n\n2) It would be better if the authors could provide more experiment results, such as results on more games."}, {"rating": "6: Weak Accept", "experience_assessment": "I have read many papers in this area.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #2", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review": "The paper presents a Bayesian residual policy which improves an ensemble of expert policies by learning to reduce uncertainty. The algorithm is designed to reduce uncertainty due to occluded objects and uncertainty about tasks. It is verified on two problems, cheese finding and door finding, and compared with several different baselines. \n\nThe idea of the paper is good, and Algorithm 1 sets out to learn the exploration policy when the expert policies do not agree. The exposition and writing are clear. The experiments are detailed and convey that the proposed method outperforms the baselines. \n\nThat said, the formulation of the task is a bit unusual and too specific, making me wonder if the method works for other tasks. 
Some questions to clarify the task formulation:\n1. Do the agent start locations and cheese locations change during training and evaluation? The figures suggest they remain the same, in which case the generality is limited.\n\n2. When an agent senses for cheese, does it receive the orientation or only the distance? If it receives the distances, will that not be a signal that matches the goals with some noise? In other words, why could the agent not simply sense several times at the beginning to determine which expert policy should be active, and then follow that policy?\n\n3. Why and how was the reward for the cheese finding task determined? It seems very specific.\n\n4. It would be helpful to provide some intuition about \\psi.\n\nOverall an interesting paper, but I am not sure how well it would perform on a wider set of tasks."}], "comment_id": ["BJl0Yoo2jr", "S1xDuso3or", "B1l0Vos3iS", "r1xvIqjnjH"], "comment_cdate": [1573858181942, 1573858159334, 1573858102414, 1573857870935], "comment_tcdate": [1573858181942, 1573858159334, 1573858102414, 1573857870935], "comment_tmdate": [1573858181942, 1573858159334, 1573858102414, 1573857870935], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper1109/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1109/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1109/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1109/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response", "comment": "We\u2019d like to thank all reviewers for their thoughtful comments and feedback. We have updated our paper to address the questions and comments. Specific comments have been added to respond directly to each reviewer.\n"}, {"title": "Response", "comment": "Thank you for your thoughtful comments and feedback. We have updated our paper to address your questions and comments. Here\u2019s our summary.\n\n1. No theoretical support\nWe have updated our paper to highlight the theoretical contribution and to compare with other approaches that yield different theoretical guarantees. Please see comments (1) and (3) under Reviewer 1 and the corresponding updated sections (Section 4.2, the Appendix).\n\n2. More experiments\nWe have added new experiments to compare against PSRL, as well as a more effective baseline ensemble policy as per Reviewer 1\u2019s comment. We have run another experiment to handle continuous latent parameters, as pointed out by Reviewer 2. Please check the added Appendix, as well as our response (2) to Reviewer 1 and response (1) to Reviewer 2.\n"}, {"title": "Response", "comment": "Thank you for your thoughtful comments and feedback. We have updated our paper to address your questions and comments. Here\u2019s our summary.\n\n1. [Handling continuous latent parameters]\nThe maze tasks in the submission consider only a finite number of latent goals. This is analogous to settings in which we have strong structural priors about where to search for the target, so the belief is represented as a categorical distribution over those candidates. However, our algorithm is not limited to such settings. We have run another experiment in which the goal is continuous and can be anywhere in the maze. The goal is tracked with an EKF, with mean and covariance. 
The expert recommendation is a motion planner path to the EKF\u2019s mean goal. The BRPO agent achieves an average performance of 467.2, significantly outperforming BPO (152.7) and UPMLE (124.3). However, in this case, the BRPO agent does not improve significantly over the ensemble policy, which we believe is because it tracks only a unimodal belief. We plan to include experiments with multimodal belief representations, such as Gaussian Mixture Models, in our final submission.\n\n2. The agent receives only a noisy L2 distance to the goal. We intentionally constrained the observation because otherwise the goal becomes too obvious. This is a common setup in similar tasks (e.g. LightDark) explored in previous work [1].\n\n3. The problem setup for the Maze tasks is motivated by classical POMDP problems in discrete state-action spaces, such as Tiger [2] and RockSample [3]. Much like the Maze tasks, these tasks have a high reward for the correct goal, a high penalty for incorrect goals, and low sensing costs. RockSample requires long-horizon navigation to get to the goals, which we also adopted.\n\n4. \u03c6 is a latent parameter that drives the transition or reward functions. For a robot system, it could be uncertain parameters such as friction coefficients, joint damping parameters, or the mass of unknown objects being manipulated. For the Maze problems, it corresponds to the true location of the goal (cheese).\n\n[1] Robert Platt, Russ Tedrake, Leslie Pack Kaelbling, and Tomas Lozano-Perez. Belief space planning assuming maximum likelihood observations. In Robotics: Science and Systems, 2010.\n[2] Leslie Pack Kaelbling, Michael Littman, and Anthony Cassandra. Planning and acting in partially observable stochastic domains. Artificial Intelligence, 101(1-2):99\u2013134, 1998.\n[3] Trey Smith and Reid Simmons. Heuristic search value iteration for POMDPs. In Proceedings of the 20th Conference on Uncertainty in Artificial Intelligence, pages 520\u2013527. AUAI Press, 2004.\n"}, {"title": "Response", "comment": "Thank you for your thoughtful comments and feedback. We have updated our paper to address your questions and comments. Here\u2019s our summary.\n\n1. What is the advantage of BRPO over Thompson Sampling (PSRL)?\n\nWe have added Section 3.1 and Appendices 6.1 and 6.3 to discuss the distinction. Those sections are repeated below for convenience.\n\nPosterior Sampling Reinforcement Learning (PSRL) [1] is an online RL algorithm that maintains a posterior over latent MDP parameters \u03c6. However, the problem setting it considers and how it uses this posterior are quite different from what we consider in this paper.\n\nIn this work, we are focused on zero-shot scenarios where the agent can only interact with the test MDP for a single episode; latent parameters are resampled for each episode. The PSRL regret analysis assumes MDPs with finite horizons and repeated episodes with the same test MDP, i.e. the latent parameters are fixed for all episodes.\n\nBefore each episode, PSRL samples an MDP from its posterior over MDPs, computes the optimal policy for the sampled MDP, and executes it on the fixed test MDP. Its posterior is updated after each episode, concentrating the distribution around the true latent parameters. During this exploration period, it can perform arbitrarily poorly (see Section 6.1 of the appendix for a concrete example). 
Furthermore, sampling a latent MDP from the posterior determinizes the parameters; as a result, there is no uncertainty in the sampled MDP, and the resulting optimal policies that are executed will never take sensing actions. In this work, we have focused on Bayesian RL problems where sensing is critical to performance. BRPO, like other Bayesian RL algorithms, focuses on learning the Bayes-optimal policy during training, which can be used at test time to immediately explore and exploit in a new environment.\n\nWe have run additional experiments to compare with PSRL. To make it handle the zero-shot scenario, we made PSRL sample from the posterior at every timestep and execute the corresponding optimal expert (aware of the penalties). Since PSRL does not induce sensing, the vanilla PSRL agent achieves an average return of -124.4 \u00b1 11.3, as it suffers from a large number of penalties for reaching incorrect goals. When we run PSRL with sensing probability 0.5, it achieves 464.1 \u00b1 5.5, with a task completion rate of 94%. The task incompletion comes from the belief occasionally not collapsing to one target. In comparison, our agent achieves an average return of 465.65 \u00b1 4.35, completing 100% of the episodes, even with suboptimal experts that are unaware of the penalties.\n\n2. Why use a baseline with sensing probability 0.5? Other exploration methods should be compared.\nWe have added Appendix 6.2 to include results with a different exploration method, and duplicated those results here for convenience.\n\nThe ensemble we considered in Section 5 randomly senses with probability 0.5. A more effective sensing ensemble baseline policy could be designed manually and used as the initial policy for the BRPO agent to improve on. Designing such a policy can be challenging: it requires either task-specific knowledge or solving an approximate Bayesian RL problem. We bypass these requirements by using BRPO.\n\nOn the Maze10 environment, we have found via offline tuning that a more effective ensemble baseline agent senses only for the first 150 of 750 timesteps. Its average return is 416.3 \u00b1 9.4, which outperforms the original ensemble baseline average return of 409.5 \u00b1 10.8. However, this is still lower than the BRPO agent that starts with that original ensemble, which accumulated an average return of 465.7 \u00b1 4.7. This trained BRPO agent also achieves a task completion rate of 100%, which is better than the 96.3% completed by the improved ensemble baseline. The performance gap comes from the suboptimality of the ensemble recommendation, as the experts are unaware of the penalty for reaching incorrect goals.\n\n3. No theoretical support.\nWe show that the BRPO agent operates on its own MDP, which we refer to as the Residual-MDP. Since this is an MDP, BRPO enjoys the theoretical guarantees provided by its underlying batch policy optimization algorithm. For example, if it runs TRPO, it inherits the same monotonic improvement guarantee.\n\nWe have updated Section 4.2 to clarify this.\n\n[1] Osband, Ian, Daniel Russo, and Benjamin Van Roy. \"(More) efficient reinforcement learning via posterior sampling.\" Advances in Neural Information Processing Systems. 
2013."}], "comment_replyto": ["B1grSREtDH", "BygdF6XLqH", "HJxPLs_qcr", "BylbDBJaYH"], "comment_url": ["https://openreview.net/forum?id=B1grSREtDH&noteId=BJl0Yoo2jr", "https://openreview.net/forum?id=B1grSREtDH&noteId=S1xDuso3or", "https://openreview.net/forum?id=B1grSREtDH&noteId=B1l0Vos3iS", "https://openreview.net/forum?id=B1grSREtDH&noteId=r1xvIqjnjH"], "meta_review_cdate": 1576798714688, "meta_review_tcdate": 1576798714688, "meta_review_tmdate": 1576800921832, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper constitutes interesting progress on an important topic; the reviewers identify certain improvements and directions for future work (see in particular the updates from AnonReviewer1), and I urge the authors to continue to develop refinements and extensions.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1grSREtDH&noteId=HINRSzHIyS"], "decision": "Reject"}
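
The thread above repeatedly discusses BRPO's core mechanism: fuse the recommendations of the clairvoyant experts under the current belief, then add a learned, belief-conditioned residual correction (the a_r + a_e decomposition Reviewer 1 asks about). The sketch below is a minimal illustration of that composition under assumptions, not the authors' implementation: the belief-weighted fusion rule, the names (ExpertEnsemble, brpo_action, residual_policy), and the untrained residual stand-in are all hypothetical; per the responses, the residual policy would in practice be trained with a batch policy optimization method such as TRPO or PPO.

```python
# Minimal sketch (not the authors' code) of a residual policy over an ensemble of experts.
# Assumption: the ensemble fuses expert actions by belief-weighted averaging; the paper only
# says the experts' advice is "fused", so this particular rule is illustrative.

import numpy as np


class ExpertEnsemble:
    """Fuses clairvoyant experts by belief-weighted averaging of their recommended actions."""

    def __init__(self, experts):
        self.experts = experts  # one policy per latent MDP hypothesis

    def recommend(self, state, belief):
        # belief: categorical distribution over latent MDP hypotheses
        actions = np.stack([expert(state) for expert in self.experts])
        return belief @ actions  # belief-weighted average recommendation


def brpo_action(state, belief, ensemble, residual_policy):
    """Compose the executed action: a = a_ensemble + a_residual."""
    a_ensemble = ensemble.recommend(state, belief)
    # The residual policy sees both the state and the belief, so it can learn
    # information-gathering (sensing) behavior that the clairvoyant experts never take.
    a_residual = residual_policy(np.concatenate([state, belief]))
    return a_ensemble + a_residual


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two toy "experts", each pulling toward a different hypothetical latent goal.
    experts = [lambda s: np.array([1.0, 0.0]), lambda s: np.array([0.0, 1.0])]
    ensemble = ExpertEnsemble(experts)
    # Untrained stand-in for the residual policy (would be a trained neural network).
    residual_policy = lambda x: 0.01 * rng.standard_normal(2)
    state, belief = np.zeros(3), np.array([0.7, 0.3])
    print(brpo_action(state, belief, ensemble, residual_policy))
```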