{"forum": "B1MhpiRqFm", "submission_url": "https://openreview.net/forum?id=B1MhpiRqFm", "submission_content": {"title": "A Convergent Variant of the Boltzmann Softmax Operator in Reinforcement Learning", "abstract": "The Boltzmann softmax operator can trade-off well between exploration and exploitation according to current estimation in an exponential weighting scheme, which is a promising way to address the exploration-exploitation dilemma in reinforcement learning. Unfortunately, the Boltzmann softmax operator is not a non-expansion, which may lead to unstable or even divergent learning behavior when used in estimating the value function. The non-expansion is a vital and widely-used sufficient condition to guarantee the convergence of value iteration. However, how to characterize the effect of such non-expansive operators in value iteration remains an open problem. In this paper, we propose a new technique to analyze the error bound of value iteration with the the Boltzmann softmax operator. We then propose the dynamic Boltzmann softmax(DBS) operator to enable the convergence to the optimal value function in value iteration. We also present convergence rate analysis of the algorithm.\nUsing Q-learning as an application, we show that the DBS operator can be applied in a model-free reinforcement learning algorithm. Finally, we demonstrate the effectiveness of the DBS operator in a toy problem called GridWorld and a suite of Atari games. Experimental results show that outperforms DQN substantially in benchmark games.", "keywords": ["Reinforcement Learning", "Boltzmann Softmax Operator", "Value Function Estimation"], "authorids": ["v-lip@microsoft.com", "cqp14@mails.tsinghua.edu.cn", "v-qimeng@microsoft.com", "wche@microsoft.com", "tie-yan.liu@microsoft.com"], "authors": ["Ling Pan", "Qingpeng Cai", "Qi Meng", "Wei Chen", "Tie-Yan Liu"], "pdf": "/pdf/29e7e036064114b61e1574b91a9ed0a5f28f2dd1.pdf", "paperhash": "pan|a_convergent_variant_of_the_boltzmann_softmax_operator_in_reinforcement_learning", "_bibtex": "@misc{\npan2019a,\ntitle={A Convergent Variant of the Boltzmann Softmax Operator in Reinforcement Learning},\nauthor={Ling Pan and Qingpeng Cai and Qi Meng and Wei Chen and Tie-Yan Liu},\nyear={2019},\nurl={https://openreview.net/forum?id=B1MhpiRqFm},\n}"}, "submission_cdate": 1538087875904, "submission_tcdate": 1538087875904, "submission_tmdate": 1545355385541, "submission_ddate": null, "review_id": ["Hklc614o37", "SJlRd9sYnQ", "Syer4w_d2m"], "review_url": ["https://openreview.net/forum?id=B1MhpiRqFm¬eId=Hklc614o37", "https://openreview.net/forum?id=B1MhpiRqFm¬eId=SJlRd9sYnQ", "https://openreview.net/forum?id=B1MhpiRqFm¬eId=Syer4w_d2m"], "review_cdate": [1541255106110, 1541155445989, 1541076780736], "review_tcdate": [1541255106110, 1541155445989, 1541076780736], "review_tmdate": [1543259629446, 1541533646844, 1541533646636], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1MhpiRqFm", "B1MhpiRqFm", "B1MhpiRqFm"], "review_content": [{"title": "Okay paper but relatively thin novelty", "review": "Summary: This work demonstrates that, although the Boltzmann softmax operator is not a non-expansion, a proposed dynamic Boltzmann operator (DBS) can be used in conjunction with value iteration and Q-learning to achieve convergence to V* and Q*, respectively. 
This time-varying operator replaces the traditional max operator. The authors show empirical performance gains of DBS+Q-learning over Q-learning in a gridworld and DBS+DQN over DQN on Atari games.\n\nNovelty: (1) The error bound of value iteration with the Boltzmann softmax operator and convergence & convergence rate results in this setting seem novel. (2) The novelty of the dynamic Boltzmann operator is somewhat thin, as (Singh et al. 2000) show that a dynamic weighting of the Boltzmann operator achieves convergence to the optimal value function in SARSA(0). In that work, the weighting is state-dependent, so the main algorithmic novelty in this paper is removing the dependence on state visitation for the beta parameter by making it solely dependent on time. A question for the authors: How does the proof in this work relate to / differ from the convergence proofs in (Singh et al. 2000)?\n\nClarity: In the DBS Q-learning algorithm, it is unclear under which policy actions are selected, e.g. using epsilon-greedy/epsilon-Boltzmann versus using the Boltzmann distribution applied to the Q(s, a) values. If the Boltzmann distribution is used then the algorithm that is presented is in fact expected SARSA and not Q-learning. The paper would benefit from making this clear.\n\nSoundness: (1) The proof of Theorem 4 implicitly assumes that all states are visited infinitely often, which is not necessarily true with the given algorithm (if the policy used to select actions is the Boltzmann policy). (2) The proof of Theorem 1 uses the fact that |L(Q) - max(Q)| <= log(|A|) / beta, which is not immediately clear from the result cited in MacKay (2003). (3) The paper claims in the introduction that \u201cthe non-expansive property is vital to guarantee \u2026 the convergence of the learning algorithm.\u201d This is not necessarily the case -- see Bellemare et al., Increasing the Action Gap: New Operators for Reinforcement Learning, 2016. \n\nQuality: (1) I appreciate that the authors evaluated their method on the suite of 49 Atari games. This said, the increase in median performance is relatively small, the delta being about half that of the increase due to double DQN. The improvement in mean score stems in great part from a large improvement on Atlantis.\n\nThere are also a number of experimental details that are missing. Is the only change from DQN the change in update rule, while keeping the epsilon-greedy rule? In this case, I find a disconnect between the stated goal (to trade off exploration and exploitation) and the results. Why would we expect the Boltzmann softmax to work better when combined with epsilon-greedy? If not, can you give more details e.g. how beta was annealed over time, etc.?\n\nFinally, can you briefly compare your algorithm to the temperature scheduling method described in Fox et al., Taming the Noise in Reinforcement Learning via Soft Updates, 2016?\n\nAdditional Comments:\n(1) It would be helpful to have Atari results provided in raw game scores in addition to the human-normalized scores (Figure 5). (2) The human normalized scores listed in Figure 5 for DQN are different than the ones listed in the Double DQN paper (Van Hasselt et al., 2016). (3) For the DBS-DQN algorithm, the authors set beta_t = ct^2 - how is the value of c determined? (4) Text in legends and axes of Figure 1 and Figure 2 plots is very small. 
(5) Typo: citation for MacKay - Information Theory, Inference and Learning Algorithms - author name listed twice.\n\nSimilarly, if the main contribution is DBS, it would be interesting to have a more in-depth empirical analysis of the method -- how does performance (in Atari or otherwise) vary with the temperature schedule, how exploration is affected, etc.?\n\nAfter reading the other reviews and responses, I still think the paper needs further improvement before it can be published.", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "I don't think the theoretical results represent a significant advance", "review": "The writing and organization of the paper are clear. Theorem 1 seems fine but is straightforward to anyone who has studied this topic and knows the literature. Corollary one may be technically wrong (or at least it doesn't follow from the theorem), though this can be fixed by replacing the lim with a limsup. Theorem 4 seems to be the main result all the work is leading up to, but I think this is wrong. Stronger conditions are required on the sequence \\beta_t, along the lines discussed in the paragraph on Boltzmann exploration in Section 2.2 of Singh et al 2000. The proof provided by the authors relies on a \"Lemma 2\" which I can't find in the paper. The computational results are potentially interesting but call for further scrutiny. Given the issues with the theoretical results, I think it's hard to justify accepting the paper.", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Boltzmann Weighting Done Right in Reinforcement Learning", "review": "I liked this paper overall, though I feel that the way it is pitched to the reader is misguided. The looseness with which this paper uses 'exploration-exploitation tradeoff' is worrying. This paper does not attack that tradeoff at all really, since the tradeoff in RL concerns exploitation of understood knowledge vs deep-directed exploration, rather than just annealing between the max action and the mean over all actions (which does not incorporate any notion of uncertainty). Though I do recognize that the field overall is loose in this respect, I do think this paper needs to rewrite its claims significantly. In fact it can be shown that Boltzmann exploration that incorporates a particular annealing schedule (but no notion of uncertainty) can be forced to suffer essentially linear regret even in the simple bandit case (O(T^(1-eps)) for any eps > 0) which of course means that it doesn't explore efficiently at all (see Singh 2000, Cesa-Bianchi 2017). Theorem 4 does not imply efficient exploration, since it requires very strong conditions on the alphas, and note that the same proof applies to vanilla Q-learning, which we know does not explore well.\n\nI presume the title of this paper is a homage to the recent 'Boltzmann Exploration Done Right' paper, however, though the paper is cited, it is not discussed at all. That paper proved a strong regret bound for Boltzmann-like exploration in the bandit case, which this paper actually does not for the RL case, so in some sense the homage is misplaced. 
Another recent paper that actually does prove a regret bound for a Boltzmann policy for RL is 'Variational Bayesian Reinforcement Learning with Regret Bounds', which also anneals the temperature; this should be mentioned.\n\nAll this is not to say that the paper is without merit, just that the main claims about exploration are not valid and consequently it needs to be repositioned. If the authors do that then I can revise my review.\n\nAlgorithm 2 has two typos related to s' and a'.", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["HklTrpKQlN", "BJlC7pKmxN", "ByxIovASCX", "rkx-AhYRaX", "BylNDqY0T7", "Skx9B9YRa7", "r1xsbcFA6Q"], "comment_cdate": [1544949060928, 1544949030342, 1543002013684, 1542524104912, 1542523483803, 1542523457711, 1542523394613], "comment_tcdate": [1544949060928, 1544949030342, 1543002013684, 1542524104912, 1542523483803, 1542523457711, 1542523394613], "comment_tmdate": [1544949060928, 1544949030342, 1543002013684, 1542524104912, 1542523483803, 1542523457711, 1542523394613], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper839/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper839/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper839/AnonReviewer1", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper839/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper839/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper839/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper839/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response", "comment": "Theorem 4 applies to any action selection policy that guarantees infinite visitation of states and actions, and epsilon-greedy is an example policy that satisfies the requirement. Please note that a common choice for such a policy is epsilon-greedy (e.g. the DQN algorithm). Although epsilon varies, it decays from 1.0 to 0.1 and remains 0.1 thereafter. As epsilon is not 0, it still guarantees infinite visits for states."}, {"title": "Response", "comment": "Thanks a lot for your reply. We have actually updated the paper accordingly; however, it seems that due to system errors our paper is not the latest version. We are sorry about this.\n\nA1: \nWe derive the bound of |L_b(X) - max(X)| \u2264 log(n)/b by showing that max(X) \u2264 L_b(X) \u2264 max(X) + log(n)/b as follows:\nAs b*max(X) = log(e^(max(b*X))) \u2264 log(\u2211e^(b*x_i)), we have b*max(X) \u2264 b*L_b(X)\nAs b*max(X) + log(n) = log(e^(max(b*X))) + log(n) = log(max(e^(b*X))) + log(n) = log(n*max(e^(b*X))) \u2265 log(\u2211e^(b*x_i)), we have b*L_b(X) \u2264 b*max(X) + log(n)\nCombining these inequalities, we have max(X) \u2264 L_b(X) \u2264 max(X) + log(n)/b.\n\nA2: \nPlease refer to Exercise 31.1 (a) of page 402 in MacKay\u2019s book (https://www.ece.uvic.ca/~agullive/Mackay.pdf). To the best of our knowledge, there is no proof of the bound of | L_b(X) - boltz_b(X) |, and we give a proposition here:\n\nL_b(X) - boltz_b(X) = 1/b \u2211-p_i log(p_i), where p_i is the weight of the Boltzmann distribution, i.e. p_i = e^(b*x_i)/\u2211e^(b*x_j). 
The proof is as follows:\n1/b \u2211-p_i log(p_i) = 1/b \u2211 ( -e^(b*x_i)/\u2211e^(b*x_j) ) * log( e^(b*x_i)/\u2211e^(b*x_j) )\n = 1/b \u2211 ( -e^(b*x_i)/\u2211e^(b*x_j) ) * ( b*x_i - log( \u2211e^(b*x_j) ) )\n = -\u2211 ( ( e^(b*x_i) * x_i ) / \u2211e^(b*x_j) ) + 1/b * log( \u2211e^(b*x_j) )\n = -boltz_b(X) + L_b(X)\n\nAs L_b(X) \u2265 boltz_b(X), we have | L_b(X) - boltz_b(X) | = 1/b \u2211-p_i log(p_i), where the right hand side is equal to the entropy of the Boltzmann distribution. The maximum of the right hand side is achieved when p_i=1/n, and equals log(n)/b.\nThus, we have | L_b(X) - boltz_b(X) | \u2264 log(n)/b."}, {"title": "response to clarifications", "comment": "Reviewer 1 is right that corollary 1 is ok as is.\n\nWhere in Section 4.2 does it say that actions are selected to be epsilon-greedy? If that is the case, with fixed epsilon, Theorem 4 will be correct. But I don't see where that is assumed. Further, if that is assumed, it's a poor choice of exploration scheme.\n\nI still can't verify the proof of Theorem 4.\n\n"}, {"title": "Summary of the updated version", "comment": "We thank the reviewers for their careful reading and thoughtful reviews. We have updated the submission accordingly, and the main changes in the updated version of the paper include:\n\n+ we elaborate more about the exploration-exploitation dilemma in value function optimization\n+ we add empirical analysis of the exploration-exploitation dilemma\n+ we compare with G-learning in the GridWorld\n+ we discuss more related papers\n+ we refine experimental results of Atari\n+ we elaborate the details for the proof of Theorem 1"}, {"title": "To Reviewer2", "comment": "Thank you very much for the thoughtful reviews, especially for the exploration-exploitation trade-off.\n\nIn this paper, we aim to make the Boltzmann softmax operator converge from the view of the trade-off between exploration and exploitation in value function optimization, instead of the traditional understanding in the action selection process. To be specific, in stochastic environments, the max operator updates the value estimator in a \u2018hard\u2019 way by greedily summarizing action-value functions according to current estimation. However, this may not be accurate due to noise in the environment. Even in deterministic environments, this may not be correct either. This is because the estimate for the value is not correct in the early phase of the learning process. We elaborate on this and distinguish it from the exploration-exploitation trade-off in the updated version in Section 2.2 and Section 5.1.\n\nConsidering that the title could be misleading, we have changed it accordingly.\n\nThank you for pointing out the reference paper. We cite and discuss the paper in the updated version in Section 6 (Related Work)."}, {"title": "Clarification to Reviewer1", "comment": "Thank you for the comments. We are afraid that you have some misunderstandings of our work.\n\nQ1: Theorem 1 is straightforward.\nA1: The effect of operators which are not non-expansions when applied in value iteration is an open problem and worth studying (Algorithms for Sequential Decision Making, Littman, 1996). Although error bounds of value iteration with the traditional max operator are well-established, there are no such results for the Boltzmann softmax operator, which violates the property of non-expansion. \n\nIn Theorem 1, we propose a novel analysis to characterize the error bound of the Boltzmann operator when applied in value iteration. 
Please note that this is the first time that such an analysis has been presented, and it is of vital importance, as value iteration is the basis for RL algorithms.\n\nQ2: Corollary 1 may be technically wrong.\nA2: Please note that ||\u00b7||_{\\infty} denotes the L-\\infty norm, and ||V_0 - V^*||_{\\infty}, \\log{|A|}, \\beta, and \\gamma are all constants which will not change by taking the limit of t. Corollary 1 is derived by taking the limit of t on both sides of Inequality (6) in Theorem 1. \n\nQ3: Theorem 4 may be wrong. Stronger conditions are required.\nA3: Theorem 4 is correct. In our DBS Q-learning algorithm, the action selection policy is epsilon-greedy. Thus, states will be visited infinitely often. In addition, different from (Singh et al., 2000), where they study an on-policy reinforcement learning algorithm (SARSA), \\beta is state-independent here and thus is more flexible. Please also note that the main result of the paper is the characterization of the (dynamic) Boltzmann softmax operator in value iteration (Theorem 1, Theorem 2, and Theorem 3). We then apply the DBS operator in a well-known off-policy reinforcement learning algorithm, i.e. Q-learning, and Theorem 4 is to guarantee the convergence of the resulting DBS Q-learning algorithm. \n\nQ4: Cannot find Lemma 2.\nA4: Lemma 2 refers to the stochastic approximation lemma (Lemma 1) in Section 3.1 of (Singh et al., 2000)."}, {"title": "To Reviewer3", "comment": "Thank you for the comments. Please find our responses below, especially for the novelty of the work.\n\nQ1: The novelty of the DBS operator.\nA1: First of all, thank you for viewing our analysis of DBS as novel. As we mentioned in the paper and showed by the corresponding title, we mainly aim to enable the convergence of the widely-used Boltzmann operator by a better exploration-exploitation trade-off, which is indispensable for reinforcement learning. As far as we know, this is the first variant of the Boltzmann operator with a good convergence rate.\n\nAlthough a state-dependent weighting of the Boltzmann operator is proposed in (Singh et al. 2000), our DBS operator is state-independent and can scale to high-dimensional state spaces, which is crucial for RL algorithms. Furthermore, their operator is for an on-policy RL algorithm, i.e. SARSA, while our DBS is for value iteration (a basic algorithm to solve the MDP) and Q-learning (a more popular off-policy RL algorithm). Therefore, our Q-learning algorithm with DBS is novel.\n\nDue to the difference between our algorithms and that in (Singh et al. 2000), we develop new techniques to prove the convergence. Specifically, for value iteration, we propose a novel analysis to characterize the error bound of value iteration with the Boltzmann operator, prove the convergence and present convergence rate analysis; for Q-learning, we leverage the stochastic approximation lemma (SA Lemma) presented in (Singh et al. 2000), which is an extension of the classic stochastic approximation theorem proven in (Jaakkola et al. 1994), to relate the process to the well-defined stochastic process in the SA Lemma, and then we quantify the additional term using similar techniques as in our Theorem 1. Our results on value iteration have little relation with (Singh et al. 2000) and are mainly based on our own analysis (Proposition 1, Theorem 1, Theorem 2, and Theorem 3).\n\nQ2: What is the action selection policy? The states should be visited infinitely.\nA2: In our DBS Q-learning algorithm, the action selection policy is epsilon-greedy. 
Thus, states will be visited infinitely often. We make this clearer in the updated version.\n\nPlease note that the exploration-exploitation dilemma here is related to value function optimization (Asadi et al. 2017), rather than the traditional view of exploring the environment and exploiting the action during the action selection process. In stochastic environments, the max operator updates the value estimator in a \u2018hard\u2019 way by greedily summarizing action-value functions according to current estimation. However, this may not be accurate due to noise in the environment. Even in deterministic environments, this may not be accurate either. This is because the estimate for the value is not correct in the early stage of the learning process. We elaborate on the effect of exploration and add an empirical study in the updated version; please refer to Section 5.1.\n\nQ3: |L(Q) - max(Q)| <= log(|A|) / beta is not immediately clear.\nA3: We give more details of the proof in the updated version; please refer to Appendix B.\n\nQ4: Non-expansion is not necessary for convergence.\nA4: Yes, non-expansion is an important and widely-used sufficient condition to guarantee the convergence of the learning problem (Littman 1996, Asadi et al. 2017). In this understanding, we say non-expansion is \u2018vital\u2019 for convergence. (Bellemare et al. 2016) proposed an alternative sufficient condition different from the non-expansion property. However, the condition is still not enough to cover common operators violating non-expansion such as the Boltzmann softmax operator. \n\nQ5: Detailed comments for the experiment. \nA5: Here are our quick responses.\n1) We compare with G-learning and analyze the effect in the updated version (Section 5.1).\n2) We change the scores to raw game scores in the updated version (Appendix H). \n3) Please note that the scores we list are exactly the same as those in \u2018Dueling Network Architectures for Deep Reinforcement Learning\u2019 and \u2018Rainbow\u2019, where the (original) scores for DQN are raw scores. \n4) In our experiments, c is in [0, 1], and we have tuned the value of c in some of the games. 
This is because different games have different features and should have different values of c.\n5) We have redrawn the plots to make them more reader-friendly and corrected some typos in the updated version."}], "comment_replyto": ["ByxIovASCX", "r1lxzs0hyE", "Skx9B9YRa7", "B1MhpiRqFm", "Syer4w_d2m", "SJlRd9sYnQ", "Hklc614o37"], "comment_url": ["https://openreview.net/forum?id=B1MhpiRqFm&noteId=HklTrpKQlN", "https://openreview.net/forum?id=B1MhpiRqFm&noteId=BJlC7pKmxN", "https://openreview.net/forum?id=B1MhpiRqFm&noteId=ByxIovASCX", "https://openreview.net/forum?id=B1MhpiRqFm&noteId=rkx-AhYRaX", "https://openreview.net/forum?id=B1MhpiRqFm&noteId=BylNDqY0T7", "https://openreview.net/forum?id=B1MhpiRqFm&noteId=Skx9B9YRa7", "https://openreview.net/forum?id=B1MhpiRqFm&noteId=r1xsbcFA6Q"], "meta_review_cdate": 1544790018840, "meta_review_tcdate": 1544790018840, "meta_review_tmdate": 1545354524671, "meta_review_ddate ": null, "meta_review_title": "Meta-review", "meta_review_metareview": "Pros:\n- a method that obtains convergence results using a time-dependent (not fixed or state-dependent) softmax temperature.\n\nCons:\n- theoretical contribution is not very novel\n- some theoretical results are dubious\n- mismatch of Boltzmann updates and epsilon-greedy exploration\n- the authors seem to have intended to upload a revised version of the paper, but unfortunately, they changed only title and abstract, not the pdf -- and consequently the reviewers did not change their scores.\n\nThe reviewers agree that the paper should be rejected in the submitted form.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper839/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1MhpiRqFm&noteId=HygoWg7-x4"], "decision": "Reject"}
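The bounds discussed in the author responses above, max(X) <= L_b(X) <= max(X) + log(n)/b and 0 <= L_b(X) - boltz_b(X) <= log(n)/b, are easy to check numerically. Below is a minimal, hedged sketch (not from the paper or the thread; function names, constants, and the c = 1.0 schedule are illustrative assumptions only) that verifies the inequalities and shows why a growing inverse temperature beta_t = c*t^2 pushes the Boltzmann value toward the max value.

```python
import numpy as np

def log_sum_exp(x, beta):
    """L_beta(x) = (1/beta) * log(sum_i exp(beta * x_i)), computed stably."""
    z = beta * np.asarray(x)
    m = np.max(z)
    return (m + np.log(np.sum(np.exp(z - m)))) / beta

def boltzmann(x, beta):
    """boltz_beta(x) = sum_i x_i * exp(beta * x_i) / sum_j exp(beta * x_j), computed stably."""
    x = np.asarray(x)
    z = beta * x
    w = np.exp(z - np.max(z))  # softmax weights (shifted for numerical stability)
    return np.sum(x * w) / np.sum(w)

rng = np.random.default_rng(0)
for _ in range(1000):
    x = rng.normal(size=5)          # random action values (illustrative)
    beta = rng.uniform(0.1, 100.0)  # arbitrary inverse temperature
    gap = np.log(len(x)) / beta     # log(n) / beta
    L, B, M = log_sum_exp(x, beta), boltzmann(x, beta), np.max(x)
    assert M - 1e-9 <= L <= M + gap + 1e-9  # max(X) <= L_beta(X) <= max(X) + log(n)/beta
    assert -1e-9 <= L - B <= gap + 1e-9     # 0 <= L_beta(X) - boltz_beta(X) <= log(n)/beta

# With a growing schedule beta_t = c * t^2 (c = 1.0 chosen arbitrarily here), the
# Boltzmann value approaches max(x), the intuition behind the convergence results.
x = np.array([1.0, 0.5, 0.0])
for t in [1, 3, 10, 30]:
    print(t, boltzmann(x, 1.0 * t ** 2))  # tends to max(x) = 1.0 as t grows
```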