{"forum": "B1gX8JrYPr", "submission_url": "https://openreview.net/forum?id=B1gX8JrYPr", "submission_content": {"authorids": ["bwkevintan@gmail.com", "zhitinghu@gmail.com", "yangtze2301@gmail.com", "rsalakhu@cs.cmu.edu", "epxing@cs.cmu.edu"], "title": "Connecting the Dots Between MLE and RL for Sequence Prediction", "authors": ["Bowen Tan", "Zhiting Hu", "Zichao Yang", "Ruslan Salakhutdinov", "Eric Xing"], "pdf": "/pdf/56c20197d5210fad94a8e875149cc7e792c9a359.pdf", "TL;DR": "An entropy regularized policy optimization formalism subsumes a set of sequence prediction learning algorithms. A new interpolation algorithm with improved results on text generation and game imitation learning.", "abstract": "Sequence prediction models can be learned from example sequences with a variety of training algorithms. Maximum likelihood learning is simple and efficient, yet can suffer from compounding error at test time. \nReinforcement learning such as policy gradient addresses the issue but can have prohibitively poor exploration efficiency. A rich set of other algorithms, such as data noising, RAML, and softmax policy gradient, have also been developed from different perspectives. \nIn this paper, we present a formalism of entropy regularized policy optimization, and show that the apparently distinct algorithms, including MLE, can be reformulated as special instances of the formulation. The difference between them is characterized by the reward function and two weight hyperparameters.\nThe unifying interpretation enables us to systematically compare the algorithms side-by-side, and gain new insights into the trade-offs of the algorithm design.\nThe new perspective also leads to an improved approach that dynamically interpolates among the family of algorithms, and learns the model in a scheduled way. 
Experiments on machine translation, text summarization, and game imitation learning demonstrate the superiority of the proposed approach.", "code": "https://drive.google.com/file/d/13diaxzuxTSB-DReqEhkYPMmZ4BQ6vsEo/view", "keywords": ["Sequence generation", "sequence prediction", "reinforcement learning"], "paperhash": "tan|connecting_the_dots_between_mle_and_rl_for_sequence_prediction", "original_pdf": "/attachment/56c20197d5210fad94a8e875149cc7e792c9a359.pdf", "_bibtex": "@misc{\ntan2020connecting,\ntitle={Connecting the Dots Between {\\{}MLE{\\}} and {\\{}RL{\\}} for Sequence Prediction},\nauthor={Bowen Tan and Zhiting Hu and Zichao Yang and Ruslan Salakhutdinov and Eric Xing},\nyear={2020},\nurl={https://openreview.net/forum?id=B1gX8JrYPr}\n}"}, "submission_cdate": 1569439563088, "submission_tcdate": 1569439563088, "submission_tmdate": 1577168290301, "submission_ddate": null, "review_id": ["S1xxM8j8Fr", "HylY7KLAKB", "S1xROF9CFr"], "review_url": ["https://openreview.net/forum?id=B1gX8JrYPr&noteId=S1xxM8j8Fr", "https://openreview.net/forum?id=B1gX8JrYPr&noteId=HylY7KLAKB", "https://openreview.net/forum?id=B1gX8JrYPr&noteId=S1xROF9CFr"], "review_cdate": [1571366407931, 1571871008522, 1571887478378], "review_tcdate": [1571366407931, 1571871008522, 1571887478378], "review_tmdate": [1572972431808, 1572972431770, 1572972431726], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper1723/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper1723/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper1723/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1gX8JrYPr", "B1gX8JrYPr", "B1gX8JrYPr"], "review_content": [{"rating": "3: Weak Reject", "experience_assessment": "I have published one or two papers in this area.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #1723", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "review": "This paper claims to propose a general entropy regularized policy optimization paradigm. MLE and RL are special cases of this training paradigm. The paper is well written, and the experimental results are convincing enough. \nHowever, there are still some minor problems in the paper. The optimization framework ERPO (shown in Equation 1) consists of three parts: a cross-entropy term (Shannon entropy), a $p,q$ KL divergence term, and a reinforcement learning reward loss term. From the framework point of view, it does not, as the authors claim, present a general optimization framework encompassing various optimization algorithms. Instead, it is just a combined loss obtained through weight control and the selection of corresponding functions. It may not truly unify various types of optimization algorithms in the general case, let alone constitute a general optimization algorithm framework. \n\nFor the interpolation algorithm (which I regard as the true technical contribution of this paper), the authors use an annealing mechanism to apply different weights and functions at different stages of training. The essence is that after MLE pre-training, different optimization algorithms are used in different stages, and this should be the focus of the article. 
The annealing settings used are only briefly introduced in the appendix. Without more comparison experiments, we cannot clearly determine the conditions under which the annealing algorithm is effective or ineffective. \n\nRegarding the title of connecting the dots between MLE and RL, this paper does not do so; MLE and RL are only used collaboratively, which has also been done in previous work.\n\nTypo:\nPage 6 Paragraph \u201cOther Algorithms & Discussions\u201d: We We show in the appendix\u2026 -> We show in the appendix\u2026\n"}, {"experience_assessment": "I have published in this field for several years.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #1", "review_assessment:_checking_correctness_of_derivations_and_theory": "I carefully checked the derivations and theory.", "review": "This submission belongs to the field of sequence modelling. In particular, this submission presents a unified view on a range of training algorithms including maximum likelihood (ML) and reinforcement learning (RL). The unified view presented is, I believe, interesting and could be of interest to a large community. Unfortunately, this submission has two issues: 1) presentation and 2) experimental validation. \n\nI find it peculiar that an objective function called ERPO, which features ML and variants of RL as special cases, is proposed by assertion. I find it more likely that it came out by analysing ML, the variants of RL and other commonly used objective functions, noticing similarities between them and then formulating a function that would render all of the above as special cases. Had the order been different, this submission would have been much more analytical and interesting to read. \n\nI find the experimental results a bit limited and not entirely conclusive, as it seems that MT provides the only strong experimental evidence. I find it quite hard to interpret the significance of the difference, for instance, between 36.72 and 36.59 in ROUGE-1. "}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper presents a formalism of entropy regularized policy optimization. They also show that various policy gradient algorithms can be reformulated as special instances of the presented novel formalism. The only difference between them is the reward function and two weight hyperparameters. Further, the paper proposes an interpolation algorithm, which, as training proceeds, gradually expands the exploration space by annealing the reward function and the weight hyperparameters. Experiments on text generation tasks and game imitation learning show superior performance over previous methods. \n\nOverall, the paper is well written and the derivations and intuitions sound good. I appreciate the overall effort of the paper and the thorough experiments to validate the proposed interpolation algorithm, although the results do not seem significant for text summarization. 
Hence, I suggest a weak accept for this paper. \n\nArguments:\n1) From Table 1 and Table 2, the proposed approach has the lowest variance on machine translation and quite the opposite on text summarization (i.e., it has high variance). Any thoughts on this? This also suggests conducting experiments ablating the variance during training for various policy gradient approaches, including the proposed one. \n\n2) Results do not seem significant on the summarization task. Any thoughts on choosing this particular task? Why not try image captioning, where most of these policy gradient approaches have been applied? \n"}], "comment_id": ["HJgYDw2osr", "rJgImv3sor", "Bkev-vnsoB"], "comment_cdate": [1573795680976, 1573795613951, 1573795583326], "comment_tcdate": [1573795680976, 1573795613951, 1573795583326], "comment_tmdate": [1573795680976, 1573795644456, 1573795583326], "comment_readers": [["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper1723/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1723/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper1723/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to Official Blind Review #1723", "comment": "Thanks for the comments! We\u2019d like to clarify that this paper aims to reformulate the various algorithms and distill them into a single common formulation. The common formulation is governed by the reward function and two weight hyperparameters, and thus defines a *family* of sequence prediction algorithms. Changing the specifications of the three factors (i.e., reward and weights) leads to different specific algorithms. It\u2019s indeed novel and non-trivial to reformulate these apparently distinct algorithms and discover the common underlying formulation. We will update the presentation (as also suggested by R#1) accordingly to make this contribution clearer.\n\nThe interpolation algorithm is an immediate product of the discovered common formulation, as it\u2019s a natural idea, once we see the common formulation, to anneal the three governing factors to \u201cinterpolate\u201d between the specific algorithms in the family. We have added the discussion of the annealing settings to the main paper. It\u2019s also worth noting that the interpolation algorithm is just one of the many possible ways of taking advantage of the common formulation. For example, another natural idea would be to find (e.g., through hyperparameter optimization) the best configuration of the three governing factors in the common formulation, which is equivalent to finding the best optimization algorithm in the whole family and using it to learn sequence prediction. This shows the advantages of having the common formulation. We will make this clearer in the revised version.\n\nThis paper is the first to discuss the extensive set of algorithms (MLE, RAML, Data Noising, policy gradient, etc.) jointly and find their common denominator. The resulting interpolation algorithm can also be seen as a generalization of previous approaches, such as MIXER, that \u201cuses MLE and RL collaboratively\u201d. (As discussed in the paper, MIXER can be seen as using a restricted annealing strategy in the proposed interpolation algorithm). The generalized interpolation algorithm also outperforms MIXER.\n\nWe will fix all typos and proofread the paper. 
Thanks for pointing this out."}, {"title": "Response to Official Blind Review #1 ", "comment": "Thanks for the great suggestion of adjusting the order of presentation! We agree that the presentation can be clearer and more analytical by reaching the final formulation after visiting each individual algorithm. We will update the presentation in the revised version accordingly.\n\nBesides MT, the improvement in game imitation learning is indeed reasonably significant, especially when the number of expert demonstrations is small:\n ** On HalfCheetah-v2 with 4 demonstrations, our approach achieves *3000 higher* reward than GAIL; on Ant-v2, our approach achieves *800 higher* reward on average.\n ** The improvement level over GAIL is comparable to or stronger than that in other papers (e.g., Fig.2 of NeurIPS-2018 https://arxiv.org/pdf/1805.08336.pdf; Fig.1 of ICML-2019 https://arxiv.org/pdf/1901.09387.pdf) which proposed to improve GAIL in different aspects orthogonal to ours.\n\nThe improvement on summarization is relatively moderate, partially because the output sequences of the task are short (8.2 tokens on average), while the proposed approach is designed for generating longer sequences (e.g., in the MT dataset, the output sequences contain 18.5 tokens on average).\n"}, {"title": "Response to Official Blind Review #2", "comment": "Thanks for the valuable and encouraging comments.\n\n(1) Thanks for the great question and suggestion! The average output length in the machine translation dataset is 18.5. That is, each target sequence contains 18.5 tokens on average. In contrast, the length is 8.2 in the text summarization dataset. We speculate that these different output lengths lead to the different variances and improvements --- the proposed approach by design brings more benefits when the output sequences are longer (e.g., MT) by combating the compounding error. The approach thus achieves a relatively significant improvement and lower variance compared to the other methods on MT, while on summarization it achieves a moderate improvement and does not reduce the variance. We will add more analysis and discussions in the revised version.\n\n(2) Thanks for the suggestion! As above, the improvement on the summarization task is moderate due to the short output sequences in the dataset. We will conduct experiments on image captioning and add the results and analysis.\n"}], "comment_replyto": ["S1xxM8j8Fr", "HylY7KLAKB", "S1xROF9CFr"], "comment_url": ["https://openreview.net/forum?id=B1gX8JrYPr&noteId=HJgYDw2osr", "https://openreview.net/forum?id=B1gX8JrYPr&noteId=rJgImv3sor", "https://openreview.net/forum?id=B1gX8JrYPr&noteId=Bkev-vnsoB"], "meta_review_cdate": 1576798730787, "meta_review_tcdate": 1576798730787, "meta_review_tmdate": 1576800905705, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "The authors construct a weighted objective that subsumes many of the existing approaches for sequence prediction, such as MLE, RAML, and entropy regularized policy optimization. By dynamically tuning the weights in the objective, they show improved performance across several tasks.\n\nAlthough there were no major issues with the paper, reviewers generally felt that the technical contribution is fairly incremental and the empirical improvements are limited. 
Given the large number of high-quality submissions this year, I am recommending rejection for this submission.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1gX8JrYPr&noteId=0hzK1f5oJF"], "decision": "Reject"}
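
For reference, a minimal sketch of the ERPO objective that the record above refers to as Equation 1, reconstructed only from the descriptions in the abstract and Review #1723 (an expected reward term, a KL divergence between an auxiliary distribution q and the model policy, and a Shannon entropy term, governed by two weight hyperparameters). The notation and exact form are assumptions by the editor, not quoted from the paper.

```latex
% Editor's sketch of the ERPO objective (assumed notation, not quoted from the paper):
% q is an auxiliary distribution, p_theta the model policy, R the reward,
% and alpha, beta the two weight hyperparameters discussed in the reviews.
\documentclass{article}
\usepackage{amsmath, amssymb}
\begin{document}
\[
  \mathcal{L}(q, \theta)
  = \mathbb{E}_{q(\mathbf{y} \mid \mathbf{x})}\!\big[ R(\mathbf{y} \mid \mathbf{y}^{*}) \big]
  - \alpha \, \mathrm{KL}\!\big( q(\mathbf{y} \mid \mathbf{x}) \,\big\|\, p_{\theta}(\mathbf{y} \mid \mathbf{x}) \big)
  + \beta \, \mathrm{H}(q)
\]
Different choices of the reward $R$ and the weights $(\alpha, \beta)$ would then
recover specific algorithms in the family, e.g., a delta reward for MLE, a task
reward for RAML, and vanishing weights for standard policy gradient.
\end{document}
```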
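
Similarly, a hypothetical Python sketch of the interpolation/annealing idea discussed in Review #1723 and the authors' response: the reward and the two weights are annealed over training so that learning moves from an MLE-like configuration toward an RL-like one. All names, stage boundaries, and the linear schedule are illustrative assumptions, not the authors' code.

```python
# Hypothetical sketch of the annealing/interpolation idea discussed above:
# start from an MLE-like configuration (delta reward, entropy weight 1) and
# gradually move toward an RL-like one (task reward, small weights).
# Names, stage boundaries, and the schedule are illustrative assumptions.

def anneal_factors(step: int, total_steps: int):
    """Return (reward_type, alpha, beta) for the current training step."""
    frac = min(max(step / total_steps, 0.0), 1.0)
    if frac < 0.3:                      # stage 1: MLE-like pre-training
        return "delta", 0.0, 1.0
    elif frac < 0.7:                    # stage 2: RAML-like, task reward
        return "task", 0.0, 1.0
    else:                               # stage 3: policy-gradient-like
        # shrink both weights so learning relies more on the task reward
        shrink = 1.0 - (frac - 0.7) / 0.3
        return "task", 0.1 * shrink, max(shrink, 1e-3)

if __name__ == "__main__":
    for step in (0, 4000, 8000, 9900):
        print(step, anneal_factors(step, total_steps=10_000))
```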