{"forum": "B1eO9oA5Km", "submission_url": "https://openreview.net/forum?id=B1eO9oA5Km", "submission_content": {"title": "A Guider Network for Multi-Dual Learning", "abstract": "A large amount of parallel data is needed to train a strong neural machine translation (NMT) system. This is a major challenge for low-resource languages. Building on recent work on unsupervised and semi-supervised methods, we propose a multi-dual learning framework to improve the performance of NMT by using an almost infinite amount of available monolingual data and some parallel data of other languages. Since our framework involves multiple languages and components, we further propose a timing optimization method that uses reinforcement learning (RL) to optimally schedule the different components in order to avoid imbalanced training. Experimental results demonstrate the validity of our model, and confirm its superiority to existing dual learning methods.", "keywords": [], "authorids": ["wenpeng.hu@pku.edu.cn", "tttzw@pku.edu.cn", "zhanxing.zhu@pku.edu.cn", "liub@uic.edu", "jokerlin@pku.edu.cn", "jwma@math.pku.edu.cn", "zhaody@pku.edu.cn", "ruiyan@pku.edu.cn"], "authors": ["Wenpeng Hu", "Zhengwei Tao", "Zhanxing Zhu", "Bing Liu", "Zhou Lin", "Jinwen Ma", "Dongyan Zhao", "Rui Yan"], "pdf": "/pdf/650795e676ea63ab875bbd9b7f8a8d1868916338.pdf", "paperhash": "hu|a_guider_network_for_multidual_learning", "_bibtex": "@misc{\nhu2019a,\ntitle={A Guider Network for Multi-Dual Learning},\nauthor={Wenpeng Hu and Zhengwei Tao and Zhanxing Zhu and Bing Liu and Zhou Lin and Jinwen Ma and Dongyan Zhao and Rui Yan},\nyear={2019},\nurl={https://openreview.net/forum?id=B1eO9oA5Km},\n}"}, "submission_cdate": 1538087823942, "submission_tcdate": 1538087823942, "submission_tmdate": 1545355419964, "submission_ddate": null, "review_id": ["r1lJnkygam", "rJeSu1F5hQ", "HkxHWLGthm"], "review_url": ["https://openreview.net/forum?id=B1eO9oA5Km¬eId=r1lJnkygam", "https://openreview.net/forum?id=B1eO9oA5Km¬eId=rJeSu1F5hQ", "https://openreview.net/forum?id=B1eO9oA5Km¬eId=HkxHWLGthm"], "review_cdate": [1541562278700, 1541209965467, 1541117436892], "review_tcdate": [1541562278700, 1541209965467, 1541117436892], "review_tmdate": [1541562278700, 1541533901119, 1541533900913], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1eO9oA5Km", "B1eO9oA5Km", "B1eO9oA5Km"], "review_content": [{"title": "Review", "review": "The paper proposes a guider network which utilized unlabeled monolingual data as an augmentation to the usual dual learning framework to improve NMT performance. Furthermore, a deep Q-learning style scheduling algorithm is proposed to optimize the overall architecture.\n\nThe writing of the paper needs a major improvement. As a reviewer, I had a very hard time trying to understand the paper, while the proposed idea turns out to be conceptually simple. A few points regarding the writing:\n1) Figure 1 is impossible to understand, especially that zero explanation is given in the caption.\n2) Too many unnecessary definitions and acronyms such as ISE, CISE, GLF, GDL, AE etc. Essentially, only the notion of bi-direction attention entropy is relevant for the purpose of the paper. 
More effort should have been dedicated to explaining the idea of bi-direction attention entropy instead of the irrelevant terminology.\n3) No objective function or algorithm description is ever shown.\n\nTechnically, I am skeptical about the use of deep Q-learning as a scheduling algorithm. Usually, a Q-net requires training before it can be deployed in an evaluation environment. However, here the paper seems to suggest that the Q-net is trained and deployed together with the NMT architecture in an online fashion. Why one would use a Q-net in an online setting is beyond my understanding. Ideally, one would choose a truly online algorithm (e.g. UCB for stochastic bandits) in such scenarios, which I believe would work even better than deep Q-learning in practice.", "rating": "4: Ok but not good enough - rejection", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "Review", "review": "This paper makes two contributions: (1) it proposes a new framework for semi-supervised training of NMT by introducing constraints on the encoder and decoder states; (2) it applies Q-learning to schedule the updates of different components. I personally believe that finding the relation between encoder and decoder hidden states is a very good direction for utilizing paired data. Model scheduling is also an important problem for multilingual NMT.\n\nHowever, this paper is very hard to follow.\n1. It has lots of acronyms, e.g. in Section 3.1. It also tends to over-complicate the algorithm, and I don't think these acronyms need to be defined.\n2. It tries to link the method to information theory, but most of the study is just empirical (which is fine, but avoiding this link could simplify the writing and make it more readable), e.g. \" According to information theory and the attention mechanism\n(Bahdanau et al., 2014), it is clear that we..\" I agree with the intuition, but how can it be \"if and only if\"?\n3. It is said that Figure 2 shows BDE is better aligned with BLEU; is there a quantitative measure, e.g. correlation? Or did I miss something?\n4. What is the NMT network structure?\n5. I have trouble understanding the sentence \"In this process, one monolingual data Si of language i would first be translated to hidden states (ISD) of deci through NMTi , then ISDi is used to reconstruct...\" in the \"Guided Dual Learning\" part.\n\nThe experimental results look good, especially for the low-resource case. But the discussion of similarities to, and comparison with, previous methods could be improved; at the very least there is a simple baseline that uses pre-training. Adding some published SOTA results to the table would also help readers understand how strong the method is.\n\nIn summary, the paper provides some interesting perspectives. However, the algorithm part is hard to follow and relevant baselines are missing.", "rating": "5: Marginally below acceptance threshold", "confidence": "2: The reviewer is willing to defend the evaluation, but it is quite likely that the reviewer did not understand central parts of the paper"}, {"title": "Not easy to follow; experiments not convincing", "review": "[Summary]\nThis paper proposes an extension of the dual learning framework, with a guider network and multiple languages included: (1) Each language $i$ has a guider network $GN_i$ that can be used to reconstruct the source sentence from either the output of the encoder or the output of the decoder. (2) Multiple languages are used in this framework, where each language also has a $GN_i$ for guiding the training according to the reconstruction error.
The authors work on the MultiUN dataset to verify their algorithms.\n \n[Clarity]\nThis work is not easy to follow. My suggestions for revising the paper are as follows:\n(1) Please use the \\begin{equation}\u2026\\end{equation} environment to clearly describe your framework and training objectives, with each notation, function and hyper-parameter clearly defined. Actually, I cannot find the training objective function in this paper.\nBesides, the current paper contains many undefined notations and typos. For example: (1) in Section 3.1, first paragraph, what is the $n$? Then in Eqn. (1), what are $N$ and $M$? Also, it is very confusing to use subscripts $i$ and $j$ to distinguish the hidden states from the encoder and decoder. (2) What is the mathematical definition of $ISE_i$? (3) On page 5, 3rd line, \u201cthen ISD_i is used to reconstruct Si = GNi(ISE_i , \\theta)\u2026\u201d Should the ISE_i be ISD_i?\n(2) Please use \\begin{algorithm}\u2026\\end{algorithm} to tell the readers how your framework works.\n \n[Details]\n1. The first question is \u201cwhy this problem\u201d. In the 3rd paragraph of page 1, you mentioned that \u201cHowever, the best direction to update parameters heavily relies on the quality of sampled translations ... which may be far from real translations Y due to inaccurate translations existing in the sampled ones\u2026\u2026\u201d But in practice, dual learning as well as back-translation [ref1] works well for many language pairs. In particular, dual learning and back-translation work for unsupervised NMT [ref2], where no labeled data is available. Therefore, I am not fully convinced by this claim, and hence by the motivation of this work. What\u2019s more, this paper does not work on a standard WMT dataset, while previous dual learning and back-translation papers work on that most commonly used dataset. Therefore, the comparison between the guider network and dual learning is not fair.\n2. I am not sure how the BDE in Eqn. (1) is related to NMT translation quality. Is there any reference or theoretical/empirical proof?\n3. It is hard to reproduce such a complex NMT system with NMT, GN and an RL scheduler. Is there any open-source code or any simpler solution?\n4. Do you use a single-layer LSTM or a deep LSTM? Transformer [ref3] is the state-of-the-art NMT system. Why don\u2019t you choose this system? Also, you do not work on a WMT dataset to verify your GLF-2L (Table 1). Therefore, I cannot judge whether the proposed algorithm is effective compared to current NMT algorithms. I am not convinced by the experimental results.\n5. The connection/difference between this work and (Tu et al., 2017) should be discussed clearly, and you should implement (Tu et al., 2017) as your baseline. Besides, for the 3-language setting, no multilingual baseline is implemented.\n \n[Pros & Cons]\n(+) This paper tries to extend dual learning from the word level to the hidden-state level;\n(+) Multiple languages are involved in this framework;\n(-) Experiments are not convincing; the models are weak; many important baselines are missing; no results on the widely used WMT datasets;\n(-) The paper is not easy to follow (see the [Clarity] part for details);\n(-) The training process is a little complex and not easy to implement;\n \nReferences\n[ref1] Edunov, Sergey, et al. \"Understanding back-translation at scale.\" EMNLP 2018.\n[ref2] Lample, Guillaume, et al. \"Phrase-Based & Neural Unsupervised Machine Translation.\" EMNLP 2018.\n[ref3] Vaswani, Ashish, et al.
\"Attention is all you need.\" Advances in Neural Information Processing Systems. 2017.\n ", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1543787305332, "meta_review_tcdate": 1543787305332, "meta_review_tmdate": 1545354495189, "meta_review_ddate ": null, "meta_review_title": "Reject", "meta_review_metareview": "All reviewers agree in their assessment that this paper is not ready for acceptance into ICLR and the authors did not respond during the rebuttal phase.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper547/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1eO9oA5Km¬eId=SkxZ47AWy4"], "decision": "Reject"}