{"forum": "B1l3M64KwB", "submission_url": "https://openreview.net/forum?id=B1l3M64KwB", "submission_content": {"title": "How many weights are enough : can tensor factorization learn efficient policies ?", "authors": ["Pierre H. Richemond", "Arinbjorn Kolbeinsson", "Yike Guo"], "authorids": ["phr17@ic.ac.uk", "ak711@imperial.ac.uk", "y.guo@imperial.ac.uk"], "keywords": ["reinforcement learning", "Q-learning", "tensor factorization", "low-rank approximation", "data efficiency", "second-order optimization", "scattering"], "abstract": "Deep reinforcement learning requires a heavy price in terms of sample efficiency and overparameterization in the neural networks used for function approximation. In this work, we employ tensor factorization in order to learn more compact representations for reinforcement learning policies. We show empirically that in the low-data regime, it is possible to learn online policies with 2 to 10 times less total coefficients, with little to no loss of performance. We also leverage progress in second order optimization, and use the theory of wavelet scattering to further reduce the number of learned coefficients, by foregoing learning the topmost convolutional layer filters altogether. We evaluate our results on the Atari suite against recent baseline algorithms that represent the state-of-the-art in data efficiency, and get comparable results with an order of magnitude gain in weight parsimony.", "pdf": "/pdf/d460c69d0084c2a2642e45a213ca42d7616f8254.pdf", "paperhash": "richemond|how_many_weights_are_enough_can_tensor_factorization_learn_efficient_policies_", "original_pdf": "/attachment/08c0683394bb4bb134b51991954c420abdd4c39e.pdf", "_bibtex": "@misc{\nrichemond2020how,\ntitle={How many weights are enough : can tensor factorization learn efficient policies ?},\nauthor={Pierre H. Richemond and Arinbjorn Kolbeinsson and Yike Guo},\nyear={2020},\nurl={https://openreview.net/forum?id=B1l3M64KwB}\n}"}, "submission_cdate": 1569438995568, "submission_tcdate": 1569438995568, "submission_tmdate": 1577168220474, "submission_ddate": null, "review_id": ["ryxByGUaKS", "BkxhBqBBFB", "HyeUam4aKB"], "review_url": ["https://openreview.net/forum?id=B1l3M64KwB¬eId=ryxByGUaKS", "https://openreview.net/forum?id=B1l3M64KwB¬eId=BkxhBqBBFB", "https://openreview.net/forum?id=B1l3M64KwB¬eId=HyeUam4aKB"], "review_cdate": [1571803612900, 1571277380338, 1571795902442], "review_tcdate": [1571803612900, 1571277380338, 1571795902442], "review_tmdate": [1574313199283, 1572972596719, 1572972596683], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper426/AnonReviewer2"], ["ICLR.cc/2020/Conference/Paper426/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper426/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1l3M64KwB", "B1l3M64KwB", "B1l3M64KwB"], "review_content": [{"experience_assessment": "I have read many papers in this area.", "rating": "3: Weak Reject", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "title": "Official Blind Review #2", "review": "The paper aims at parsimonious reinforcement learning by employing 3\ndifferent techniques: using tensor regression layers (Kossaifi et al.,\n2017b), wavelet scattering (Mallat 2011) and using K-FAC (Kingma & Ba,\n2014) as the optimization method. 
Learning models with fewer\nweights is important not only in reinforcement learning but also\nin all other machine learning areas. With the combination of tensor\nregression layers and K-FAC, the proposed methods give comparable\nperformance on several Atari games against 2 other methods, SimPLe\nand data-efficient Rainbow, while using 2 to 10 times fewer\ncoefficients than data-efficient Rainbow. The use of wavelet\nscattering provides an improvement on 1 out of 26 Atari games. The paper\nalso points out an interesting concentration of eigenvalues in the dense\nlayer of a deep RL agent, which provides motivation for low-rank\nrepresentations.\n\nWhile the savings in terms of coefficients is positive, the obtained\nresults are of little surprise. Tensor regression layers and K-FAC are\nused as is, without any modification, while space savings and efficiency\nhave already been reported in the corresponding references. The performance of\nwavelet scattering on the reported tasks is weak (better in only one\ngame) and the space saving is not clear. The proposed improvements\nseem to be tailored to tasks with image inputs, and hence the reported\nresults are only on Atari games (possibly with sparse and low-rank\nimages). It is not clear if the proposed techniques can be applied to\na wider set of reinforcement learning tasks\n(e.g. https://gym.openai.com/envs/#mujoco).\n\n\nIt would be interesting to see if we can apply the proposed methods to\nother, more diverse RL tasks. The performance of wavelet scattering does indeed need\nmore investigation and improvement. It would also be interesting to\ncompare the distributions of the eigenvalues of the tensor layers\nversus the dense layers in deep RL, which may provide insights into the\nachieved savings and the compression trade-off.\n\n\nI have read the authors' rebuttals. The reviews point to a number of directions\nwhere the contributions could be made more significant.\n\n", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A"}, {"rating": "1: Reject", "experience_assessment": "I have published in this field for several years.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #3", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review": "This paper suggests three different, disconnected ideas for reducing the number of parameters of deep vision models for playing ATARI and for improving training speed:\n- Tensor-regression layer to replace fully connected layers\n- Wavelet-scattering layer to replace the first convolutional layer\n- Second-order optimization (K-FAC)\nAll the ideas mentioned in this paper are existing ones (although properly attributed), so the novelty of this work is relatively low. \nThe paper mentions that this particular combination is \"novel\", but it is not clear whether there is any significant synergy between these methods and why it should be considered interesting in this particular setup.\nAlso the paper conflates sample-efficiency with parameter-efficiency. However, there is no indication that any of these methods address sample-efficiency, which would be an interesting and useful contribution.\nAlso the experiments are neither very conclusive nor easy to interpret. 
For example, in the Pong case, there is no discernible effect of the compression ratio, as the highest and lowest compression give the best (and comparable) results. Also, the results come without confidence intervals.\n\nSo, in general, I would consider this paper to be an uninspired combination of pre-existing ideas with weak and inconclusive experimental results: a clear reject."}, {"experience_assessment": "I have read many papers in this area.", "rating": "3: Weak Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #1", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "This paper investigates a list of methods to reduce the number of weights of deep RL architectures under the low-data regime. These methods include tensor regression and wavelet scattering, as well as second-order optimization (K-FAC). The experiments on the Atari games show that by using tensor regression to replace the dense layer of the neural nets and using K-FAC for the optimization, one can reduce the number of parameters by around 10 times without losing too much performance. \n\nHowever, I have some concerns about the novelty of this work and therefore I\u2019m giving this paper a weak reject. Here are the reasons:\n\nTo begin with, leveraging the tensor structure of neural nets to reduce the number of parameters while maintaining a similar level of performance, or even getting better results, has been done before, for example: Tensorizing Neural Networks (Novikov et al., 2015), Learning compact recurrent neural networks with block-term tensor decomposition (Ye et al., 2018), etc. Although the use of tensor regression might be new, the core idea is still to leverage the low-rank property of the tensor and obtain a compression of the weight tensors. Moreover, why use Tucker decomposition specifically for the tensor regression? It has been proposed that using tensor train (TT) decomposition can also give very good results (see Garipov et al., Ultimate tensorization: compressing convolutional and FC layers alike). Is it possible to investigate the use of TT decomposition for the dense layer of the deep RL architecture? Therefore the novelty of this aspect seems a bit weak to me.\n\nThe second method the authors have attempted is to swap the convolutional layer of the deep RL architecture with wavelet scattering. For one particular game (demon_attack), this approach seems to outperform every other method by a large margin. However, the experiments show that for the rest of the Atari games, there is a huge drop (45%) in performance. Therefore the significance of this approach is rather thin to me. Maybe some further investigation of the game demon_attack is needed to understand why using scattering in this game in particular gives such a huge performance boost. \n\nThirdly, as an approximation to second-order optimization, K-FAC does not really relate to the main theme of the paper, which is an investigation of potential weight-reduction methods. It is great that the authors applied this technique and it seems to give great results. 
However, as the authors pointed out, K-FAC has been widely applied in the deep RL literature, and the authors did not propose a new extension of the K-FAC method; therefore the contribution in this regard is also quite thin. \n\nLast but not least, the writing of the paper is a bit clumsy, and I was having a hard time figuring out what exactly the proposed method is. I think this paper might need some rework on the writing to describe the authors' ideas more clearly for publication. For these reasons, I\u2019m giving this paper a weak reject. \n\nSome writing comments and potential writing errors (did not affect the decision):\nPage 3, first line of \u201cTensor regression layer\u201d, the shape of the tensor X seems to be a typo. \nAlso here, the definition of \u27e8X, W\u27e9_N in the paper is to sum over the dimensions I_1\u2026I_N, so the shape of \u27e8X, W\u27e9_N should be K_1*\u2026*K_x*L_1*\u2026*L_y, without the I_N in the middle. \nAlso in this section, the authors mentioned Tucker decomposition for the tensor regression. However, the phrasing of this sentence needs a bit of rework. The usage of \u201cFor instance here\u201d gives the readers the feeling that Tucker is just one possible way of doing this decomposition, but not necessarily the actual decomposition for the reported experiments. \nIn 2.3, there is a lack of definition for \\Lambda_1 and \\Lambda_2. In addition, it would be better for general readers to add a few definitions for the terminologies in this section, for example, \u201ccircular convolution\u201d, \u201cwavelet filter banks\u201d, etc. I guess people with the corresponding background will understand it with no problem; however, I do find myself a bit lost in this section with these terminologies. \n2.4 line 6, \u201cwith A and. B smaller, architecture-dependent matrices\u201d. I think it should be \u201cwith A and B being\u2026\u201d\n3.1, line 5, \u201cThis is all the more pressing that\u2026.\u201d, I did not understand this sentence. \nOn page 6, line 3, there is a lack of definition for \u201ccompression rate\u201d. Is it the compression rate w.r.t. only the last dense layer, or w.r.t. the whole network?\nFigure 4 is lacking y-axis and x-axis labels. \n4.2, last bullet point, \u201cHowever, one must not forget that the conv layers one learns must be somehow be well adapted\u2026\u201d, I get what you are saying, but the sentence is a bit clumsy. \nTables 1 and 2, the row name \u201cAverage\u201d is lacking a definition. \n\nOverall, it is a good attempt to reduce the number of weights in the deep RL architecture, but I do think the novelty of this work is a bit thin and the three contributions were not tied together with the main theme of the paper. Therefore, I\u2019m giving this work a weak reject. 
\n"}], "comment_id": ["B1gxGB4oiS", "rkx8g5goiS", "BJx_XZmosS", "rJgi_7-sjS"], "comment_cdate": [1573762312475, 1573747182050, 1573757216341, 1573749618551], "comment_tcdate": [1573762312475, 1573747182050, 1573757216341, 1573749618551], "comment_tmdate": [1573762312475, 1573759836259, 1573757216341, 1573749618551], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper426/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper426/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper426/AnonReviewer3", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper426/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Revision uploaded", "comment": "We genuinely thank Reviewer #1 for their very thorough and helpful review.\n\nWe have uploaded a revision to the paper which we hope addresses several qualms with writing and imprecisions or typos that were noted. We are also thankful for the suggestion of a signal processing refresher which we will ultimately add to the appendix, although a self-contained 'short' and intuitive exposition of wavelet filter banks is likely to be several pages long.\n\nRegarding tensor train decomposition, this is a useful method that, while more recent, we chose not to employ. It created even more complexity in the exposition of the paper in exchange for gains that did not seem significant during our preliminary testing - for instance, the reference by Novikov et al. (2015) achieves a compression factor of 7 times, whereby our work achieves between 5 and 10 times (we have clarified as per your remark that this is with respect to the total number of weights in the network). Therefore we went for the simpler method. We will consider using tensor train in the future. But the task is made difficult due to the heavy computational load on Atari.\n\nWe would genuinely be grateful for examples of use of K-FAC in the RL literature besides ACKTR. We do agree that scattering results as they stand require more work.\n\nAs for the novelty of the work, we would like to defend it and stress that to our knowledge, no other prior work addresses tensor factorization as an approximation method compatible with current state-of-the-art deep RL algorithms. Given the complexities of optimization in RL (see the analogy of actor-critic with GANs (Pfau and Vinyals, 2016), multilevel optimization, or saddle point formulation (Mahadevan et al. , 2014) ), and the fact the MDP exploration introduces shifts in state-space distribution, it was not a foregone conclusion that making gradient descent happen in a tiny subspace from the get-go would not hinder the process of learning to the point of non-convergence. Filling this gap in the literature and quantifying results, especially on a challenging domain like Atari, therefore appeared a necessary stepping stone to us."}, {"title": "Novelty", "comment": "First off, we would like to thank Reviewer #2 for their time and their review. In particular, the suggestion to compare eigenvalues of the tensor layers versus the tensor layers is very welcome, and we will implement and explore it in further versions of the paper. As more computational power gets available, we will also accordingly look to compare MuJoCo from pixels versus their standard parameterization. 
It is, however, our belief (supported by additional experiments on toy models, and the fact that scattering on pixels wasn't critical anyway) that the tensor factorization method in the dense layer will remain broadly applicable and provide similar results.\n\nThe weight gains for the scattering method chosen there, deemed unclear, are presented in the paper in the legend of table 2 - to quote, 'The Scattering column also includes KFAC optimization and TRL 5x, resulting in around 10x total weights efficiency gains'.\n\nThe reviewer's comment that 'While the savings in terms of coefficients is positive, the obtained results are of little surprise' has however left us perplexed. One of the motivations for writing this paper was the question of whether low-rank approximation would actually be *compatible* with the RL process. In this, and to the best of our knowledge, our paper is the first work to demonstrate proof of this concept in the RL setting. Given the complexities of optimization in reinforcement learning (see the analogy of actor-critic with GANs (Pfau and Vinyals, 2016), multilevel optimization, or the saddle point formulation (Mahadevan et al., 2014)), and the fact that MDP exploration introduces shifts in state-space distribution, it was not a foregone conclusion that making gradient descent happen in a tiny subspace from the get-go would not hinder the process of learning to the point of non-convergence. This is one of the reasons why the eigenvalue exploration section is important and interesting, and this is all the more relevant in the small-data regime we concern ourselves with! *What is more*, even if the results were not surprising ex post, the quantification of tensor approximation error in previous deep learning works (accuracy loss) does not carry over to the RL setting, and again, to our knowledge, no other works have evaluated this trade-off explicitly. We therefore believe that our work opens up possibilities for light RL models that ultimately explore better."}, {"title": "Thanks for the response", "comment": "I have gone through the paper again and I still find it hard to interpret the graphs and the results. It is not clearly marked what the different algorithms are, which ones are variants of those in the paper, and which are the baselines.\n\nHowever, it does not matter that much, since the main issue of the paper is its lack of novelty, which is impossible to fix."}, {"title": "Clarification of results and novelty", "comment": "We thank Reviewer #3 for their time. We are extremely surprised that our experimental results are not clear:\n\n1. The main results in the paper are summarized in the two bolded bottom lines of tables 1 and 2. Across 26 Atari games with 3 random seeds each, tensor factorization itself incurs little performance loss in the low-data regime up to 5 times compression, but this can be cured with second-order optimization all the way to 10 times compression, as per table 2. \n2. Standard deviations for all these experimental results in Tables 1 and 2 have been available in the appendix since v1. \n3. If the reviewer refers to figure 3 as 'inconclusive': it is meant to provide a proof of concept, even on a simple, noisy algorithm (hard-max, non-distributional DQN), and it does show that opposite-direction effects occur between extra exploration and drawdowns. 
This in turn motivates the experiments of section 4.2, where the net impact of the results is averaged across many Atari environments, leaving no ambiguity in the conclusion.\n\nThe novelty of the paper resides in its domain of application. While we would welcome prior art references, to our knowledge, applying online tensor factorization to deep RL algorithms is effectively a virgin field, all the more so when using data-efficient algorithms on a complex RL domain like Atari.\n\nConsequently, we would be extremely grateful for additional constructive criticism and directions as to how to improve future iterations of the paper."}], "comment_replyto": ["HyeUam4aKB", "ryxByGUaKS", "rJgi_7-sjS", "BkxhBqBBFB"], "comment_url": ["https://openreview.net/forum?id=B1l3M64KwB&noteId=B1gxGB4oiS", "https://openreview.net/forum?id=B1l3M64KwB&noteId=rkx8g5goiS", "https://openreview.net/forum?id=B1l3M64KwB&noteId=BJx_XZmosS", "https://openreview.net/forum?id=B1l3M64KwB&noteId=rJgi_7-sjS"], "meta_review_cdate": 1576798696086, "meta_review_tcdate": 1576798696086, "meta_review_tmdate": 1576800939552, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "In this paper, dense layers in deep neural networks representing policies are replaced by tensor regression layers, a scattering layer is also employed, and second-order optimization is considered. The paper does not have a single consistent message, and combines different techniques for unclear reasons. Important related work is not cited. The presentation was found unclear by the reviewers.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1l3M64KwB&noteId=22i8hPtNqL"], "decision": "Reject"}