{"forum": "B1fPYj0qt7", "submission_url": "https://openreview.net/forum?id=B1fPYj0qt7", "submission_content": {"title": "Riemannian Stochastic Gradient Descent for Tensor-Train Recurrent Neural Networks", "abstract": "The Tensor-Train factorization (TTF) is an efficient way to compress large weight matrices of fully-connected layers and recurrent layers in recurrent neural networks (RNNs). However, high Tensor-Train ranks for all the core tensors of parameters need to be element-wise fixed, which results in an unnecessary redundancy of model parameters. This work applies Riemannian stochastic gradient descent (RSGD) to train core tensors of parameters in the Riemannian Manifold before finding vectors of lower Tensor-Train ranks for parameters. The paper first presents the RSGD algorithm with a convergence analysis and then tests it on more advanced Tensor-Train RNNs such as bi-directional GRU/LSTM and Encoder-Decoder RNNs with a Tensor-Train attention model. The experiments on digit recognition and machine translation tasks suggest the effectiveness of the RSGD algorithm for Tensor-Train RNNs. ", "keywords": ["Riemannian Stochastic Gradient Descent", "Tensor-Train", "Recurrent Neural Networks"], "authorids": ["jqi41@gatech.edu", "qij13@uw.edu", "javiertejedornoguerales@gmail.com"], "authors": ["Jun Qi", "Chin-Hui Lee", "Javier Tejedor"], "TL;DR": "Applying the Riemannian SGD (RSGD) algorithm for training Tensor-Train RNNs to further reduce model parameters.", "pdf": "/pdf/adcd9143448fd902ce58ddabf1283443b08ff865.pdf", "paperhash": "qi|riemannian_stochastic_gradient_descent_for_tensortrain_recurrent_neural_networks", "_bibtex": "@misc{\nqi2019riemannian,\ntitle={Riemannian Stochastic Gradient Descent for Tensor-Train Recurrent Neural Networks},\nauthor={Jun Qi and Chin-Hui Lee and Javier Tejedor},\nyear={2019},\nurl={https://openreview.net/forum?id=B1fPYj0qt7},\n}"}, "submission_cdate": 1538087807139, "submission_tcdate": 1538087807139, "submission_tmdate": 1545355416327, "submission_ddate": null, "review_id": ["rkxTWQopnX", "S1eQrXcnn7", "ByxE00TB37"], "review_url": ["https://openreview.net/forum?id=B1fPYj0qt7¬eId=rkxTWQopnX", "https://openreview.net/forum?id=B1fPYj0qt7¬eId=S1eQrXcnn7", "https://openreview.net/forum?id=B1fPYj0qt7¬eId=ByxE00TB37"], "review_cdate": [1541415684536, 1541346106644, 1540902603818], "review_tcdate": [1541415684536, 1541346106644, 1540902603818], "review_tmdate": [1541533982310, 1541533982105, 1541533981893], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1fPYj0qt7", "B1fPYj0qt7", "B1fPYj0qt7"], "review_content": [{"title": "Novelty is limited", "review": "\nSummary: \nThe paper proposes to use Riemannian stochastic gradient algorithm for low-rank tensor train learning in deep networks. \n\nComments:\nThe paper is easy to follow. \n\nC1.\nThe novelty of the paper is rather limited, both in terms of the convergence analysis and exploiting the low-rank structure in tensor trains. It misses the important reference [1], where low-rank tensor trains have been already discussed. Section 3 is also not novel to the paper. Consequently, Sections 2 and 3 have to be toned down. \n\nSection 4 is interesting but is not properly written. There is no discussion on how the paper comes about those modifications. 
It seems that the paper blindly tries to apply the low-rank constraint to the works of Chung et al. (2014) and Luong et al. (2015). \n\n[1] https://epubs.siam.org/doi/abs/10.1137/15M1010506\nSteinlechner, Michael. \"Riemannian optimization for high-dimensional tensor completion.\" SIAM Journal on Scientific Computing 38.5 (2016): S461-S484.\n\nC2.\nThe constraint tt_rank(W) \\leq r in (5) is not a manifold. The equality is needed for the constraint to be a manifold.\n\nC3.\nUse \\langle and \\rangle for inner products. \n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "The paper presents an RSGD algorithm on TT-based RNNs, which is interesting, but the quality and significance are limited. ", "review": "In this paper, the authors propose a new method to update the weights in RNNs by SGD on a Riemannian manifold. Due to the properties of manifold learning, the updated weights in each iteration are contracted to a low-rank structure, such that the number of TT parameters can be automatically decreased during the training procedure. By using the new algorithm, the authors modified two types of sophisticated RNNs, i.e., bi-directional GRU/LSTM and Encoder-Decoder RNNs. The experimental results validate the effectiveness of the proposed method. How to determine the rank of the tensor networks in the weight compression problem is indeed an important and urgent task, but this paper does not clearly illustrate how RSGD can efficiently solve it.\n\n1. Compared to conventional SGD, not only does the convergence rate of the proposed method seem slower (as mentioned in the paper), but additional computational operations must also be performed in each iteration, such as the exponential mapping (with multiple QR and SVD decompositions). I\u2019m worried about the computational efficiency of this method, but the paper neither discusses the computational complexity nor illustrates it in the experimental section.\n\n2. In the proof of Proposition 1, I\u2019m confused about why the input tensor X should belong to M and why eq. (8) holds.\n\n3. In the convergence analysis, I don\u2019t know why the equation $Exp^{-1}(y)=-\\eta\u2026$ holds, even though the authors claim it is not hard to find. As a result, I cannot see the relationship between Theorem 3 and the proposed method. Furthermore, can Theorem 3 be used to prove the convergence of the proposed method?\n\n4. Eq. (16) would make no sense because the denominator might be very small. \n\n5. In the experiments, please compare with other existing (tensor decomposition based) compression methods to demonstrate how the proposed method makes sense in this task.\n\nMinor:\n1. By the definition in Oseledets\u2019 paper, the tensor decomposition model used in this paper should be called TT-matrix rather than TT.\n2. 
9 -> (9) in Definition 2, and 15 -> (15) in the proof of Theorem 3.", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "A paper on Riemannian optimization, needs to fix some math and improve experiments", "review": "This paper proposes an algorithm for optimizing neural networks parametrized by the Tensor Train (TT) decomposition, based on Riemannian optimization and rank adaptation, and designs a bidirectional TT LSTM architecture.\n\nI like the topic chosen by the authors: using TT to parametrize layers of neural networks has proved to be beneficial, and it would be very nice to exploit the Riemannian manifold structure to speed up the optimization.\n\nHowever, the paper needs to be improved in several aspects before being useful to the community. In particular, I found several mathematical errors regarding basic definitions and algorithms (see the list of problems below), and I\u2019m not happy with the lack of baselines in the experimental comparison (again, see below).\n\nThe math problems:\n1) In equations (1), (2), (7), and (8) there is an error: one should sum out the rank dimensions instead of fixing them to the numbers r_i. At the moment, the left-hand side of the equations doesn\u2019t depend on r and the right-hand side does.\n2) In two places the manifold of d-dimensional low-rank tensors is called a d-dimensional manifold, which is not correct. The tensors are d-dimensional, but the dimensionality of the manifold is on the order of magnitude of the number of elements in the cores (slightly smaller actually).\n3) The set of tensors with rank less than or equal to a fixed rank (or a vector of ranks) doesn\u2019t form a Riemannian (or smooth, for that matter) manifold. The set of tensors of rank equal to a fixed rank, however, does.\n4) The function f() minimized in (5) is not defined (it should be!), but if it doesn\u2019t have any rank regularizer, then there is no reason for the solution of (5) to have rank smaller than r (and thus I don\u2019t get how the automatic rank reduction can be done).\n5) When presenting a new retraction algorithm, it would be nice to prove that it is indeed a retraction. In this case, Algorithm 2 is almost certainly not a retraction; I don\u2019t even see how it can reduce the ranks (it has step 6 that is supposed to do it, but what does it mean to reshape a tensor from one shape to a shape with fewer elements?).\n6) I don\u2019t get step 11 of Alg 1, but it seems that it also requires reshaping a tensor (core) to a shape with fewer elements.\n7) The rounding algorithm (Alg 3) is not correct; it has to include orthogonalization (see Oseledets 2011, Alg 2).\n8) Also, I don\u2019t get what r_max is in the final optimization algorithm (is it set by hand?) and how the presented rounding algorithm can reduce the rank to be lower than r_max (because if it cannot, one would get the usual behavior of setting a single value of rank_max and no rank adaptivity).\n9) Finally, I don\u2019t get Proposition 1 nor its proof: how can it be that rounding to a fixed r_max won\u2019t change the value of the objective function? What if I set r_max = 1? 
This should be explained in much greater detail.\n10) I didn\u2019t get this line: \u201cFrom the RSGD algorithm (Algorithm 1), it is not hard to find the sub-gradient g_x = \u2207f(x) and Exp^{\u22121}_x(y) = \u2212\u03b7\u2207_x f(x), and thus Theorem 3 can be derived.\u201d What do you mean by saying it is not hard to find the subgradient (and what does it equal?), and why is the inverse of the exponential map the negative gradient?\n11) In general, it would be beneficial to explain how you compute the projected gradient, especially in the advanced case. And what is the complexity of this projection?\n12) How do you combine optimizing over several TT objects (like in the advanced RNN case) and plain tensors (biases)? Do you apply Riemannian updates independently to every TT object and SGD updates to the non-TT objects? Something else?\n13) What is E in Theorem 3? Expected value w.r.t. something? Since I don\u2019t understand the statement, I was not able to check the proof.\n\nThe experimental problems:\n1) There are no baselines, only the vanilla RNN optimized with SGD and the TT RNN optimized with your method. There should be an optimization baseline, i.e., optimizing the same TT model with other techniques like Adam, and compression baselines, showing that the proposed bidirectional TT LSTM is better than some other compact architectures. Also, the non-tensor model should be optimized with something better than plain SGD (e.g. Adam).\n2) The convergence plots are shown only over iterations (not in wall-clock time), and it\u2019s not obvious how much overhead the Riemannian machinery imposes.\n3) In general, one can decompose your contributions into two things: an optimization algorithm and the bidirectional TT LSTM. The optimization algorithm in turn consists of two parts: Riemannian optimization and rank adaptation. There should be ablation studies showing how much of the benefit comes from using Riemannian optimization, and how much from using the rank adaptation after each iteration.\n\nAnd finally, some typos / minor concerns:\n1) The sentence describing the other tensor decompositions is a bit misleading; for example, CANDECOMP can also be scaled to arbitrarily high dimensions (but as a downside, it doesn\u2019t allow for Riemannian optimization and can be harder to work with numerically).\n2) It\u2019s very hard to read the Riemannian section of the paper without good knowledge of the subject; for example, the concepts of tangent space, retraction, and exponential mapping are not introduced.\n3) In Def 2, \u201cdifferent function\u201d should probably be \u201cdifferentiable function\u201d.\n4) How is W_c represented in eq (25), as TT or not? It doesn\u2019t follow the notation of the rest of the paper. How is a_t used?\n5) What is \u201cscore\u201d in eq (27)?\n6) Do you include bias parameters in the total number of parameters in the figures?\n7) The notations for tensors and matrices are confusingly similar (bold capital letters of slightly different fonts).\n8) There is no Related Work section, and it would be nice to discuss the differences between this work and some relevant ones, e.g. how is the proposed advanced TT RNN different from the TT LSTMs proposed in Yang et al. 2017 (is it only the bidirectional part that is different?), how is the Riemannian optimization part different from Novikov et al. 2017 (Exponential machines), and what are the pros and cons of your optimization method compared to the method proposed in Imaizumi et al. 
2017 (On Tensor Train Rank Minimization: Statistical Efficiency and Scalable Algorithm).\n\n\nPlease do take this as constructive criticism; I would be happy to see you resubmit the paper after fixing the raised concerns!\n", "rating": "3: Clear rejection", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1544496128814, "meta_review_tcdate": 1544496128814, "meta_review_tmdate": 1545354498528, "meta_review_ddate ": null, "meta_review_title": "ICLR 2019 decision", "meta_review_metareview": "This paper proposes using a tensor-train low-rank decomposition for compressing neural network parameters. However, the paper falls short on multiple fronts: 1) lack of comparison with existing methods; 2) no baseline experiments. Further, there are concerns about the correctness of the math in deriving the algorithms, the convergence, and the computational complexity of the proposed method. I strongly suggest taking the reviews into account before submitting the paper again. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper454/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1fPYj0qt7&noteId=Bkxt-4inkN"], "decision": "Reject"}