{"forum": "B1e9csRcFm", "submission_url": "https://openreview.net/forum?id=B1e9csRcFm", "submission_content": {"title": "The Importance of Norm Regularization in Linear Graph Embedding: Theoretical Analysis and Empirical Demonstration", "abstract": "Learning distributed representations for nodes in graphs is a crucial primitive in network analysis with a wide spectrum of applications. Linear graph embedding methods learn such representations by optimizing the likelihood of both positive and negative edges while constraining the dimension of the embedding vectors. We argue that the generalization performance of these methods is not due to the dimensionality constraint as commonly believed, but rather the small norm of embedding vectors. Both theoretical and empirical evidence are provided to support this argument: (a) we prove that the generalization error of these methods can be bounded by limiting the norm of vectors, regardless of the embedding dimension; (b) we show that the generalization performance of linear graph embedding methods is correlated with the norm of embedding vectors, which is small due to the early stopping of SGD and the vanishing gradients. We performed extensive experiments to validate our analysis and showcased the importance of proper norm regularization in practice.", "keywords": ["Graph Embedding", "Generalization Analysis", "Matrix Factorization"], "authorids": ["gaoyihan@gmail.com", "czhang82@illinois.edu", "jianpeng@illinois.edu", "adityagp@illinois.edu"], "authors": ["Yihan Gao", "Chao Zhang", "Jian Peng", "Aditya Parameswaran"], "TL;DR": "We argue that the generalization of linear graph embedding is not due to the dimensionality constraint but rather the small norm of embedding vectors.", "pdf": "/pdf/2af1d89a384f3f95739d6fbe99411dd6150abfb1.pdf", "paperhash": "gao|the_importance_of_norm_regularization_in_linear_graph_embedding_theoretical_analysis_and_empirical_demonstration", "_bibtex": "@misc{\ngao2019the,\ntitle={The Importance of Norm Regularization in Linear Graph Embedding: Theoretical Analysis and Empirical Demonstration},\nauthor={Yihan Gao and Chao Zhang and Jian Peng and Aditya Parameswaran},\nyear={2019},\nurl={https://openreview.net/forum?id=B1e9csRcFm},\n}"}, "submission_cdate": 1538087826383, "submission_tcdate": 1538087826383, "submission_tmdate": 1545355433169, "submission_ddate": null, "review_id": ["SJxaEJz1pQ", "ryelbu0U37", "BygctM2I3Q"], "review_url": ["https://openreview.net/forum?id=B1e9csRcFm&noteId=SJxaEJz1pQ", "https://openreview.net/forum?id=B1e9csRcFm&noteId=ryelbu0U37", "https://openreview.net/forum?id=B1e9csRcFm&noteId=BygctM2I3Q"], "review_cdate": [1541508916733, 1540970487774, 1540960898255], "review_tcdate": [1541508916733, 1540970487774, 1540960898255], "review_tmdate": [1541533889826, 1541533889619, 1541533889418], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1e9csRcFm", "B1e9csRcFm", "B1e9csRcFm"], "review_content": [{"title": "The Importance of Norm Regularization in Linear Graph Embedding: Theoretical Analysis and Empirical Demonstration", "review": "In this paper, the authors proved that the generalization error of linear graph embedding methods is bounded by the norm of embedding vectors, rather than the dimensionality constraints. 
Interestingly, along with the analysis of Levy & Goldberg (2014), they found that linear graph embedding methods are probably computing a low-norm factorization of the PMI matrix. Correspondingly, experimental results are provided to support their analysis.\nOverall, this work is theoretically complete and experimentally sufficient.\n\n1. It is unclear whether the embedding dimensions of all cases (with varying values of \\lambda_r) are fixed as a constant in Fig. 1 - Fig. 3.\n\n2. Figure 4 shows the impact of embedding dimension on the generalization performance. Are these results obtained after 50 SGD epochs? Comparing Fig. 4(a) with Fig. 3(a), we may infer that the results in Fig. 3(a) when \\lambda_r = 0 are obtained by setting the embedding dimension to about 10^2. How about the generalization performance during SGD for \\lambda_r = 0 if the embedding dimension is set to be smaller than 10?\n\n3. In Claim 1, the degree d and the dimension D are mixed. ", "rating": "7: Good paper, accept", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "The importance of norm term in generalization bound over-emphasized", "review": "The manuscript proposes a theoretical bound on the generalization performance of learning graph embeddings. The authors find that the term in the generalization bound that represents the function complexity involves the norm of the learnt coordinates, based on which they argue that it is the norm of the coordinates that determines the success of the learnt representation.\n\nI am not very familiar with the literature on graph embeddings; however, to the extent of my understanding of the paper, I have a number of concerns:\n\n- In a generalization bound like the one in Theorem 1, it is very typical for the generalization error to include a term that represents the complexity of the hypothesis function class. In the presented result, this would be the second term on the RHS, which involves the spectral norm of the adjacency matrix and the bounds on the norm of the learnt coordinates. This term captures the Rademacher complexity of the hypothesis function class. In my understanding, there is nothing really surprising about this: most results in learning theory would include a term directly or indirectly related to some norm on the hypothesis function class. However, it would then require a lot of further justification to conclude from this that the key factor determining the performance is the norm of the learnt representation.\n\n- I am not sure that the results in Figure 1 provide a really meaningful justification of the importance of the norm. It is observed that the norm increases during the epochs; however, shouldn\u2019t we also be checking the evolution of the error at the same time to draw a meaningful conclusion? In particular, the norm seems to increase rather monotonically throughout the epochs, whereas we expect the error to decrease first, reach an optimum, and then start increasing due to overfitting. So can we really say that the error is proportional to the norm?\n\n- Similarly, in the results in Figure 5, we can observe that the regularization coefficient has an optimal value that maximizes the precision. On the other hand, the norm of the learnt coordinates is expected to decrease monotonically with increasing lambda_r. Again, it seems difficult to conclude that the norm is the key factor determining the error.\n\n- Minor comments: \n1. On page 2, in the expression of L, the node u should be in set V, I guess.\n2. 
Please define the function sigma used in the objective functions. \n3. Typo right under the Section 3 title: \u201cgrpah\u201d\n4. The definition of matrix A_sigma is not clear to me. What does the \u201cthere exists y\u201d expression mean in the first line?\n\n- To sum up, my feeling is that the presence of the term involving the norm in Theorem 1 is rather classical in learning theory, and its importance seems to be over-emphasized in this study. Moreover, I am not fully convinced by the experimental evidence. Therefore, I cannot recommend accepting this paper. ", "rating": "4: Ok but not good enough - rejection", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "The paper needs more clarification on the implication from the theoretical results as well as empirical results", "review": "The main contribution of this paper is that it shows both theoretically and empirically that in linear graph embeddings, the generalization error is bounded by the norms of the embedding vectors rather than the dimension of the embedding vectors. \n\nMy list of concerns about this paper is as follows:\n\n- For the main theorem, i.e., Theorem 1, \na.) Why is it intuitive that the size of the training dataset required for learning a norm-constrained graph embedding is O(C|A|_2)? This is not that intuitive to me. Later, the authors argue that graphs are usually sparse and the average node degree is usually smaller than the embedding size, so it easily overfits the training data. However, I would say that, in practice, the positive training pairs are not restricted to 1-hop neighbors, but could also be 2 or more hops away; in that case, it won't easily overfit. \nb.) The main result from the theorem is that the error gap of norm-constrained embeddings scales as O(d^{-0.5}(ln n)^{0.5}), but I did not see how this is related to the norms of the embedding vectors, or how this is evidenced in the empirical studies. It might be better to show a plot of error gap vs. d and/or n. \nc.) How is this analysis related to the later claim that \\lambda_r controls the model capacity of linear graph embedding?\n\n- The linear graph embedding framework considered in this paper assumes that each node only has one set of embeddings, but in practice, one node usually has two sets of embeddings, as a context node and as a center node. How would this affect the whole analysis and claims?\n\n- How would the claims or analysis in this paper generalize to non-linear graph embedding frameworks?\n\n- For the experiments, \na.) In Figure 3, the y-axis label of (b) is missing, and the average L2 norm in (c) cannot reflect the generalization performance. \nb.) In Figure 4(a), why can we still observe the test accuracy increasing after overfitting? \nc.) 
In Figure 5, why does the test precision first increase and then decrease with more regularization?", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["HkxfBSK3A7", "HklZ5lNKAX", "Hyg2NA6DCQ", "r1g1RTpvAQ", "rJg7ST6PAQ"], "comment_cdate": [1543439674149, 1543221384545, 1543130675960, 1543130567217, 1543130427236], "comment_tcdate": [1543439674149, 1543221384545, 1543130675960, 1543130567217, 1543130427236], "comment_tmdate": [1543441083580, 1543221384545, 1543130675960, 1543130567217, 1543130427236], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper560/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper560/AnonReviewer2", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper560/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper560/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper560/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Regarding the theoretical and empirical results", "comment": "Thanks for the comment. We agree that from a learning theory point of view, the techniques and results in Theorem 1 are pretty standard and do not reveal anything deep. However, the point here is that its implications contradict the mainstream opinion that dimensionality determines everything, which warrants attention from graph embedding researchers. In fact, we never argued from Theorem 1 alone that the norm determines the performance of graph embedding; that claim is based more on the empirical results. \n\nRegarding the empirical results, we have explained that the bell curve (the best performance is obtained at an intermediate lambda_r value) is exactly what we should expect from Theorem 1. To cite my earlier comment (from point 1):\n\n\"Therefore, the implication of Theorem 1 is that there should be an optimal value for the regularization coefficient lambda_r: lambda_r being too small could lead to overfitting, while being too large would make it difficult for the embedding vectors to fit the training data. As a result, the generalization error should exhibit a bell curve when we vary the lambda_r value, which is exactly what we observed in Figure 5.\"\n\nThus, I'm confused about the last comment \"however, the best performance is obtained at an intermediate lambda_r value, which seems to contradict the claim of the authors.\" Can you explain a little more about it?"}, {"title": "The norm term is classical in learning theory, it's not specific to this problem", "comment": "Having read the response from the authors, I still do not agree with their interpretation of their results.\n\n- In Theorem 1, imagine that for a given embedding, the embedding coordinates x's are all multiplied by a factor, say alpha, without changing anything else. In this case, everything will be trivially scaled by alpha^2: the expected loss (LHS of the inequality) will be multiplied by alpha^2, as well as the empirical loss (first term on the RHS). Then of course the deviation between the two will also be scaled by alpha^2, which is what the second term on the RHS tells us. I am not convinced that the result of the authors really tells us something deeper than this trivial observation as far as the embedding norm is concerned. 
In particular, the Rademacher complexity of any function family scales by alpha if the learnt functions are allowed to scale by alpha. This holds for any learning problem and is not specific to the graph embedding problem the authors have studied. So I am not comfortable with the interpretation of the authors, saying that it is the \"norm\" that determines the performance of graph embedding. \n\nHaving said this, my point is not to underestimate the impact of Theorem 1; I am not familiar enough with the literature on graph embeddings to judge this. It's just the \"norm interpretation\" that does not make sense to me. If this study is the first one to provide a dimensionality-independent bound, maybe this should have been the main focus of the paper instead. \n\nAbout the response of the authors regarding the experimental results: an experiment where the norm and the test error are studied on the same data sets and shown to have a nontrivially similar behavior, by varying e.g. lambda_r, would provide more solid evidence for the claim of the authors. In particular, comparing Figure 1(a) and 3(a), obtained on the same data set, we see that the smallest norm is obtained for the largest lambda_r; however, the best performance is obtained at an intermediate lambda_r value, which seems to contradict the claim of the authors."}, {"title": "Clarification on the implication of theoretical results and empirical results", "comment": "Thanks for the comments, and we are sorry that our paper caused some confusion regarding the implications of the theoretical analysis and empirical results. We will clarify these issues in the following:\n\n1. The main result of Theorem 1 bounds the generalization error of linear graph embedding in the form of the training error plus a gap term. Therefore, to keep the generalization error small, we need to ensure that both the training error term and the gap term are small. Note that the gap term is C|A|_2/(m+m\u2019), and for this term to be reasonably small (< some epsilon), (m + m\u2019) needs to be greater than C|A|_2/epsilon. Therefore, Theorem 1 suggests that the required sample size for learning a norm-constrained linear graph embedding is at least O(C|A|_2); otherwise the gap term would be too large and we could potentially experience overfitting. However, this estimate is not the most important implication of Theorem 1; see the comment below for details.\n\n2. The most important implication of Theorem 1 is that it outlines the importance of proper regularization of the embedding norm: note that C is the sum of the squared norms of the embedding vectors. Therefore, if C is too small (that is, the norm regularization is too strong), then it will be impossible for the embedding vectors to fit the training data well enough for the training error to be small; on the other hand, if C is too large (the norm regularization is too weak), then the gap term will be large as well, and we will likely see the embedding vectors overfit the training data.\n\n3. In general, if we vary the regularization coefficient lambda_r, the generalization performance should exhibit a bell curve, and the optimal performance is obtained by choosing the value of lambda_r that best balances the two terms above. This theoretical analysis is supported by our experimental results: in particular, in Figure 5 we see exactly the behavior predicted above.\n\n4. 
The implication of Claim 1 is that if we do not regularize the norm at all, then in theory the embedding vectors could arbitrarily overfit the training data. However, this is not what people have empirically observed in the past: even in LINE (Tang et al., 2015), where only immediate neighbors are used as positive pairs, the embedding vectors still exhibit good generalization performance. This contradictory behavior led us to suspect that the SGD optimization procedure naturally bounds the norm of the embedding vectors. In Section 3.2, we verified this theory through experiments. \n\n5. Our analysis in Theorem 1 works for the cases where each node has two sets of embeddings: as we explained in the footnote on page 2, the directed case can be handled by associating each node with two embedding vectors, which is equivalent to learning embeddings on an undirected bipartite graph. The case where we use different embeddings for the context node and the center node can be handled similarly, and thus our analysis still holds.\n\n6. Our analysis does not generalize to non-linear graph embedding frameworks: most non-linear graph embedding frameworks involve multi-layer neural networks (see the related work discussion in the appendix for details), which are fundamentally difficult to analyze. It is possible that norm regularization also plays an important role in those frameworks, but confirming that requires additional investigation, which is out of the scope of this paper.\n\nWe have rewritten the paragraph after Theorem 1 to clarify the implications of this theorem, which hopefully makes things clearer. We have also corrected all the typos that have been pointed out. All the changes made in the paper are marked in red. Thanks again for the comments."}, {"title": "Clarification on the implication of the theoretical results and the corresponding changes to the paper", "comment": "Thanks for carefully reading through our paper and providing a lot of feedback. However, there seem to be a few misunderstandings regarding the implications of our theoretical analysis, which might have affected your judgement of our paper. We will try to clarify these points in the following:\n\n1. Theorem 1 states that the gap term (2nd term on the RHS of Eqn (2)) is determined by the embedding norm (through C_U and C_V). However, the generalization error (LHS) is not only affected by the gap term, but by the training error term (1st term on the RHS) as well. Note that the norm of the embedding vectors affects both the gap term and the training error term, as allowing the embedding vectors to have a larger norm could potentially make them fit the training data better (and thus decrease the training error). Therefore, the implication of Theorem 1 is that there should be an optimal value for the regularization coefficient lambda_r: lambda_r being too small could lead to overfitting, while being too large would make it difficult for the embedding vectors to fit the training data. As a result, the generalization error should exhibit a bell curve when we vary the lambda_r value, which is exactly what we observed in Figure 5.\n\n2. Figures 1-3 show various statistics (vector norm, norm of gradient, and generalization AP) throughout the course of SGD optimization, and should be looked at together (the x-axes are the number of epochs and have the same range). In particular, Figures 1 and 3 collectively suggest that the generalization error is determined by the vector norm. 
Note that Figure 1 by itself only leads to the conclusion that SGD results in a small vector norm; our final conclusion that the generalization error is determined by the vector norm is only reached after Figure 3 (on page 6). \n\n3. The results in Theorem 1 are actually very surprising in the context of graph embedding: as we explained in Section 2.3, people have historically believed that linear graph embedding methods compute a low-rank factorization of the PMI matrix, and thus that the embedding dimension is the key factor for generalization, which has caused them to neglect norm regularization in many cases. However, our generalization bound does not involve the embedding dimension term at all, which opens up the possibility that the embedding norm could be the key factor instead, while the embedding dimension does not matter at all. \n\nThanks again for the helpful review, which made us realize that we had failed to make the implications of Theorem 1 clear. We have rewritten the paragraph after Theorem 1 in Section 3.1 to address this issue. We also addressed all the minor issues pointed out in the comments. All the changes made in the paper are marked in red."}, {"title": "Changes made to the paper to address the minor issues", "comment": "Thanks for the comments. We have addressed all the minor issues that have been pointed out, and all the changes in the paper are marked in red. As for the generalization AP performance during SGD with lambda_r=0 and a small embedding dimension D, we didn\u2019t include that result due to the page limit. If I recall correctly, the trend is similar to the other experiments: the training AP keeps improving during the whole procedure, while the testing AP peaks after a certain number of epochs and slightly drops afterwards."}], "comment_replyto": ["HklZ5lNKAX", "r1g1RTpvAQ", "BygctM2I3Q", "ryelbu0U37", "SJxaEJz1pQ"], "comment_url": ["https://openreview.net/forum?id=B1e9csRcFm&noteId=HkxfBSK3A7", "https://openreview.net/forum?id=B1e9csRcFm&noteId=HklZ5lNKAX", "https://openreview.net/forum?id=B1e9csRcFm&noteId=Hyg2NA6DCQ", "https://openreview.net/forum?id=B1e9csRcFm&noteId=r1g1RTpvAQ", "https://openreview.net/forum?id=B1e9csRcFm&noteId=rJg7ST6PAQ"], "meta_review_cdate": 1545056062708, "meta_review_tcdate": 1545056062708, "meta_review_tmdate": 1545354483360, "meta_review_ddate ": null, "meta_review_title": "Technically correct but lacking in sufficiently interesting insights", "meta_review_metareview": "This paper provides a generalization analysis for graph embedding methods, concluding with the observation that the norm of the embedding vectors provides an effective regularization, more so than dimensionality alone. The main theoretical result is backed up by several experiments. While the result appears to be correct, norm control, dimensionality reduction, and early stopping during optimization are all very well studied in machine learning as effective regularizers, either operating alone or in conjunction. The regularization parameters, iteration count, and embedding dimensionality are typically tuned for an application. The AC agrees with Reviewer 2 that the paper does not provide sufficiently interesting insights beyond this observation and is unlikely to influence practical applications of these methods. 
Both reviewers 2 and 3 have also raised points about the need for stronger empirical analysis.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper560/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1e9csRcFm&noteId=BkxvHyVBgN"], "decision": "Reject"}