AMSR / conferences_raw / iclr19 / ICLR.cc_2019_Conference_B1MUroRct7.json
{"forum": "B1MUroRct7", "submission_url": "https://openreview.net/forum?id=B1MUroRct7", "submission_content": {"title": "Online Learning for Supervised Dimension Reduction", "abstract": " Online learning has attracted great attention due to the increasing demand for systems that have the ability of learning and evolving. When the data to be processed is also high dimensional and dimension reduction is necessary for visualization or prediction enhancement, online dimension reduction will play an essential role. The purpose of this paper is to propose new online learning approaches for supervised dimension reduction. Our first algorithm is motivated by adapting the sliced inverse regression (SIR), a pioneer and effective algorithm for supervised dimension reduction, and making it implementable in an incremental manner. The new algorithm, called incremental sliced inverse regression (ISIR), is able to update the subspace of significant factors with intrinsic lower dimensionality fast and efficiently when new observations come in. We also refine the algorithm by using an overlapping technique and develop an incremental overlapping sliced inverse regression (IOSIR) algorithm. We verify the effectiveness and efficiency of both algorithms by simulations and real data applications.", "keywords": ["Online Learning", "Supervised Dimension Reduction", "Incremental Sliced Inverse Regression", "Effective Dimension Reduction Space"], "authorids": ["ningzhang0123@gmail.com", "qwu@mtsu.edu"], "authors": ["Ning Zhang", "Qiang Wu"], "TL;DR": "We proposed two new approaches, the incremental sliced inverse regression and incremental overlapping sliced inverse regression, to implement supervised dimension reduction in an online learning manner.", "pdf": "/pdf/d13a831ec42dbd62b5cf904145014c0bfc59695a.pdf", "paperhash": "zhang|online_learning_for_supervised_dimension_reduction", "_bibtex": "@misc{\nzhang2019online,\ntitle={Online Learning for Supervised Dimension Reduction},\nauthor={Ning Zhang and Qiang Wu},\nyear={2019},\nurl={https://openreview.net/forum?id=B1MUroRct7},\n}"}, "submission_cdate": 1538087742271, "submission_tcdate": 1538087742271, "submission_tmdate": 1545355431734, "submission_ddate": null, "review_id": ["SJl-FoFfTQ", "Bkxt5nOY2X", "SygE0dpS3Q"], "review_url": ["https://openreview.net/forum?id=B1MUroRct7&noteId=SJl-FoFfTQ", "https://openreview.net/forum?id=B1MUroRct7&noteId=Bkxt5nOY2X", "https://openreview.net/forum?id=B1MUroRct7&noteId=SygE0dpS3Q"], "review_cdate": [1541737336553, 1541143697129, 1540901067814], "review_tcdate": [1541737336553, 1541143697129, 1540901067814], "review_tmdate": [1541737336553, 1541534291505, 1541534291300], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1MUroRct7", "B1MUroRct7", "B1MUroRct7"], "review_content": [{"title": "Rebadging of Incremental Generalized Eigenvalue Decomposition.", "review": "This paper studies sufficient dimension reduction problem, and proposes an incremental sliced inverse regression algorithm. 
Numerical experiments are provided to demonstrate the effectiveness of the proposed algorithms.\n\nThe sliced inverse regression here is nothing but a generalized eigenvalue decomposition:\n\nAx = lambda Bx.\n\nNote that Multiclass Fisher Linear Discriminant Analysis, Canonical Correlation Analysis, Nonlinear Manifold Embedding and many other subspace learning methods can also be formulated as a generalized eigenvalue decomposition. All these methods need to compute covariance-like matrices in additive form, which makes incremental updates very convenient.\n\nIncremental generalized eigenvalue decomposition has been studied extensively for decades, especially between 1995 and 2005 in the face recognition community. I am just listing a few references here:\n\nYe et al., IDR/QR: An Incremental Dimension Reduction Algorithm via QR Decomposition, 2005\n\nLaw and Jain, Incremental Nonlinear Dimensionality Reduction by Manifold Learning, 2006\n\nYan et al., Towards incremental and large scale face recognition, 2011\n\nGhassabeh et al., A New Incremental Face Recognition System, 2007\n\nSong et al., A Novel Supervised Dimensionality Reduction Algorithm for Online Image Recognition, 2006\n\nWang et al., Incremental two-dimensional linear discriminant analysis with applications to face recognition, 2010\n\nSalman et al., Efficient update of the covariance matrix inverse in iterated linear discriminant analysis, 2010\n\nPark and Park, A comparison of generalized linear discriminant analysis algorithms, 2008\n\nWang, Incremental and Regularized Linear Discriminant Analysis, 2012\n\nThese algorithms have become less popular and less well known now, because (1) they are not scalable and efficient for large p, and (2) these classical dimensionality reduction methods perform poorly in many tasks compared with state-of-the-art results.\n\nThis paper only cites a few papers on incremental LDA, but does not even mention that both LDA and SIR essentially solve similar optimization problems. Moreover, it does not compare the results with any of the above references, either.\n\nThis paper even claims the application of the Sherman\u2013Morrison formula as a contribution. However, such an update has been used in Salman et al. 2010, Park and Park 2008, and Wang 2012.\n\nIn summary, this paper is far below the bar of ICLR.\n\nMinor: There are numerous typos in this paper. The authors even misspell \"Morrison\" in the Sherman\u2013Morrison formula as \"Morison\".", "rating": "2: Strong rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "the algorithm is not efficient for large p", "review": "This paper proposes an online learning algorithm for supervised dimension reduction, called incremental sliced inverse regression (ISIR). The key idea is to convert the SIR problem into a PCA problem by using the inverse of the covariance matrix. After the transformation, we can use incremental PCA to compute the top eigenvector and obtain the approximate solution of SIR in a streaming way. The authors also extend ISIR to the overlapping case.\n\nThe motivation of this paper is reasonable, but I have some concerns as follows.\n\n1. The computation of ISIR depends on maintaining the matrix \hat \Sigma\u2019 (or its inverse), which requires O(p^2) time and space. In my opinion, this complexity is too expensive for high-dimensional datasets, which makes the main result of this paper not strong. 
Maybe we can use low-rank approximations and their variants to improve the efficiency.\n\n2. For large datasets, the covariance matrix may become ill-conditioned as more and more data arrive, even if we use a warm-start strategy at first. It is more reasonable to introduce a ridge term to make the algorithm more stable.\n\n3. The experiments are only conducted on some small datasets, which is not enough to show the advantage of the proposed algorithms. There are also many other strategies that could be applied to this problem, such as random sampling, random projections, and frequent directions, but this paper does not provide sufficient discussion.\n", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Algorithm directly motivated by online PCA", "review": "Sliced Inverse Regression is a well-known technique for finding the EDR space in supervised dimension reduction problems, under the condition that the input X is normally distributed. When the number of dimensions is large and access to observations is online, finding eigenvalues of covariance matrices (online) can become computationally costly. So, the paper focuses on the problem of updating a few principal components of covariance matrices as new examples arrive.\n\nThe key idea is that in classical SIR two problems are solved sequentially (estimation of Sigma = cov(X) and finding the PCA of the vectors Sigma^{-0.5}X_s, where X_s is the average of X over the s-th slice). The second part can be reduced to an online version of PCA (Peter M Hall et al.).\n\nThe paper seems consistent, and the experiments are convincing that the proposed method works. The lack of theoretical analysis is a disadvantage.\n\nAlso, some typos:\n1. \u0393b = \u03bb\u03a3b\u03b2 (p.4 par.3) --- beta omitted\n2. We can regard z0 = \u03a3^{-1/2}m'_k (p.4) --- bar over m omitted\n3. Formula 10 looks weird (Sigma instead of Sigma prime)\n", "rating": "6: Marginally above acceptance threshold", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1545025088268, "meta_review_tcdate": 1545025088268, "meta_review_tmdate": 1545354484607, "meta_review_ddate ": null, "meta_review_title": "Contribution unclear with respect to substantial relevant literature", "meta_review_metareview": "The paper investigates an incremental form of Sliced Inverse Regression (SIR) for supervised dimensionality reduction. Unfortunately, the experimental evaluation is too limited to serve as a serious evaluation of the proposed techniques. More importantly, the paper does not appear to contribute a significant advance over the extensive literature on fast generalized eigenvalue decompositions in machine learning. No responses were offered to counter this opinion.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper91/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1MUroRct7&noteId=HJe_HIhEg4"], "decision": "Reject"}
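The first review's core claim is that SIR reduces to a generalized eigenvalue problem Gamma b = lambda Sigma b, the same family as LDA and CCA. For reference, here is a minimal batch sketch of SIR in that form, assuming NumPy/SciPy; the function name `sir_directions`, the slice count, and all other details are illustrative assumptions and are not taken from the submission.

```python
import numpy as np
from scipy.linalg import eigh

def sir_directions(X, y, n_slices=10, n_components=2):
    """Batch SIR sketch: slice the response y, form the between-slice
    covariance Gamma, and solve the generalized eigenproblem
    Gamma b = lambda * Sigma b."""
    n, p = X.shape
    Sigma = np.cov(X, rowvar=False)          # p x p covariance of the inputs
    x_bar = X.mean(axis=0)

    # Partition the observations into slices by sorting the response.
    order = np.argsort(y)
    Gamma = np.zeros((p, p))
    for idx in np.array_split(order, n_slices):
        m = X[idx].mean(axis=0) - x_bar      # centered slice mean of X
        Gamma += (len(idx) / n) * np.outer(m, m)

    # Generalized symmetric eigenproblem; eigh returns eigenvalues in
    # ascending order, so the leading EDR directions are the last columns.
    _, eigvecs = eigh(Gamma, Sigma)
    return eigvecs[:, -n_components:][:, ::-1]
```

Written this way, the reviewer's point is visible directly: swapping Gamma for a between-class scatter matrix gives LDA, which is why the incremental-LDA literature cited in the first review is relevant.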
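The incremental ingredient that the first and second reviews both discuss is a rank-one Sherman-Morrison update of the inverse covariance, which avoids re-inverting a p x p matrix at every step but, as the second reviewer stresses, still costs O(p^2) time and memory per observation. The sketch below shows only that standard update, not the authors' implementation; it assumes the divide-by-n covariance convention, a warm start in which Sigma_inv already inverts the covariance of the first n observations, and hypothetical variable names.

```python
import numpy as np

def sherman_morrison_step(Sigma_inv, mean, n, x):
    """Fold one new observation x into the running mean and the inverse of
    the (divide-by-n) sample covariance via the Sherman-Morrison formula.
    Assumes a warm start: n is large enough that the current covariance is
    invertible. Returns (Sigma_inv_new, mean_new, n + 1)."""
    delta = x - mean
    mean_new = mean + delta / (n + 1)

    # Sigma_{n+1} = c * Sigma_n + u u^T, with c = n/(n+1), u = sqrt(n)/(n+1) * delta
    c = n / (n + 1)
    u = (np.sqrt(n) / (n + 1)) * delta

    A_inv = Sigma_inv / c                    # inverse of c * Sigma_n
    Au = A_inv @ u
    Sigma_inv_new = A_inv - np.outer(Au, Au) / (1.0 + u @ Au)
    return Sigma_inv_new, mean_new, n + 1
```

In an incremental SIR setting one would maintain Sigma_inv this way while updating the slice means in parallel, then refresh the leading eigenvectors with an online PCA step (Hall et al.) rather than a full eigendecomposition; the second reviewer's ridge suggestion would correspond to working with Sigma + eps * I instead of Sigma to keep the recursion well conditioned.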