AMSR / conferences_raw /iclr19 /ICLR.cc_2019_Conference_B1GSBsRcFX.json
mfromm's picture
Upload 3539 files
fad35ef
raw
history blame contribute delete
No virus
10.5 kB
{"forum": "B1GSBsRcFX", "submission_url": "https://openreview.net/forum?id=B1GSBsRcFX", "submission_content": {"title": "Stop memorizing: A data-dependent regularization framework for intrinsic pattern learning", "abstract": "Deep neural networks (DNNs) typically have enough capacity to fit random data by brute force even when conventional data-dependent regularizations focusing on the geometry of the features are imposed. We find out that the reason for this is the inconsistency between the enforced geometry and the standard softmax cross entropy loss. To resolve this, we propose a new framework for data-dependent DNN regularization, the Geometrically-Regularized-Self-Validating neural Networks (GRSVNet). During training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss consistent with the geometry. We study a particular case of GRSVNet, the Orthogonal-Low-rank Embedding (OLE)-GRSVNet, which is capable of producing highly discriminative features residing in orthogonal low-rank subspaces. Numerical experiments show that OLE-GRSVNet outperforms DNNs with conventional regularization when trained on real data. More importantly, unlike conventional DNNs, OLE-GRSVNet refuses to memorize random data or random labels, suggesting it only learns intrinsic patterns by reducing the memorizing capacity of the baseline DNN.", "paperhash": "zhu|stop_memorizing_a_datadependent_regularization_framework_for_intrinsic_pattern_learning", "TL;DR": "we propose a new framework for data-dependent DNN regularization that can prevent DNNs from overfitting random data or random labels.", "authorids": ["zhu@math.duke.edu", "qiang.qiu@duke.edu", "wangbao@math.ucla.edu", "jianfeng@math.duke.edu", "guillermo.sapiro@duke.edu", "ingrid@math.duke.edu"], "authors": ["Wei Zhu", "Qiang Qiu", "Bao Wang", "Jianfeng Lu", "Guillermo Sapiro", "Ingrid Daubechies"], "keywords": ["deep neural networks", "memorizing", "data-dependent regularization"], "pdf": "/pdf/fbf3bad55a29c48cd74c3d470ebe5ccaeb147e61.pdf", "_bibtex": "@misc{\nzhu2019stop,\ntitle={Stop memorizing: A data-dependent regularization framework for intrinsic pattern learning},\nauthor={Wei Zhu and Qiang Qiu and Bao Wang and Jianfeng Lu and Guillermo Sapiro and Ingrid Daubechies},\nyear={2019},\nurl={https://openreview.net/forum?id=B1GSBsRcFX},\n}"}, "submission_cdate": 1538087740954, "submission_tcdate": 1538087740954, "submission_tmdate": 1545355418621, "submission_ddate": null, "review_id": ["rygOfXmTh7", "SylrVrMq2X", "H1eYROfJnX"], "review_url": ["https://openreview.net/forum?id=B1GSBsRcFX&noteId=rygOfXmTh7", "https://openreview.net/forum?id=B1GSBsRcFX&noteId=SylrVrMq2X", "https://openreview.net/forum?id=B1GSBsRcFX&noteId=H1eYROfJnX"], "review_cdate": [1541382927934, 1541182765265, 1540462801258], "review_tcdate": [1541382927934, 1541182765265, 1540462801258], "review_tmdate": [1541534298037, 1541534297837, 1541534297635], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1GSBsRcFX", "B1GSBsRcFX", "B1GSBsRcFX"], "review_content": [{"title": "Stop memorizing: A data-dependent regularization framework for intrinsic pattern learning", "review": "Previous works have shown that DNNs are able to memorize random training data, even ignoring the enforcing of data-dependent geometric regularization constraints. In this work, authors show convincing results indicating that this is due to a lack of consistency between the main classification loss (typically soft-max cross entropy) and the selected geometric constraint. Consequently, they propose a simple approach where the softmax loss is replaced by a validation loss that is consistent with the enforced geometry. Specifically, for each training batch, instead of considering a join loss (soft-max cross entropy + geometric constraint), they apply a sequential process, where each training batch is split into two sub-batches: a first batch used to apply the geometric constraint, and a second batch (based on the proposed feature geometry) where a validation loss is used to generate a predicted label distribution. Authors test the proposed idea using an implementation that enforces that samples from each class belong to an independent low-rank sub-space (enforced geometric constraint). Results verify the main hypothesis. Specifically, the resulting model is able to fit real data but not data with random labels. The strength of this evaluation is enhanced including results from relevant baselines. In terms of generalization of real data, the proposed approach offers a small increase in accuracy.\n\nPaper is well written, main hypothesis is relevant, and results are convincing. While not a complete answer to the main questions related to the abilities of DNNs to fit and generalize on real data, this paper offers relevant insights related to the role of finding/using a suitable loss function to train DNNs. These results are relevant to the community and they can illuminate future work, so this reviewer recommend to accept this paper.", "rating": "7: Good paper, accept", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Model has high bias and low variance ", "review": "The paper proposes a framework for data-dependent DNN regularization which claimed to be capable of producing highly discriminative features residing in orthogonal-low-rank subspaces. The main claim is that the proposed regularization makes the neural network not memorizing from the training data and motivate learning the intrinsic patterns. The experiments were done with three image dataset. \n\nThe main problem with this paper is the low training accuracy, but the high testing accuracy (Table 1). This implies that the model has a high bias and low variance. Intuitively the model is consistently predicting a wrong target function (probably from the self-validation).\n\n", "rating": "4: Ok but not good enough - rejection", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "paper well written, but limited in novelty and significance", "review": "The paper proposes a data-dependent regularization method which is coupled with softmax loss to train deep neural networks for classification. The paper turns to Orthogonal Low-rank Embedding (OLE) loss for the geometric constraint that one class of data/feature are assumed to reside on a low-rank subspace that subspaces of different classes are orthogonal ideally. The probability in the softmax is then modeled as cosine similarity between data feature and the class-specific subspaces. In this way, geometric loss and softmax loss have the common goal for optimization. Moreover, during training, the geometry enforced on one batch of features is simultaneously validated on a separate batch using a validation loss. The experiments seem to suggest such a model helps avoid overfitting/memorizing noisy training data. The paper reads well and is easy to follow.\n\nHowever, the paper is limited in technical novelty and practical significance. Here are some concerns -- \n\n1) The paper only studies one method based on OLE, though it cites the center loss [19]. How does the center loss behave in face of noisy training label? Would it also be able to refuse to fit the noisy training data?\n\n2) As each class has its own (low-rank) subspace, and the rank is reduced by imposing the nuclear norm. It seems that the proposed method is hard to extend to many classes (class number is larger than the dimension)?\n\n3) The datasets in the experiments are quite small in scale and class number. It is not persuasive unless tested on larger scale data or with large class number.\n\n4) The proposed method seems to be limited in deal with discrete labels (e.g., classification), is it easy to extend to continuous target, say regression problems like depth estimation and surface normal estimation?\n\n5) While the authors claim as a main contribution that the proposed GRSVNet is a general framework, it is hard to see how this framework can be used in other tasks other than classification.\n\n6) The experiments are less persuasive. It's better to add the error bar to see the improvement by the proposed method is not due to random initialization. Running time should also be compared, as nuclear norm seems to be time consuming.", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}], "comment_id": ["BkgVD6ZAAm", "B1lpKM6nAX"], "comment_cdate": [1543540059955, 1543455365152], "comment_tcdate": [1543540059955, 1543455365152], "comment_tmdate": [1543540059955, 1543455365152], "comment_readers": [["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper84/AnonReviewer3", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper84/Area_Chair1", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "https://openreview.net/forum?id=SJzvDjAcK7", "comment": "It works by removing the period."}, {"title": "link broken", "comment": "Thanks for the comments! The link you provided is missing. Could you give another link? Thanks!"}], "comment_replyto": ["B1lpKM6nAX", "Byeo_JvSCX"], "comment_url": ["https://openreview.net/forum?id=B1GSBsRcFX&noteId=BkgVD6ZAAm", "https://openreview.net/forum?id=B1GSBsRcFX&noteId=B1lpKM6nAX"], "meta_review_cdate": 1544251461874, "meta_review_tcdate": 1544251461874, "meta_review_tmdate": 1545354496488, "meta_review_ddate ": null, "meta_review_title": "meta-review", "meta_review_metareview": "The paper proposes an interesting data-dependent regularization method for orthogonal-low-rank embedding (OLE). Despite the novelty of the method, the reviewers and AC note that it's unclear whether the approach can extend other settings with multi-class or continuous labels or other loss functions. ", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper84/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1GSBsRcFX&noteId=SJgASuyt1V"], "decision": "Reject"}