{"forum": "B1eCCoR5tm", "submission_url": "https://openreview.net/forum?id=B1eCCoR5tm", "submission_content": {"title": "Pseudosaccades: A simple ensemble scheme for improving classification performance of deep nets", "abstract": "We describe a simple ensemble approach that, unlike conventional ensembles,\nuses multiple random data sketches (\u2018pseudosaccades\u2019) rather than multiple classifiers\nto improve classification performance. Using this simple, but novel, approach\nwe obtain statistically significant improvements in classification performance on\nAlexNet, GoogLeNet, ResNet-50 and ResNet-152 baselines on Imagenet data \u2013\ne.g. of the order of 0.3% to 0.6% in Top-1 accuracy and similar improvements in\nTop-k accuracy \u2013 essentially nearly for free.", "keywords": ["Ensemble classification", "random subspace", "data sketching"], "authorids": ["me@nicklim.com", "bobd@waikato.ac.nz"], "authors": ["Jin Sean Lim", "Robert John Durrant"], "TL;DR": "Inspired by saccades we describe a simple, cheap, effective way to improve deep net performance on an image labelling task.", "pdf": "/pdf/a9ec0296dd47dbd5ad016d98de10f927212f77b7.pdf", "paperhash": "lim|pseudosaccades_a_simple_ensemble_scheme_for_improving_classification_performance_of_deep_nets", "_bibtex": "@misc{\nlim2019pseudosaccades,\ntitle={Pseudosaccades: A simple ensemble scheme for improving classification performance of deep nets},\nauthor={Jin Sean Lim and Robert John Durrant},\nyear={2019},\nurl={https://openreview.net/forum?id=B1eCCoR5tm},\n}"}, "submission_cdate": 1538087894218, "submission_tcdate": 1538087894218, "submission_tmdate": 1545355393829, "submission_ddate": null, "review_id": ["S1xYatNp3X", "HyexCkfjhm", "SylpauEq3Q"], "review_url": ["https://openreview.net/forum?id=B1eCCoR5tm&noteId=S1xYatNp3X", "https://openreview.net/forum?id=B1eCCoR5tm&noteId=HyexCkfjhm", "https://openreview.net/forum?id=B1eCCoR5tm&noteId=SylpauEq3Q"], "review_cdate": [1541388737076, 1541246920175, 1541191877053], "review_tcdate": [1541388737076, 1541246920175, 1541191877053], "review_tmdate": [1541533556601, 1541533556398, 1541533556193], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1eCCoR5tm", "B1eCCoR5tm", "B1eCCoR5tm"], "review_content": [{"title": "low technical novelty, but interesting results", "review": "Pros:\n-- Superior empirical results are the key highlights of this paper.\n-- The experiments are well designed and benchmarked against the state-of-the-art models.\n\nCons:\n-- One typically uses affine transformations of the training images to improve the performance of the CNN. From that perspective, the paper does not offer any new insight. I am not entirely convinced that this is a novel enough contribution to be accepted in ICLR.\n-- The \"ensemble of ensembles\" approach described in Section 3.5 is not clear. 
\n-- Overall, the paper does not have much novelty, but the results are quite promising.", "rating": "5: Marginally below acceptance threshold", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "numbers, comparisons, and pratical value", "review": "\tThis paper proposes a data ensemble method for image classification: sub sample an image, classify each sub sample, and vote those sub samples to get the final decision.\n\t\n\tQuestions:\n\t1. The validation accuracy of ResNet is much lower than that reported in the original ResNet paper, https://arxiv.org/pdf/1512.03385.pdf. For example, the top-1/5 accuracy of RestNet-50 is 79+/94+, which is only about 70+/89+% in this paper. Similarly, there is a big gap between the results of ResNet-152 reported in this paper and the original ResNet paper. Maybe I misunderstand something; otherwise, the results in this paper are not reliable.\n\t2. This work does not compare with any other data augmentation methods for testing, e.g., the widely used 10-crop test [ref1, ref2]: \u201cAt test time, the network makes a prediction by extracting five 224 \u00d7 224 patches (the four corner patches and the center patch) as well as their horizontal reflections (hence ten patches in all), and averaging the predictions made by the network\u2019s softmax layer on the ten patches.\u201d\n 3. A minor issue is the practical value of this work. If one can afford the computational cost of data ensemble test, why not train m (as the data ensemble in this paper) models and ensemble them given that ensemble of models usually brings more accuracy improvement? Note that the training of multiple models is conducted offline and the ensemble of models is of the same computation cost compared with the method in this paper. \n\n[ref1] Krizhevsky, Alex, Ilya Sutskever, and Geoffrey E. Hinton. \"Imagenet classification with deep convolutional neural networks.\" Advances in neural information processing systems. 2012.\n[ref2] He, Kaiming, et al. \"Deep residual learning for image recognition.\" Proceedings of the IEEE conference on computer vision and pattern recognition. 2016.\n", "rating": "4: Ok but not good enough - rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Pseudosaccades", "review": "The paper proposes a data augmentation technique where the input image is sub-sampled by randomly sampling rows and columns without replacement, which the authors call \u2018pseudosaccades\u2019. Rather than multiple classifiers, the authors ensemble using multiple \u2018pseudosaccades\u2019 as input, with the same network.\n\nComments:\nI think that the proposed augmentation is a neat trick. However, the inner-workings of the method are poorly presented (or not well understood). For eg. In section 3.5, while discussing the effects of the method on individual classes, the authors mention \u2018different architectures do tend to be affected by the pseudosaccades differently\u2019 and provide no further insights.\n\nThere are no experiments that compare this method with other standard data augmentation techniques. For instance, one could use a similar ensembling technique for transformations like shear, translation, rotation, etc. by randomly sampling their corresponding parameters. 
I would be interested in experimental results that compare the proposed ensemble with ensembles constructed using these common techniques.\n\nSince there is no reason for this technique to be used in isolation (I found no such motivation in the paper), it would be insightful to have experimental results where this technique is combined with the aforementioned standard augmentation techniques. Will this method\u2019s impact on the accuracy change with these other augmentations? (Ablation studies would be useful.)\n\nThis is a form of regularization and can be thought of as reverse structured dropout. Also, have the authors compared this with Cutout [1, 2]? Similar experiments and comparisons would be insightful.\n\n[1] Terrance DeVries and Graham W. Taylor. Improved regularization of convolutional neural networks with cutout. arXiv preprint arXiv:1708.04552, 2017.\n[2] Zhun Zhong, Liang Zheng, Guoliang Kang, Shaozi Li, and Yi Yang. Random erasing data augmentation. arXiv preprint arXiv:1708.04896, 2017.\n\nIn summary:\nThe performance improvements are incremental. The paper lacks sufficient technical contribution. Further, it does not provide comparisons with standard techniques and similar augmentation methods to demonstrate the usefulness of the method.", "rating": "4: Ok but not good enough - rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1544752751017, "meta_review_tcdate": 1544752751017, "meta_review_tmdate": 1545354517445, "meta_review_ddate": null, "meta_review_title": "meta-review", "meta_review_metareview": "The paper proposes a test-time data augmentation technique for ensembling classifier predictions. Reviewers pointed to a few concerns, including a lack of novelty and a lack of proper comparison with state-of-the-art models and other data augmentation approaches. Overall, all reviewers recommended rejecting the paper, and I concur with them.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper944/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1eCCoR5tm&noteId=HylP_0YggN"], "decision": "Reject"}
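
For readers of this record, the pseudosaccade scheme summarized in the abstract and in the third review (subsample the input by drawing rows and columns without replacement, classify each sketch with the same trained network, and aggregate the predictions) can be sketched as below. This is a minimal illustration, not the authors' code: the function names, the 224 x 224 sketch size, and the use of soft voting (averaging softmax outputs; the paper's exact aggregation rule may differ) are all assumptions, and `model` stands in for any classifier that returns per-class probabilities for a batch.

```python
import numpy as np

def pseudosaccade(image, out_h, out_w, rng):
    """One random data sketch: keep a random subset of rows and columns,
    drawn without replacement and kept in their original spatial order."""
    h, w = image.shape[:2]
    rows = np.sort(rng.choice(h, size=out_h, replace=False))
    cols = np.sort(rng.choice(w, size=out_w, replace=False))
    return image[np.ix_(rows, cols)]  # shape (out_h, out_w[, channels])

def pseudosaccade_predict(model, image, m=10, out_h=224, out_w=224, seed=0):
    """Classify m pseudosaccades of one image with the same network and
    aggregate by averaging the softmax outputs (soft voting); m, the sketch
    size, and the voting rule are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    batch = np.stack([pseudosaccade(image, out_h, out_w, rng)
                      for _ in range(m)])
    probs = np.asarray(model(batch))  # assumed shape (m, n_classes)
    return int(probs.mean(axis=0).argmax())
```

Sorting the sampled indices preserves spatial order, so each sketch remains an image-like subsample rather than a pixel shuffle, which is what makes reusing a single trained network plausible.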
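The 10-crop test quoted by the second review ([ref1, ref2]) can be sketched in the same style for comparison; again, `model`, the function names, and the 224 x 224 crop size are assumptions for illustration, and the input is assumed to be at least `size` pixels in each dimension.

```python
import numpy as np

def ten_crop(image, size=224):
    """The four corner crops and the center crop, plus the horizontal
    reflection of each: ten patches in all."""
    h, w = image.shape[:2]
    tops  = [0, 0, h - size, h - size, (h - size) // 2]
    lefts = [0, w - size, 0, w - size, (w - size) // 2]
    crops = [image[t:t + size, l:l + size] for t, l in zip(tops, lefts)]
    crops += [np.flip(c, axis=1) for c in crops]  # horizontal reflections
    return np.stack(crops)  # shape (10, size, size[, channels])

def ten_crop_predict(model, image, size=224):
    """Average the network's softmax outputs over the ten patches."""
    probs = np.asarray(model(ten_crop(image, size)))  # assumed (10, n_classes)
    return int(probs.mean(axis=0).argmax())
```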