{"forum": "B1esx6EYvr", "submission_url": "https://openreview.net/forum?id=B1esx6EYvr", "submission_content": {"title": "A critical analysis of self-supervision, or what we can learn from a single image", "authors": ["Asano YM.", "Rupprecht C.", "Vedaldi A."], "authorids": ["yuki@robots.ox.ac.uk", "chrisr@robots.ox.ac.uk", "vedaldi@robots.ox.ac.uk"], "keywords": ["self-supervision", "feature representation learning", "CNN"], "TL;DR": "We evaluate self-supervised feature learning methods and find that with sufficient data augmentation early layers can be learned using just one image. This is informative about self-supervision and the role of augmentations.", "abstract": "We look critically at popular self-supervision techniques for learning deep convolutional neural networks without manual labels. We show that three different and representative methods, BiGAN, RotNet and DeepCluster, can learn the first few layers of a convolutional network from a single image as well as using millions of images and manual labels, provided that strong data augmentation is used. However, for deeper layers the gap with manual supervision cannot be closed even if millions of unlabelled images are used for training.\nWe conclude that:\n(1) the weights of the early layers of deep networks contain limited information about the statistics of natural images, that\n(2) such low-level statistics can be learned through self-supervision just as well as through strong supervision, and that\n(3) the low-level statistics can be captured via synthetic transformations instead of using a large image dataset.", "pdf": "/pdf/83de939dbc3579dae2d5459efeb3e0f683c310d2.pdf", "paperhash": "ym|a_critical_analysis_of_selfsupervision_or_what_we_can_learn_from_a_single_image", "_bibtex": "@inproceedings{\nYM.2020A,\ntitle={A critical analysis of self-supervision, or what we can learn from a single image},\nauthor={Asano YM. and Rupprecht C. 
and Vedaldi A.},\nbooktitle={International Conference on Learning Representations},\nyear={2020},\nurl={https://openreview.net/forum?id=B1esx6EYvr}\n}", "full_presentation_video": "", "original_pdf": "/attachment/1e3633b05bb53566ebc3164d0cc380b51348e5cf.pdf", "appendix": "", "poster": "", "spotlight_video": "", "slides": ""}, "submission_cdate": 1569438963083, "submission_tcdate": 1569438963083, "submission_tmdate": 1583912031508, "submission_ddate": null, "review_id": ["Ske6OaZ-qr", "BkgPU07CFS", "r1x7498kcS"], "review_url": ["https://openreview.net/forum?id=B1esx6EYvr&noteId=Ske6OaZ-qr", "https://openreview.net/forum?id=B1esx6EYvr&noteId=BkgPU07CFS", "https://openreview.net/forum?id=B1esx6EYvr&noteId=r1x7498kcS"], "review_cdate": [1572048244994, 1571860047415, 1571936811339], "review_tcdate": [1572048244994, 1571860047415, 1571936811339], "review_tmdate": [1574388613313, 1574341547054, 1572972606311], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper351/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper351/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper351/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1esx6EYvr", "B1esx6EYvr", "B1esx6EYvr"], "review_content": [{"experience_assessment": "I have read many papers in this area.", "rating": "6: Weak Accept", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "title": "Official Blind Review #1", "review": "Update 11/21\nWith the additional experiments (testing a new image, testing fine-tuning of hand-crafted features), additions to related work, and clarifications, I am happy to raise my score to accept. Overall, I think this paper is a nice sanity check on recent self-supervision methods. In the future, I am quite curious about how these mono-image learned features would fare on more complex downstream tasks (e.g., segmentation, keypoint detection) which necessarily rely less on texture.\n\nSummary\nThis paper seeks to understand the role of the *number of training examples* in self-supervised learning with images. The usefulness of the learned features is evaluated with linear probes at each layer for either ImageNet or CIFAR image classification. Empirically, they find that a single image along with heavy data augmentation suffices for learning the first 2-3 layers of convolutional weights, while later layers improve with more self-supervised training images. The result holds for three state-of-the-art self-supervised methods, tested with two single-image training examples.\n\nIn my view, learning without labels is an important problem, and it is interesting to see what can be learned from a single image and simple data augmentation strategies. \n\nComments / Questions\nIt seems to me that for completeness, Table 4 should include the result of training a supervised network on top of random conv1/2 and Scattering network features, because this experiment is actually testing what we want - performance of the features when fine-tuned for a downstream task. So for example, even if a linear classifier on top of Scattering features does poorly, if downstream fine-tuning results in the same performance as another pre-training method, then Scattering is a perfectly fine approach for initial features. 
Could the authors please either correct this logic or provide the experiments?\nFurther, it seems that the results in Table 4 might be a bit obscured by the size of the downstream task dataset. I wonder if the learned features require fewer fully supervised images to obtain the same performance on the downstream task?\nCan the authors clarify how the neural style transfer experiment is performed? The method from Gatys et al. requires features from different layers of the feature hierarchy, including deeper layers. Are all these features taken directly from the self-supervised network or is it fine-tuned in some way?\nWhile I appreciate the computational burden of testing more images, it does feel that Image A and B are quite cherry-picked in being very visually diverse. Because of this, it seems like a precise answer to what makes a good single training image remains unknown. I wonder how feasible it is to find a proxy metric that corresponds to the performance on downstream tasks, which is expensive to compute. It might be interesting to try to generate synthetic images (or modify real ones) that are good for this purpose and observe their properties.\nI disagree with the claim of practicality in the introduction (page 2, top). While training on one image does reduce the burden of the number of images, the computational burden remains the same. And as mentioned above, it doesn\u2019t seem likely that *any* image would work for this method. Finally, more images are needed to learn the deeper layers for the downstream task anyway. \n\nThe paper is well-written and clear. \n", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A"}, {"experience_assessment": "I have published one or two papers in this area.", "rating": "6: Weak Accept", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "title": "Official Blind Review #3", "review": "The paper studies self-supervised learning from very few unlabeled images, down to the extreme case where only a single image is used for training. From the few/single image(s) available for training, a data set of the same size as some unmodified reference data set (ImageNet, CIFAR-10/100) is generated through heavy data augmentation (cropping, scaling, rotation, contrast changes, adding noise). Three popular self-supervised learning algorithms are then trained on these data sets, namely (Bi)GAN, RotNet, and DeepCluster, and the linear probing accuracy on different blocks is compared to that obtained by training the same methods on the reference data sets. The linear probing accuracy from the first few conv layers of the network trained on the single/few image data set is found to be comparable to or better than that of the same model trained on the full reference data set.\n\nI enjoyed the paper; it addresses the interesting setting of an extremely small data set, which complements the large number of studies on scaling up self-supervised learning algorithms. I think it is not extremely surprising that the proposed strategy allows learning low-level features as captured by the first few layers, but I think it is worth studying and quantifying. The experiments are carefully described and presented, and the paper is well-written.\n\nHere are a few questions and concerns:\n\n- How much does the image matter for the single-image data set? 
The selected images A and B are of very high entropy and show a lot of different objects (image A) and animals (image B). How do the results change if e.g. a landscape image or an abstract architecture photo is used?\n\n- How general is the proposed approach? How likely is it to generalize to other approaches such as Jigsaw (Doersch et al., 2015) and Exemplar (Dosovitskiy et al., 2016)? It would be good to comment on this.\n\n- [1] found that the network architecture for self-supervised learning can matter a lot, and that by using a ResNet architecture, performance of SSL methods can be significantly improved. In particular, the linear probing accuracy appears to be often monotonic as a function of the depth of the layer it is computed from. This is in contrast to what is observed for AlexNet in Tables 2 and 3, where the conv5 accuracy is lower than the conv4 accuracy. It would therefore be instructive to add experiments for ResNet to see how well the results generalize to other network architectures.\n\n- Does the MonoGAN exhibit stable training dynamics comparable to training WGAN on CIFAR-10, or do the training dynamics change on the single-image data set?\n\n\nOverall, I\u2019m leaning towards accepting the paper, but it would be important to see how well the experiments generalize to i) ResNet and ii) other (lower entropy) input images.\n\n[1] Kolesnikov, A., Zhai, X. and Beyer, L., 2019. Revisiting self-supervised visual representation learning. arXiv preprint arXiv:1901.09005.\n\n\n\n---\nUpdate after rebuttal:\nI thank the authors for their detailed response. I appreciate the authors\u2019 efforts in investigating the issues raised, and the described experiments sound promising. Unfortunately, the new results are not presented in the revision. I will therefore keep my rating.", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory."}, {"experience_assessment": "I have published in this field for several years.", "rating": "1: Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper thoroughly.", "review_assessment:_checking_correctness_of_experiments": "I carefully checked the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "N/A", "review": "This paper explores self-supervised learning in the low-data regime, comparing results to self-supervised learning on larger datasets. BiGAN, RotNet, and DeepCluster serve as the reference self-supervised methods. It argues that early layers of a convolutional neural network can be effectively learned from a single source image, with data augmentation. A performance gap exists for deeper layers, suggesting that larger datasets are required for self-supervised learning of useful filters in deeper network layers.\n\nI believe the primary claim of this paper is neither surprising nor novel. The long history of successful hand-designed descriptors in computer vision, such as SIFT [Lowe, 1999] and HOG [Dalal and Triggs, 2005], suggests that one can design (with no data at all) features reminiscent of those learned in the first couple of layers of a convolutional neural network (local image gradients, followed by characterization of those gradients over larger local windows).\n\nMore importantly, it is already well established that it is possible to learn, from only a few images, filter sets that resemble the early layers of filters learned by CNNs. 
This paper fails to account for a vast amount of literature on modeling natural images that predates the post-AlexNet deep-learning era.\n\nFor example, see the following paper (over 5600 citations according to Google Scholar):\n\n[1] Bruno A. Olshausen and David J. Field. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature, 1996.\n\nFigure 4 of [1] shows results for learning 16x16 filters using \"ten 512x512 images of natural scenes\". Compare to the conv1 filters in Figure 2 of the paper under review. This 1996 paper clearly established that it is possible to learn such filters from a small number of images. There is a long history of sparse coding and dictionary learning techniques, including multilayer representations, that follows from the early work of [1]. The paper should at minimum engage with this extensive history, and, in light of it, explain whether its claims are actually novel."}], "comment_id": ["r1lgQemhjB", "BJx7C1bfjH", "Hkl_Uybzor", "Syg3upgMjH"], "comment_cdate": [1573822487794, 1573158859427, 1573158736173, 1573158259960], "comment_tcdate": [1573822487794, 1573158859427, 1573158736173, 1573158259960], "comment_tmdate": [1573822487794, 1573158859427, 1573158736173, 1573158259960], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper351/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper351/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper351/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper351/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Final paper update", "comment": "We have updated our paper with the following main changes:\n\n* As suggested by R2, we adjusted the message of the paper to more accurately reflect the critical results of this paper and how they relate to previous hand-designed feature learning methods (esp. Sec. 2).\n* As requested by R3, we have provided an additional experiment on training on a single, less crowded, image with DeepCluster (in Sec. 4.3) and observe that it can still achieve almost the same performance as with the other photographic image.\n* We have also provided freeze-and-retrain experiments for the scattering transform and our single-image-trained networks (Tab. 4) and find that while the scattering transform does outperform random conv1-conv2, our CNNs trained with self-supervision on one image still yield better performance.\n* Incorporated several smaller clarifications requested by the reviewers.\n* Due to the short rebuttal period, the experiment on training a ResNet has not yet finished evaluating and we will provide the finished results in the camera-ready version.\n"}, {"title": "Response to Review #3", "comment": "We thank the reviewer for their time and their clear understanding of the key aspects of the paper. We address the reviewer\u2019s questions in the following:\n\n> How much does the image matter for the single-image data set? \n\nThe reviewer raises an important point about the tested single images. Less crowded images could lead to many patches having no gradients (e.g. showing only the sky), leading to a failure of at least RotNet, if not also BiGAN, on many samples of the augmented dataset. 
Our image choices were thus motivated by simplicity: we did not want to add a pipeline that would, for example, extract only patches with sufficiently large image gradients. We are training DeepCluster now on a significantly less busy image and will report results in the coming days.\n\n\n> How general is the proposed approach? \n\nWe believe that this method will work well for pretext tasks that rely on detecting and learning invariances, such as Exemplar [1], Colorization [2], and Noise-as-targets [3]. Methods such as Context [4] and Jigsaw [5] could work less well, as they might easily find a way to cheat given the limited amount of original data in one image. However, as the authors note in the paper cited by the reviewer, the accuracy of a pretext task does not translate directly into downstream task performance, so even a method whose pretext task is easy on one image\u2019s patches does not necessarily fail. \nThis is an interesting avenue for research and we hope that this paper could inspire follow-up work on this topic. \n\n\n> [1] found that the network architecture for self-supervised learning can matter a lot, and that by using a ResNet architecture, performance of SSL methods can be significantly improved.\n\nIndeed, the paper mentioned by the reviewer shows that the performance of various self-supervised methods for ResNets does not degrade with depth as it does for VGG and AlexNet, due to the skip-connections. However, as ResNets were not originally used to train the methods analyzed in our paper, we have stayed within the bounds required for fair comparison and only used AlexNet. We agree with the reviewer that it would be good to check if ResNets, in general, can also be trained in such a manner (e.g. could global pooling destroy the signal?), so we are running an experiment on a ResNet-18 and will report results in the upcoming days.\n\n\n> Does the MonoGAN exhibit stable training dynamics comparable to training WGAN on CIFAR-10, or do the training dynamics change on the single-image data set?\n\nMonoGAN trained without any exploding gradients or other problems frequently encountered by GANs. As we have suggested in the paper, this might be due to the fact that image patches from one image follow a simpler distribution than in-the-wild images of a complete dataset.\n\n\u2014\n[1] A. Dosovitskiy et al. \"Discriminative unsupervised feature learning with exemplar convolutional neural networks.\" TPAMI 2015\n[2] R. Zhang et al. \"Colorful image colorization.\" ECCV 2016.\n[3] P. Bojanowski et al. \"Unsupervised learning by predicting noise.\" ICML 2017.\n[4] D. Pathak et al. \"Context Encoders: Feature Learning by Inpainting.\" CVPR 2016.\n[5] M. Noroozi et al. \"Unsupervised learning of visual representations by solving jigsaw puzzles.\" ECCV 2016"}, {"title": "Response to Review #2", "comment": "We hope that the reviewer will change their opinion once we clarify the goal of our paper and explain how it relates to prior work, as we believe we are fundamentally on the same page.\n\nWe are well aware of SIFT, HOG, the results of Olshausen and Field on learning image filters from a few example images and no annotations (some of us are sufficiently old to have implemented all such methods from scratch as grad students!), as well as Mallat\u2019s Scattering nets [1]. 
In fact, we discuss and evaluate Oyallon\u2019s 2017 implementation [2] of this on page 5 and in Table 2 of the paper.\n\nHowever, the existence of these methods does not detract from the message of this paper. Our goal is to provide a \u201ccritical analysis\u201d of current self-supervision methods because these *specific* tools are now very heavily researched. Our paper sends a cautionary message: for the early layers of a network, current self-supervised learning techniques cannot improve on what can be obtained from a single image plus transformations, and they improve only in a limited manner for deeper layers, despite ingesting millions of images (which is touted as their key advantage). In particular, the claims are not limited to the first few layers, as we show that one image recovers two thirds of the performance of deeper layers as well. This message, which is a partially negative result, stands on its own, regardless of whether good low-level features can be obtained in some other way (e.g. manually), and, we hope the reviewer will agree, should be known by the community.\n\nNevertheless, we also agree with the reviewer that it is interesting to put these findings in a broader context, so we are happy to expand the discussion of prior feature learning/design work further. However, please note that none of this literature makes our specific findings on the limits of self-supervision obvious. Furthermore, although this is a little beside the point, in the paper we do show in Table 2 that the scattering transform works as well as conv1, but that from conv2 onwards self-supervision on a single image does better, so even the claim that handcrafted features are equivalent to the first few layers in deep networks is not proven. Also, the fact that Olshausen\u2019s filters resemble conv1 does not mean that they are equivalent to conv1 in recognition performance. \n\n\u2014\n[1] J. Bruna and S. Mallat. \"Invariant scattering convolution networks.\" TPAMI 2013\n[2] E. Oyallon, et al. \"Scaling the scattering transform: Deep hybrid networks.\" ICCV 2017"}, {"title": "Response to Review #1", "comment": "We thank the reviewer for their time and detailed reading of the paper. In the following, we address each of the reviewer\u2019s comments:\n\n> Table 4 should include the result of training a supervised network on top of random conv1/2 and Scattering network features [\u2026] Scattering is a perfectly fine approach for initial features.\n\nOur aim is to investigate the \u201cpower\u201d (or lack thereof) of current self-supervision techniques when applied to standard deep network models. This is of interest because self-supervision is a hot topic of research.\n\nFinding out whether e.g. the Scattering Transform can replace the first few layers of a network is interesting, for example to know whether handcrafted features can do as well as (self-)supervision for the first few layers, but it is not a substitute for our core investigation (furthermore, we also look at deeper layers, where these features are unlikely to be competitive). Still, such an experiment can help put our findings in context. This is why we do include them in Table 2 of the paper, where we show that scattering is not quite as good as even single-image self-supervision.\n\nWe do think that the suggestion of finetuning/retraining the rest of the model after replacing the first few layers is also interesting. 
Still, we think that this can complement but not replace linear probing, as the latter is a more direct way of finding out what the probed layers can do. For instance, it is likely possible to learn a good network even by replacing the first layer with the identity function \u2014 it is just a slightly less deep model.\n\nFor these two reasons, we are running the requested experiments and we hope to be able to update Table 4 in the following days.\n\n> Can the authors clarify how the neural style transfer experiment is performed?\n\nIndeed, the method by Gatys et al. uses deeper layers as well, which we also use \u2014 straight from the self-supervised method, without fine-tuning or anything else. We will update the paper with these details. \n\n> While I appreciate the computational burden of testing more images, it does feel that Image A and B are quite cherry-picked in being very visually diverse. [...] It might be interesting to try to generate synthetic images (or modify real ones) that are good for this purpose and observe their properties.\n\nThank you. Finding the best single training image and finding useful synthetic images are both very interesting ideas. While we are happy to consider doing so as a next step, it is next to impossible to do so in time for the rebuttal (we do not have access to thousands of GPUs).\n\nNevertheless, we would argue that the paper stands on its own by making some interesting observations on the ability of self-supervision to extract useful information from more than one (or a few) images, and by investigating the role of data augmentation in this process. We hope that the reviewer will agree that the community will be interested in hearing about these findings.\n\n> I disagree with the claim of practicality in the introduction (page 2, top). While training on one image does reduce the burden of the number of images, the computational burden remains the same. \n\nOur intention wasn\u2019t to say that we can save compute time, but rather data collection effort (which is also a practical issue in some applications). Nevertheless, we agree that our findings have mostly theoretical value, so we have adjusted the wording to reflect that.\n\n> And as mentioned above, it doesn\u2019t seem likely that *any* image would work for this method.\n\nIt is true that we did not quite prove that, so we have reworded the text to tone down this claim.\n\nTo be a bit more specific, obviously a blank image would not work, and textureless images would probably not work well either. However, in the paper we did use the first two images we manually selected from Google Image Search (while we did select images with some texture, they have not otherwise been optimized for good performance in our evaluation). 
Thus, we think that it is extremely likely that many other images would work just as well.\n\n> Finally, more images are needed to learn the deeper layers for the downstream task anyway.\n\nTrue, but even for deeper layers a single image achieves two thirds of the performance that self-supervision can squeeze out of a million images, which we think is interesting.\n"}], "comment_replyto": ["B1esx6EYvr", "BkgPU07CFS", "r1x7498kcS", "Ske6OaZ-qr"], "comment_url": ["https://openreview.net/forum?id=B1esx6EYvr&noteId=r1lgQemhjB", "https://openreview.net/forum?id=B1esx6EYvr&noteId=BJx7C1bfjH", "https://openreview.net/forum?id=B1esx6EYvr&noteId=Hkl_Uybzor", "https://openreview.net/forum?id=B1esx6EYvr&noteId=Syg3upgMjH"], "meta_review_cdate": 1576798693963, "meta_review_tcdate": 1576798693963, "meta_review_tmdate": 1576800941529, "meta_review_ddate": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "This paper studies the effectiveness of self-supervised approaches by characterising how much information they can extract from a given dataset of images on a per-layer basis. Based on an empirical evaluation of RotNet, BiGAN, and DeepCluster, the authors argue that the early layers of CNNs can be effectively learned from a single image coupled with strong data augmentation. Secondly, the authors also provide some empirical evidence that supervision might still be necessary to learn the deeper layers (even in the presence of millions of images for self-supervision). \nOverall, the reviewers agree that the paper is well written and timely given the growing popularity of self-supervised methods. Given that most of the issues raised by the reviewers were adequately addressed in the rebuttal, I will recommend acceptance. We ask the authors to include the additional experiments requested by the reviewers (they are valuable even if the conclusions are not perfectly aligned with the main message).\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1esx6EYvr&noteId=Zpxizc-jUm"], "decision": "Accept (Poster)"}