{"forum": "BJlVNY8llV", "submission_url": "https://openreview.net/forum?id=BJlVNY8llV", "submission_content": {"title": "Hybrid Rotation Invariant Networks for small sample size Deep Learning", "authors": ["Alexander Katzmann", "Marc-Steffen Seibel", "Alexander M\u00fchlberg", "Michael S\u00fchling", "Dominik N\u00f6renberg", "Stefan Maurus", "Thomas Huber", "Horst-Michael Gro\u00df"], "authorids": ["alexander.katzmann@siemens-healthineers.com", "marc-steffen.seibel@tu-ilmenau.de", "alexander.muehlberg.ext@siemens-healthineers.com", "michael.suehling@siemens-healthineers.com", "dominik.noerenberg@med.uni-muenchen.de", "stefan.maurus@med.uni-muenchen.de", "thomas.huber@med.uni-muenchen.de", "horst-michael.gross@tu-ilmenau.de"], "keywords": ["rotational invariance", "regularization", "colorectal cancer", "pancreatic cancer", "liver lesion segmentation"], "TL;DR": "A method for inherently invariant convolutional neural networks for small sample size deep learning and its application on medical imaging data.", "abstract": "Medical image analysis using deep learning has become a topic of steadily growing interest. While model capacity is continiously increasing, limited data is still a major issue for deep learning in medical imaging. Virtually all past approaches work with a high amount of regularization as well as systematic data augmentation. In explorative tasks realistic data augmentation with affine transformations may not always be possible, which prevents models from effective generalization. Within this paper, we propose inherently rotationally invariant convolutional layers enabling the model to develop invariant features from limited training data. Our approach outperforms classical convolutions on the CIFAR-10, CIFAR-100, and STL-10 datasets. 
We show the transferability to clinical scenarios by applying our approach on oncologic tasks for metastatic colorectal cancer treatment assessment and liver lesion segmentation in pancreatic cancer patients.", "code of conduct": "I have read and accept the code of conduct.", "pdf": "/pdf/2bf69d0f618686d7208690d5ef47c893a01bff70.pdf", "paperhash": "katzmann|hybrid_rotation_invariant_networks_for_small_sample_size_deep_learning"}, "submission_cdate": 1544739116503, "submission_tcdate": 1544739116503, "submission_tmdate": 1545069843102, "submission_ddate": null, "review_id": ["rkxN6dt2X4", "H1x3LN82QN", "Sklp8qhBQN"], "review_url": ["https://openreview.net/forum?id=BJlVNY8llV&noteId=rkxN6dt2X4", "https://openreview.net/forum?id=BJlVNY8llV&noteId=H1x3LN82QN", "https://openreview.net/forum?id=BJlVNY8llV&noteId=Sklp8qhBQN"], "review_cdate": [1548683451884, 1548670035758, 1548237396751], "review_tcdate": [1548683451884, 1548670035758, 1548237396751], "review_tmdate": [1548856756129, 1548856752485, 1548856720680], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper111/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper111/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper111/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BJlVNY8llV", "BJlVNY8llV", "BJlVNY8llV"], "review_content": [{"pros": "The paper identifies important problems in AI based (medical image) analysis, and presents a novel idea to deal with the notion of rotation invariance. The idea is inspired by the success of ORB (Oriented FAST and Rotated BRIEF), and provides an interesting approach to equip CNNs with a method to deal with rotation invariance/equivariance. 
The experimental results are in favor of the proposed CNNs over classical CNNs.", "cons": "This section contains all minor and major comments and suggestions.\n\nOverall, I do not recommend accepting this submission based on the following: \n\u2013 citations are sometimes incorrect or missing and I find some statements in the manuscript to be disrespectful \n\u2013 the authors make claims that are not supported by the paper, nor by citations \n\u2013 the core method itself is not clearly described (I\u2019m left with a lot of unanswered questions).\n\n[Introduction]\n\nThe introduction starts well; it is quite ambitious and identifies some core challenges in AI based image analysis: \u201c\u2026 there is still a lack of knowledge on the emergence of concrete interactions within the network\u201d, \u201c..it might be advisable to optimize the network\u2019s choice of transformation itself..\u201d. However, I do not see why some of the challenges are mentioned, or how these are solved. In fact, it left me a bit confused and it gave me the impression that it was somewhat contradictory. A lot of it regards invariance to unknown transformations and it is suggested that you should not assume too many invariances a priori. It is suggested that in this paper these problems are addressed on a generic level, however, I found the paper to be highly specific to rotation invariance only (an a priori choice), which contradicts the grand view posed in the paragraphs before.\n\nThe paragraph on encoding rotation invariance into NNs is a bit weak. It misses some key publications. For a very recent overview see e.g. [1]. Related work on spatial transformer networks (see e.g. [2]) and group theoretical approaches are missing. For theory see e.g. [1,3,4]. For successful applications (and theory) in medical imaging see e.g. [5,6], in addition to the ones you already have. Furthermore, when citing work on steerable filters in CNNs I think [7] deserves an explicit mention as well. 
\n\nI also don\u2019t think it is correct to say that Weiler et al. 2017 is based on the work of Jacob and Unser (they are not even cited in Weiler et al. 2017). Jacob and Unser have made a great impact in the computer vision field with steerable filters, and I think they do deserve a citation, but when doing so, I think it is fair to also acknowledge the ones that actually came up with the notion of steerable filters in computer vision in 1991: Freeman and Adelson [8]. \n\nI also found the statement in the last paragraph on page 2 dubious: \u201cWhile a) *unintentionally* reduces the effective network capacity\u2026\u201d. What is meant here? As far as I understand, in the above-mentioned methods the transformation invariances/equivariances are explicitly made part of the network architecture so that the networks do not need to learn geometric relations. E.g., the network does not need to learn rotated copies and network capacity becomes available for learning task-specific representations, thereby increasing performance. I think this effect is neither \u201cunintentional\u201d nor does it reduce network capacity, it in fact increases it.\n\n\n[2. Materials and methods]\n\n\n(punctuation is missing, in particular (3) and (4) should end with a \u201c.\u201d)\n\nTypo on page 3 \u201cthe belief that data amount might\u201d, \u201cdata amount\u201d->\u201dthe amount of data\u201d?\n\nThe last sentence of the intro of 2: \u201cWith our approach \u2026 a matter of the network optimization process\u201d? I don\u2019t understand this statement. From my point of view it is more a matter of network design than a network optimization process.\n\nThe sentence before (3) is not logically correct. The moments do not allow for rotation (you can rotate patches without any need for moments). A representative orientation can be derived using the moments and this orientation can be used to rotate the patches.\n\nEq. 
(3) describes a way of estimating a global patch orientation. I expect, however, that this angle is highly sensitive to global (low-freq) intensity variations. The given angle is essentially the angle that the center of mass makes w.r.t. the origin. If this center and the origin coincide this angle does not even exist, and if they are close I expect a high uncertainty on the angle. \n\n\u201cgeneralization to the n-dimensional case is straightforward\u201d, such statements should not be made lightly unless you have a good citation to support this. Already in the case n=3 you\u2019d have to make decisions on how to couple orientations (in S^2, which has 2 parameters) with rotations (in SO(3) which has three parameters).\n\n\n\u201cFurther approaches could include a learned transformation\u201d. Also here I think such extensions are not trivial at all and do not follow directly from the work presented here. I have several problems with this: \n1.\tYou do not provide any details to support this statement (except for a reference to section 4 which basically repeats this statement)\n2.\tYou could of course try to learn this transformation matrix (which is essentially happening in spatial transformer networks (STNs) [2]), but I understood that the whole point of using matrix (4) was that it is parameterized by a rotation which you \u201cknow\u201d (or at least are able to estimate), you cannot estimate more general parameters in the same way you estimate orientation.\n\n\nEq. (5) gives a description of the rotationally invariant layer, which includes a convolution of weight matrix W with an image patch. To me this equation is confusing since the difference in notation between weights (capitalized) and patches (lowercase) suggests that they are different data types. You could clarify (5) by also mentioning the size of W (which is k x k?). Am I right that (5) specifies a fully connected layer? 
(the patches have the same size as W)\n\n\nLet\u2019s assume p is larger than the convolution kernels. Then I have a problem with interpreting the framework: The rotation invariance is global (first the entire patch is rotated and then a standard conv is applied). In the next layer, the full input is rotated and then again a conv is applied. How does this work at the full image level, e.g. in the U-nets? If now again the full image is rotated then you completely destroy the locality property and structure of the convolutions. E.g. a rotation of 180 degrees moves a pixel all the way to the other side of the image, and if the feature maps are rotated independently of each other you completely miss spatial correspondences. I am probably missing something here, but this is as much as I can make of the presented methodology. A clearer, less ambiguous, explanation of the methodology would have been helpful.\n\n\nThis brings me to my next question. You choose to go for a hybrid approach. What happens if you go fully rotation invariant? (non-hybrid)\n\n\nWhat happens if you do not additionally provide the values sin \\theta(x,y), cos\\theta(x,y) as extra feature maps? Why are the orientations position dependent? Eq. (3) describes a global orientation estimate (independent of x and y). Perhaps the orientations are indeed determined at each pixel location, but then how would you define the convolution of (5); maybe rotate each kernel locally? This would make the conv layer highly non-linear and computationally very expensive. Either way, essential details are missing\u2026\n\n\nSection 2.3.1 on page 6, \u201cfor three iterations with varying\u201d. Perhaps you can rewrite this sentence since \u201citeration\u201d typically refers to one optimization step. Perhaps you can say, we repeated each experiment 3 times with different initializations (at least if this is what is meant here, otherwise I don\u2019t understand the sentence).\n\n\n[4. 
Discussion]\n\n\u201cThough there generally is some understanding on how rotational invariance can be realized\u2026\u201d. This sentence undermines the work by many others in this direction. The last couple of years have seen quite a few contributions to rotationally invariant and equivariant networks (see citations below). The general theory is well understood from a group theoretical point of view [1,2,3] and it has seen great success in medical imaging (see the citations in your manuscript and e.g. [4,5]) in terms of performance, network capacity/complexity and use of limited training samples. To say that there is \u201csome understanding\u201d is in my opinion a severe understatement.\n\n\n[references]\n[1] Cohen, Taco, Mario Geiger, and Maurice Weiler. \"A General Theory of Equivariant CNNs on Homogeneous Spaces.\" arXiv preprint arXiv:1811.02017 (2018)\n[2] Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. \"Spatial transformer networks.\" Advances in neural information processing systems. 2015.\n[3] Cohen, Taco, and Max Welling. \"Group equivariant convolutional networks.\" International conference on machine learning. 2016.\n[4] Kondor, Risi, and Shubhendu Trivedi. \"On the generalization of equivariance and convolution in neural networks to the action of compact groups.\" arXiv preprint arXiv:1802.03690 (2018).\n[5] Bekkers and Lafarge et al. \u201cRoto-Translation Covariant Convolutional Networks for Medical Image Analysis\u201d. In: MICCAI 2018\n[6] Veeling, Bastiaan S., et al. \"Rotation Equivariant CNNs for Digital Pathology.\" In: MICCAI 2018\n[7] D. Worrall, S. Garbin, D. Turmukhambetov, and G. Brostow, \u201cHarmonic networks: Deep translation and rotation equivariance,\u201d Preprint arXiv:1612.04642, 2016.\n[8] W. Freeman and E. Adelson, \u201cThe design and use of steerable filters,\u201d IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13, no. 9, pp. 
891\u2013906, 1991\n", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "-\tAddress an important topic \u2013 could we train the networks by learning the data augmentation policy itself? Can we generate rotationally invariant networks?\n-\tAttempt to generate rotationally invariant networks through an orientation-normalization patch-based layer.\n-\tOutperforms detection on CIFAR-100 and STL-10 \n-\tBetter AUC when predicting tumor growth\n-\tSimilar results when doing segmentation\n", "cons": "-\tThe presented method is a combination of standard convolutions and rotationally invariant layers. It would be interesting to have a unified framework.\n-\tAs the authors acknowledge, their method applies a per-patch rotation correction, while there is a lot of rotation covariance in natural images, thus the need for a hybrid network.\n-\tUnclear if the authors used data augmentation when training the standard networks. One of the key motivations in the work is that using the proposed method one could avoid costly data augmentation techniques. However, such a statement is not demonstrated empirically.\n-\tDiscussion incoherent with results \u2013 in the presented results the method does not always outperform classical convolutions. \no\tFor CIFAR-10, results are the same. \no\tFor tumor growth prediction, while having a higher AUC, the accuracy is much lower (how was the threshold selected?). It would be of interest to see the ROCs, since sometimes AUCs can be misleading due to early mistakes. \no\tDICE results on liver lesion segmentation are very similar and likely will not pass any statistical test.\n-\tPresentation: the second paragraph of the introduction seems out of place. 
Houndred -> hundred\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "0) Summary\n The manuscript proposes a methodology to locally normalize for rotation using image patch moments. The algorithm is applied to five 2d datasets: three public datasets from natural image classification and two (vaguely described) in-house datasets: one derived from volumetric CT data (mCRC) for classification and the other for liver lesion segmentation.\n1) Quality\n The paper uses moment-based local patch normalization, which is a conceptually simple idea.\n2) Clarity\n The technical content is well accessible.\n3) Originality\n The idea of locally normalizing the 2d patches using image moments seems to have not been explored before.\n4) Significance\n The rotation invariance property is important in DNNs. Hence the proposal has a certain value.\n5) Reproducibility\n The method is evaluated on three public datasets using a standard baseline.", "cons": "1) Quality\n A proper comparison in terms of runtime is missing. The performance is only evaluated up to a certain size of the dataset which makes the overall judgement difficult. How do the plots in Figure 2 look for growing #Samples? Also, it is unclear whether the improved performance in some cases is really due to the rotation invariance. Control experiments with randomly rotated images are missing.\n2) Clarity\n There are some typos.\n - Abstract: \"continiously\", \"explorative tasks, realistic\"\n - Intro: \"perceived by human\"\n Certain parts of the paper can be shortened to meet the soft 8 page limit e.g. the discussion of invariance/equivariance and the sometimes vague description e.g. 
paragraph before section 2.1.\n3) Originality\n The line of research around spatial transformer networks [1] is not discussed.\n4) Significance\n The possible impact of the paper is limited by the fact that the evaluation is done on 2d datasets only and -- more fundamentally -- the approach only provides local rotation invariance, which is only a small step towards proper rotation invariance. The results on medical datasets are somewhat inconclusive.\n5) Reproducibility\n Two datasets are not public and the implementation is kept closed, which renders the results pretty hard to reproduce.\n\n[1] Jaderberg et al., Spatial Transformer Networks, NIPS 2015, https://papers.nips.cc/paper/5854-spatial-transformer-networks.pdf\n", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["B1ecki-C7V", "ryx9xib0XE", "rylNt71CXV", "HkeuatCpXE"], "comment_cdate": [1548782305769, 1548782322069, 1548772220393, 1548769728094], "comment_tcdate": [1548782305769, 1548782322069, 1548772220393, 1548769728094], "comment_tmdate": [1555946050082, 1555946049821, 1555946049604, 1555946049346], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper111/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper111/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper111/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper111/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thanks for your feedback, and apologies for any misunderstanding", "comment": "Dear reviewer,\n\nFirst of all, we are sorry that any of our sentences came across as disrespectful. 
We explicitly assure you that this was not our intention at this or any other time.\n\nWe want to thank you for your broad and comprehensive feedback and would like to address some of your open questions:\n\na) Specificity to rotational invariance:\nWe absolutely agree with you. Our paper focuses on rotational invariance as an example of directly integrated invariance. We also agree that other forms of invariances can (and should) be implemented, while rotational invariance was thought to be a vivid example of how this could be done in a basic, simple and comprehensible way.\n\nb) Spatial transformer networks:\nYou are absolutely right that there should have been a comparison to the spatial transformer networks. We will add such a reference in any future submission.\n\nc) Group convolutional neural networks:\n\nWe agree with you that the work on group convolutional networks should have been mentioned in our work. However, while the results on Rotation Equivariant CNNs for Digital Pathology had a broad mathematical background, were impressive and showed a clear trend in the extensive comparison to other approaches, their significance remained unclear to us in the paper. In both mentioned publications, the amount of training data used, though also drawn from a limited number of original samples, was significantly higher than in our medical scenario, so unfortunately the applicability of the approach to our scenario remained unclear to us. Generally, the theory of group convolutional neural networks might be very suitable for the given problem, so we will include it in future submissions.\n\nd) Reduction of network capacity:\nYou are right that the explicit design for invariance is generally intentional. The misunderstanding about whether an effective model capacity reduction is intentional is likely caused by a different definition of effective model capacity, as there is no generally accepted one for this term. 
Let us assume \"model complexity = variety of functions a model can approximate\". If we consider the ratio of effective (actual) model complexity to structural (theoretical) model complexity, we can only make use of duplicated filters if we simultaneously reduce the (relative) effective model complexity, as the response to rotated versions of filters comes at the cost of a rotated duplication of the same filters. We agree with you that we should have made this clearer.\n\ne) Rotation of patches:\nWe agree with you on the potential impossibility of determining a patch rotation. For this case we simply use the non-rotated version, which in many cases should have no practical impact, as the output, e.g. for empty regions, should be constant for all rotations and in later layers practically never appears. In fact, however, the determination of the moment is necessary when we want to rotate [...] according to the inverse of [...] this angle. We totally agree that a citation is missing for the generalization to the n-dimensional case (e.g. [1]). We will definitely add it.\n\n[...]"}, {"title": "(contd.)", "comment": "[...]\n\nf) Notation differences:\nYou are right; the differences in the notation are confusing. We will adapt this.\n\ng) Fully rotated images:\nIn fact, the framework is mainly intended to cover local rotations, and these rotations are applied per kernel (as you assumed). This also means that the image is never rotated as a whole. In particular, due to its shortcut connections, the U-Net architecture would therefore keep spatial relationships. These relationships also exist in deeper layers where they will likely result in blobs of filter responses.\n\nh) Fully rotationally invariant networks:\nWhen using fully rotation invariant layers only, the network will likely lose the ability to detect equivariance to a specific point and give worse performance on large datasets. 
The introduction of pose maps (as mentioned in 2.2.1) will cover some of that, but due to their underrepresentation it is likely the model will not give the same performance, which is consistent with our experience so far. However, the model may still reach performance comparable (for medium sets) or even superior (for small sets) to convolutional networks. This issue, however, is one of the biggest open points in the current approach (2.2.1, 2.3).\n\ni) Discussion\nWe are very sorry if you or any of your colleagues feel insulted or disrespected by the mentioned or any other sentence within our paper. We assure you that we did not want to degrade any of the mentioned work, nor do we feel we have any right or intention to grade it in one direction or the other. This is especially the case, as our statement explicitly did not take into account any implementations of rotationally invariant or equivariant networks, but was explicitly referencing rotationally non-invariant (i.e. classic) neural networks, and, as is explicitly emphasized, their ability to develop rotational invariance on their own as well as the understanding the scientific community already has. We specifically did not want to address the developments on rotation invariant networks as presented in the references [2,3,4]. Again, if this should have given you the feeling of any form of disrespect, we apologize for any misunderstanding and invite you to suggest a sentence which would be more appropriate.\n\nIn case of any additional unclarity, we would be glad to help with any open issues.\n\nBest regards,\nAlexander\n\n\n\n[1] Aguilera, A., & P\u00e9rez-Aguila, R. \"General n-dimensional rotations.\" (2004)\n[2] Cohen, Taco, Mario Geiger, and Maurice Weiler. \"A General Theory of Equivariant CNNs on Homogeneous Spaces.\" arXiv preprint arXiv:1811.02017 (2018)\n[3] Jaderberg, Max, Karen Simonyan, and Andrew Zisserman. \"Spatial transformer networks.\" Advances in neural information processing systems. 
2015.\n[4] Cohen, Taco, and Max Welling. \"Group equivariant convolutional networks.\" International conference on machine learning. 2016."}, {"title": "Answer to your review", "comment": "Dear reviewer,\n\nWe thank you for your clear, constructive and comprehensive review. We are happy to answer some of your open questions:\n\na) Unified framework:\nYou are addressing a significant point here. We totally agree with you that there should be a unified framework, as the current one is a) based on hand-design and b) cannot be adapted during training. We expect our method to give superior results when the degree of invariance can be freely decided as a result of the training process, while the hybrid design can only be seen as an intermediate step. We will address this issue in a planned publication on the topic, to appear in the near future.\n\nb) Incoherences within the results:\nIt is correct that the CIFAR-10 results for hybrid rotational invariant networks are not consistently superior to classical convolutions. This is especially true for a higher amount of training data, starting at 8,192 samples, where both approaches perform similarly, with differences below statistical significance. This is concordant with our theory as well as our working hypotheses mentioned in Chapters 2.2.1 and 2.3. Up to this amount, however, our approach shows highly significant superiority over classical Conv2D networks, which may become clearer when taking into account the depicted confidence intervals. You are totally right that we should have made that point clearer within the discussion. The inconsistencies for the accuracy value of the tumor growth problem are easily explained as a direct result of the sample distribution, which is highly imbalanced, as (fortunately) response to chemotherapy is significantly more frequent than non-response (81 positive vs. 511 negative samples). 
The trained classifiers were only differently biased (hybrid positive, classic negative), while the more balanced (F1, AUC) and completely balanced metrics (Matthews correlation coefficient) clearly indicate the overall superiority. The imbalance of the used dataset (the reason for the low explanatory power of accuracy) is briefly mentioned in the figure caption. More detailed information can be found in the comprehensive dataset descriptions in Appendix sections A.1 and A.2. However, we totally agree with you that we should have emphasized this more in the text.\n\nc) DICE results\nYes, this is absolutely correct. The current segmentation results only show minor improvements over classical convolutions. Due to the restrictions of the available hardware in combination with the very high memory requirements of the current implementation, we had to choose a very low number of filters per layer, which results in a generally poorer goodness of fit. Together with a small amount of available data it was not possible to prove significant superiority. We expect this problem to be solved when either a) the amount of available data becomes larger or b) we switch to a grid-deformation-based implementation, thus reducing hardware requirements, as planned in the near future.\n\nWe want to thank you again for your review. We will address the above-mentioned points in any resubmission. If you find any additional issues in our paper, we are looking forward to any further feedback. \n\nThank you and best regards,\nAlexander"}, {"title": "Answer to your review", "comment": "Dear reviewer,\n\nThank you for your clear and conclusive review. We agree with the mentioned issues and would like to answer some of your questions:\n\na) Runtime:\nThe runtime of the current implementation is expectedly higher than that of a comparable CNN, while the factor is highly dependent on the concrete network architecture. 
This results from the current implementation, which extracts image patches and rotates them successively, producing a significantly higher memory load and therefore computational effort; this can, however, easily be reduced by switching to a grid-deformation-based method. We are currently working on such an implementation, which we plan to publish in the near future.\n\nb) Number of samples:\nAll datasets were tested to their full size. The number of samples used for training resulted from the 4-fold cross-validation splits for training and validation taken from the original training data, which comprised 50,000/50,000/5,000 samples. However, you're totally right that we should have emphasized this more.\n\nc) Spatial transformer networks:\nWe are very sorry we forgot to mention the relation to Spatial Transformer Networks. We will add a part on this.\n\nd) Local rotational invariance and the problem of generalization to global rotational invariance:\nThis is correct: from the current point of view the network only provides rotational invariance at the filter level. This is, however, analogous to the standard convolutional approach, where global relations can generally only be represented within deeper network layers, as the convolutional filters are applied locally only. While this might be seen as a disadvantage, it is based on the fundamental structure of convolutional neural networks without being specific to the proposed approach.\n\ne) Reproducibility:\nWe are very sorry that some of our results cannot be made publicly available. Due to ethical, regulatory and data security reasons we are not allowed to publish the used medical datasets. We totally agree with you that reproducibility is of high value and that open science is the scientific way of the future. 
We therefore focused on using open and publicly available datasets as a proof-of-concept (CIFAR-10/CIFAR-100/STL-10) and used the medical data only to show the applicability to clinical scenarios. We are, however, looking forward to publishing future datasets. As an additional direct reaction to your feedback, we published a reference implementation of the rotation invariant layers on GitHub: https://github.com/akatzmann/HybridRotationInvariantNetwork .\n\nWe look forward to any additional indications of statements that need to be discussed more clearly within our paper. If you have any further questions, don't hesitate to ask.\n\nBest regards,\nAlexander."}], "comment_replyto": ["rkxN6dt2X4", "B1ecki-C7V", "H1x3LN82QN", "Sklp8qhBQN"], "comment_url": ["https://openreview.net/forum?id=BJlVNY8llV&noteId=B1ecki-C7V", "https://openreview.net/forum?id=BJlVNY8llV&noteId=ryx9xib0XE", "https://openreview.net/forum?id=BJlVNY8llV&noteId=rylNt71CXV", "https://openreview.net/forum?id=BJlVNY8llV&noteId=HkeuatCpXE"], "meta_review_cdate": 1551356609683, "meta_review_tcdate": 1551356609683, "meta_review_tmdate": 1551703154096, "meta_review_ddate": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "This paper introduces rotationally invariant convolutional filters, which are better able to learn from limited training data. The reviewers raised a number of concerns with this paper, including missing citations of key work, the lack of clarity in the modeling section, and the overly optimistic discussion in light of the modest performance. The authors provided an equally detailed point-by-point response; however, most of these comments were apologies for their own omissions and promises to add the information in the camera-ready submission. \n\nOverall, I believe that the discussion reinforced the fact that, while the rotationally invariant convolutions are an interesting idea, the paper needs some careful refinement before publication. 
Therefore, I would agree with the two reviewers who advocated rejection from MIDL. \n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BJlVNY8llV&noteId=ryx56fUSUN"], "decision": "Reject"}
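The reviews above repeatedly probe the mechanics of the moment-based patch orientation normalization (Eqs. (3)–(5) of the paper), e.g. what happens when the intensity centroid coincides with the patch centre. The following minimal NumPy sketch illustrates only the general idea under discussion — it is not the authors' implementation, and both function names (`patch_orientation`, `normalize_patch`) are hypothetical: estimate a representative angle from the patch's intensity centroid, then resample the patch rotated by the inverse of that angle before any convolution is applied.

```python
import numpy as np

def patch_orientation(patch):
    """Estimate a representative orientation for a square patch from its
    first-order image moments: the angle of the intensity centroid
    relative to the patch centre (the idea debated around Eq. (3))."""
    k = patch.shape[0]
    ys, xs = np.mgrid[0:k, 0:k]
    c = (k - 1) / 2.0
    m = patch.sum()
    if m == 0:  # orientation undefined for an empty patch: no rotation
        return 0.0
    cy = (ys * patch).sum() / m - c  # centroid offset from the centre
    cx = (xs * patch).sum() / m - c
    return float(np.arctan2(cy, cx))

def normalize_patch(patch):
    """Rotate the patch by the inverse of its estimated orientation
    (nearest-neighbour resampling for brevity), so a convolution applied
    afterwards sees an orientation-normalized input."""
    theta = patch_orientation(patch)
    k = patch.shape[0]
    c = (k - 1) / 2.0
    ys, xs = np.mgrid[0:k, 0:k]
    cos_t, sin_t = np.cos(theta), np.sin(theta)
    # inverse mapping: for each output pixel, sample the source location
    # obtained by rotating the output coordinates by +theta about the centre
    sx = cos_t * (xs - c) - sin_t * (ys - c) + c
    sy = sin_t * (xs - c) + cos_t * (ys - c) + c
    sx = np.clip(np.rint(sx).astype(int), 0, k - 1)
    sy = np.clip(np.rint(sy).astype(int), 0, k - 1)
    return patch[sy, sx]
```

For a patch whose centroid sits exactly at the centre the angle is undefined; the sketch falls back to the identity, mirroring the authors' reply in point e). A real layer would use differentiable (e.g. bilinear) sampling rather than nearest-neighbour rounding.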