AMSR / conferences_raw / neuroai19 / neuroai19_Syx377Y8IH.json
{"forum": "Syx377Y8IH", "submission_url": "https://openreview.net/forum?id=Syx377Y8IH", "submission_content": {"TL;DR": "A layer modelling local random connectomes in the cortex within deep networks capable of learning general non-parametric invariances from the data itself.", "keywords": [], "pdf": "/pdf/19fac9518beb7ccac1657ce4336c82e4d56aac10.pdf", "authors": ["Anonymous"], "title": "Learning Non-Parametric Invariances from Data with Permanent Random Connectomes ", "abstract": "One of the fundamental problems in supervised classification and in machine learning in general, is the modelling of non-parametric invariances that exist in data. Most prior art has focused on enforcing priors in the form of invariances to parametric nuisance transformations that are expected to be present in data. However, learning non-parametric invariances directly from data remains an important open problem. In this paper, we introduce a new architectural layer for convolutional networks which is capable of learning general invariances from data itself. This layer can learn invariance to non-parametric transformations and interestingly, motivates and incorporates permanent random connectomes there by being called Permanent Random Connectome Non-Parametric Transformation Networks (PRC-NPTN). PRC-NPTN networks are initialized with random connections (not just weights) which are a small subset of the connections in a fully connected convolution layer. Importantly, these connections in PRC-NPTNs once initialized remain permanent throughout training and testing. Random connectomes makes these architectures loosely more biologically plausible than many other mainstream network architectures which require highly ordered structures. We motivate randomly initialized connections as a simple method to learn invariance from data itself while invoking invariance towards multiple nuisance transformations simultaneously. We find that these randomly initialized permanent connections have positive effects on generalization, outperform much larger ConvNet baselines and the recently proposed Non-Parametric Transformation Network (NPTN) on benchmarks that enforce learning invariances from the data itself.", "authorids": ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper18/Authors"], "paperhash": "anonymous|learning_nonparametric_invariances_from_data_with_permanent_random_connectomes"}, "submission_cdate": 1568211748100, "submission_tcdate": 1568211748100, "submission_tmdate": 1570097889800, "submission_ddate": null, "review_id": ["H1gMTNnKPH", "rklVwcaqvH"], "review_url": ["https://openreview.net/forum?id=Syx377Y8IH&noteId=H1gMTNnKPH", "https://openreview.net/forum?id=Syx377Y8IH&noteId=rklVwcaqvH"], "review_cdate": [1569469626059, 1569540700387], "review_tcdate": [1569469626059, 1569540700387], "review_tmdate": [1570047550478, 1570047540866], "review_readers": [["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper18/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper18/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Syx377Y8IH", "Syx377Y8IH"], "review_content": [{"title": "Poorly written and of unclear importance", "importance": "2: Marginally important", "importance_comment": "The paper focuses on the topic of learning non-parametric invariances using randomly wired networks. A network architecture is proposed that extends previous approaches and improves performance on an MNIST dataset with various transformations applied to it. 
The results are rather preliminary and their importance is difficult to assess due to the poor presentation of the paper.\n", "rigor_comment": "The results appear to provide an improvement over the previous NPTN work. However, because only one benchmark is used, it is difficult to assess the generality of the results.\n", "clarity_comment": "The paper is written in a dense and difficult-to-follow style. Part of this is due to the heavy reliance on previous NPTN literature. But it is also due to the use of jargon and poorly defined parameters. Examples include G (if it is a number, what is |G| needed for?) and CMP. The authors should strive to provide an intuitive description of their results. The diagram in figure 1 does not do a good job of describing the architecture. No general discussion of the results is provided.", "clarity": "1: Unreadable", "evaluation": "2: Poor", "intersection_comment": "The connection to neuroscience is quite loose, as the authors acknowledge. The authors speculate that local random connectivity is present in the brain, but beyond that little discussion of the biological relevance of the results is made.", "intersection": "2: Low", "technical_rigor": "2: Marginally convincing", "category": "AI->Neuro"}, {"title": "Rough writing somewhat obscures what is likely an important and interesting method for forming invariant representations", "importance": "4: Very important", "importance_comment": "The learning of invariances is a key problem in both machine and biological intelligence, and any progress made in understanding it is of high importance. While the less-than-perfect clarity of the work makes it a little harder to ascertain the authors' success at making progress on this problem, it seems to me as though it is a solid step in the right direction. I might rate this as a 4.5 if I could.", "rigor_comment": "When it comes to building invariances, an important issue is being able to learn with fewer training examples than an architecture that doesn't have as much invariance-building capability. Here the authors present \"errors\" of their trained models without elaborating much further. It would have been helpful if the authors had discussed the training more (such as whether they train until the error no longer decreases) and maybe shown the accuracy through training.\n\nThere are other details that aren't explained. For instance, an important aspect of the model is the fixed random connections, but it isn't discussed whether these connections are weighted or not. These missing details don't seem to be essential to me.", "clarity_comment": "Overall the paper suffers from messy organization, difficult-to-digest expositions about the differences of at least four closely related models, some missing details, and some lack of motivation and intuition. Some of this is understandable due to the intrinsically complex nature of the work, but it seems that with more time and polish the paper could be improved a great deal (and still fit in four pages).\n\nBelow are some specific examples of clarity issues.\n\nIt would be helpful if the \"random unstructured local connections\" as seen in cortex were defined more precisely. Do these connections not change as the animal learns tasks while other connections do change? Do these connections map together different \"filters\", as in the authors' proposed model?\n\nThere is a bundle of small issues with the writing. For instance, the acronym PRC is defined in the abstract but not in the main text. 
In Table 1, the labels in the caption are missing in the table itself. The label for the PRC-NPTN networks is different in the rotation table vs. the pixel translation table.\n\nThe organization of the paper gets in the way of its clarity. For instance, Transformation Networks are introduced in Section 1, but it isn't until Section 2 that the underlying theory is referenced (reference [1]). As far as I can tell, Transformation Networks are a direct application of this theory to deep neural networks. This connection isn't made as explicit as it could have been.\n\nThe architecture could have been made clearer if Figure 1 had shown an example of a (Non-Parametric) Transformation Network layer, as well as a standard convolution layer with max pooling, to compare with the Permanent Random Connectome Non-Parametric Transformation Network.\n\nWhile some intuition is provided for why random connections are advantageous over the standard Non-Parametric Transformation Network layers, a more thorough discussion of this important point would have been very helpful. Why is it helpful to max pool across different filters?", "clarity": "2: Can get the general idea", "evaluation": "4: Very good", "intersection_comment": "While the authors don't do very much to explain the connection to biology and confess that this isn't a strong motivator for them, I believe that the connection is actually fairly strong. Success in their models suggests roles for random connections in the brain. Their results suggest potential improvements to state-of-the-art performance in artificial neural networks, since convolutional layers in very deep architectures could conceivably be swapped out for the layers proposed here. As such, the results are interesting both to the neuroscientist and the AI researcher.\n\nI feel that putting more effort into making the connection to biology could easily increase this score by a point.", "intersection": "4: High", "comment": "The exposition needs to be cleaned up. Figure 1 in particular needs to be expanded to include more models and more details. The authors should consider keeping only one of the organization trees in Figure 1 since the two feel redundant, or find a way to combine them.\n\nThe buildup from the theory, to Transformation Networks, then to Non-Parametric Transformation Networks, then to Permanent Random Connectome Non-Parametric Transformation Networks, and finally to comparisons with convolutional neural networks, should probably have happened in a more streamlined, linear way.\n\nRegarding the biological motivation for the fixed connections, this point could be strengthened somewhat by describing how the max pooling could be implemented by biology. The theory (as developed in [1]) seems to hold for averaging as well as max pooling, which may be more biologically feasible. In general I think the authors don't give themselves enough credit with the connection to biology (where the computationally beneficial aspects of random connections are already being discussed, i.e. for dimensionality expansion), and they could have laid out the connections more clearly.\n\nAnd finally, making the point that random fixed connections are important/useful, beyond showing simulation results, would strengthen the work considerably.\n\nThe paper does a less-than-stellar job of making the case that a presentation of the work at the workshop would leave attendees with a basic understanding. 
That said, the paper has a great deal of potential, and could contribute significantly to both fields if clarity issues are resolved.", "technical_rigor": "4: Very convincing", "category": "Common question to both AI & Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Reject"}
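For readers who want a concrete picture of the layer that the abstract and reviews above describe only in prose, here is a minimal, unofficial sketch in PyTorch. It assumes the three ingredients mentioned in the record: a grouped convolution giving each input channel a small set of learned filters, a random but permanent channel connectome drawn once at construction time, and channel max pooling (CMP) over each randomly wired group. The class name PRCNPTNSketch and the parameters g, fan_in, and kernel_size are hypothetical choices for this illustration; this is not the authors' released code or exact architecture.

import torch
import torch.nn as nn


class PRCNPTNSketch(nn.Module):
    """Illustrative PRC-NPTN-style layer (assumption-laden sketch, not the authors' code)."""

    def __init__(self, in_channels, out_channels, g=4, fan_in=4, kernel_size=3):
        super().__init__()
        # (1) Each input channel is filtered by g learned kernels (grouped conv),
        #     producing in_channels * g intermediate channels.
        self.expand = nn.Conv2d(in_channels, in_channels * g, kernel_size,
                                padding=kernel_size // 2, groups=in_channels, bias=False)
        # (2) Permanent random connectome: every output channel is wired to fan_in
        #     intermediate channels chosen uniformly at random, once. Stored as a
        #     buffer so it is saved with the model but never touched by the
        #     optimizer -- random connections, not just weights.
        connectome = torch.randint(0, in_channels * g, (out_channels, fan_in))
        self.register_buffer("connectome", connectome)

    def forward(self, x):
        h = self.expand(x)               # (B, in_channels * g, H, W)
        # (3) Channel max pooling (CMP) over each fixed random group.
        groups = h[:, self.connectome]   # (B, out_channels, fan_in, H, W)
        return groups.max(dim=2).values  # (B, out_channels, H, W)


if __name__ == "__main__":
    layer = PRCNPTNSketch(in_channels=3, out_channels=16)
    y = layer(torch.randn(2, 3, 32, 32))
    print(y.shape)  # torch.Size([2, 16, 32, 32])

Swapping the max in the last line of forward for a mean would give the average-pooling variant that Reviewer 2 suggests may be more biologically feasible.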