{"forum": "BJg6EmYL8B", "submission_url": "https://openreview.net/forum?id=BJg6EmYL8B", "submission_content": {"TL;DR": "Initial findings in the intersection of network neuroscience and deep learning. C. Elegans and a mouse visual cortex learn to recognize handwritten digits.", "keywords": ["Network Neuroscience", "neurons", "brain", "visual cortex", "Deep Learning", "mouse visual cortex", "C. Elegans"], "authors": ["Nicholas Roberts", "Dian Ang Yap", "Vinay Uday Prabhu"], "title": "Deep Connectomics Networks: Neural Network Architectures Inspired by Neuronal Networks", "abstract": "The interplay between inter-neuronal network topology and cognition has been studied deeply by connectomics researchers and network scientists, which is crucial towards understanding the remarkable efficacy of biological neural networks. Curiously, the deep learning revolution that revived neural networks has not paid much attention to topological aspects. The architectures of deep neural networks (DNNs) do not resemble their biological counterparts in the topological sense. We bridge this gap by presenting initial results of Deep Connectomics Networks (DCNs) as DNNs with topologies inspired by real-world neuronal networks. We show high classification accuracy obtained by DCNs whose architecture was inspired by the biological neuronal networks of C. Elegans and the mouse visual cortex.", "authorids": ["nick11roberts@cmu.edu", "dianang7@stanford.edu", "vinay@unify.id"], "pdf": "/pdf/19838b8f7ee9341806c76b24f5edcd8af9f5754d.pdf", "paperhash": "roberts|deep_connectomics_networks_neural_network_architectures_inspired_by_neuronal_networks"}, "submission_cdate": 1568211764592, "submission_tcdate": 1568211764592, "submission_tmdate": 1572469899014, "submission_ddate": null, "review_id": ["ryxdpYHWPr", "Sylh9G2KvH", "rkllsKuqvS"], "review_url": ["https://openreview.net/forum?id=BJg6EmYL8B&noteId=ryxdpYHWPr", "https://openreview.net/forum?id=BJg6EmYL8B&noteId=Sylh9G2KvH", "https://openreview.net/forum?id=BJg6EmYL8B&noteId=rkllsKuqvS"], "review_cdate": [1568917951757, 1569469075692, 1569520024021], "review_tcdate": [1568917951757, 1569469075692, 1569520024021], "review_tmdate": [1570047569942, 1570047551090, 1570047545984], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper58/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper58/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper58/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BJg6EmYL8B", "BJg6EmYL8B", "BJg6EmYL8B"], "review_content": [{"rigor_comment": "When claiming that a technique results in better performance on a task, the baseline network tested is obviously very important. The baseline model is described as containing one conv block and a fully connected layer. It seems that this baseline has far fewer processing stages and parameters than the models with DAGs. And the DAG models have differing numbers of units amongst themselves. Comparing to \"frozen\" (ie untrained) DAGs does not control for the benefit of having these extra nodes as even random weights can still perform well on simple tasks. Relatedly, the use of MNIST for the main comparison metric is a poor choice because the baseline model already performs so well that marginal increases are hard to interpret here. It seems that fashion mnist is a harder task at least and should have been used to compare models. 
", "evaluation": "3: Good", "clarity": "3: Average readability", "title": "Good concept not great execution", "category": "Neuro->AI", "importance_comment": "Neuroscience certainly has seen an explosion in studies looking at network topology and applying the tools of graph theory/network science. Given how easily artificial neural networks can be mapped onto graph structures it seems very natural to combine the two. It is also a straightforward way to bring in biological data, potentially at exactly the right level of detail/abstraction. The particular results in this paper, however, do not reflect a particularly strong instantiation of this concept", "importance": "4: Very important", "intersection": "4: High", "clarity_comment": "The introduction was well written however details were lacking in the methods and results. For example: \"DAG\" was never actually defined. Why was the c elegans network the only one tested on other datasets and not tested while frozen? Why does the validation accuracy start so high in Fig 2?", "intersection_comment": "Bringing neuroanatomical data directly into deep nets is challenging and while many assumptions and simplifications were made in order to do it in this study, it is still an admirable attempt at combining neuroscience and AI. ", "technical_rigor": "2: Marginally convincing"}, {"title": "Not the best use of available data", "importance": "2: Marginally important", "importance_comment": "The study attempts to use the wiring statistics of real brains to build neural networks. While it is an interesting approach, the choice of task and the model assumptions are not well suited to the topic. The performance improvements are also not very convincing, and it's unclear if we should expect these results to be generalizable.", "rigor_comment": "The choice of network connectivity is poor. The authors use undirected networks and randomly convert them to directed networks, but connectome data with directed weights are readily available in a multitude of organisms, including C. elegans and mouse. The results are not convincing, with MNIST performance at above 97% in all cases. Why are results for C. elegans not shown in Table 2? \n\nAdditionally, the issue of number of trainable parameters is not explored. Freezing the weights is not sufficient -- more frozen parameters could still account for the performance benefits compared to the baseline.\n", "clarity_comment": "The issue of number of trainable parameters is not sufficiently explored. I would have like to see a better exploration of how the results depend on this quantity and how things change with different assumptions about learned and unlearned connections.\n", "clarity": "3: Average readability", "evaluation": "2: Poor", "intersection_comment": "It's hard to believe that the C. elegans connectome would be optimized for MNIST in any way. Also, ignored directedness in the datasets is an unnecessary omission. More work could have been done to bring the models closer to the biology.", "intersection": "3: Medium", "technical_rigor": "2: Marginally convincing", "category": "AI->Neuro"}, {"title": "Good concept, bad execution", "importance": "2: Marginally important", "importance_comment": "The connections between network topology and function in both neuroscience and AI research are very interesting. 
The pursuit of research at this intersection is highly important.\n\nThis paper does fall into that category work, but the methods and results presented therein do not add up to an important contribution to the area.", "rigor_comment": "The paper goes to some length to motivate the research it presents, providing a brief survey of the development of network neuroscience that cites the connections between several prominent publications underlying that development. The technical aspects of their own work are detailed less satisfactorily.\n\nThe structure of the networks is presented in citation, but not actually detailed in any measurable way. Their method of constructing the networks is described in text reasonably well, but the diagrams presented (e.g. Fig 1) are not detailed nearly enough. It is not clear how the networks differ. Metrics are presented to describe the modular structure of the borrowed network subunits, but their connections to the desired topological results are not made clear.\n\nTheir results are also speciously presented. Four of the presented models start - without any prior training on the MNIST task - performed at >97% accuracy. Moreover, the results presented are explicitly labeled as validation performances. The loss patterns are also ill-detailed; increases in validation loss are not described and mesh strangely with the presented classification results.", "clarity_comment": "The research is well-motivated, but the actual project pursued is not. The structure of the network models adopted and used is not clearly communicated to the reader (see technical rigor section) and the figures are lacking in detail. For example - half of the line plots in figure 2 (subfigures not individually labeled) are not described in either the text, the figure legend or the figure caption.\n\nThere is a clear message conveyed through this work, but it doesn't answer the questions presented in the ostensible thesis of the paper: how does network topology influence computation. They've shown that they can get high classification results on a particular sort of network architecture, but don't explore how the defining aspects of those topologies influence the results presented. The overall intent of the work is unclear for that reason", "clarity": "2: Can get the general idea", "evaluation": "2: Poor", "intersection_comment": "Ideally, this would be highly intersectional; however, the lack of execution toward the stated intent of the paper do not follow through to actually fulfill that intersection. Understanding the role of network topology in network computation is important, but I think that the work presented here is less so.", "intersection": "3: Medium", "technical_rigor": "2: Marginally convincing", "category": "Neuro->AI"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
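Note on the conversion step the reviews critique: AnonReviewer2 describes the paper as taking undirected connectome graphs and randomly converting them to directed networks. The paper's code is not part of this record, so the sketch below is only a plausible reconstruction of such a step, assuming a networkx graph as input; the function name undirected_to_random_dag is hypothetical.

```python
import random
import networkx as nx

def undirected_to_random_dag(g_und: nx.Graph, seed: int = 0) -> nx.DiGraph:
    """Orient every edge of an undirected graph along a random total
    ordering of the nodes, which guarantees the result is a DAG.
    (Hypothetical reconstruction of the conversion step the reviews
    describe, not the authors' released code.)"""
    rng = random.Random(seed)
    order = list(g_und.nodes())
    rng.shuffle(order)
    rank = {node: i for i, node in enumerate(order)}
    dag = nx.DiGraph()
    dag.add_nodes_from(g_und.nodes())
    for u, v in g_und.edges():
        # Point each edge from the lower-ranked node to the higher-ranked one.
        if rank[u] < rank[v]:
            dag.add_edge(u, v)
        else:
            dag.add_edge(v, u)
    return dag

# Example: a Watts-Strogatz small-world graph stands in for a connectome here.
g = nx.watts_strogatz_graph(n=30, k=4, p=0.1, seed=0)
dag = undirected_to_random_dag(g, seed=0)
assert nx.is_directed_acyclic_graph(dag)
```

Orienting all edges along a single random node ordering yields acyclicity by construction, so the resulting graph can be used as a feed-forward wiring diagram; it also discards any real edge directions, which helps explain AnonReviewer2's point that available directed connectome data goes unused.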