{"forum": "SyxTQ7K88S", "submission_url": "https://openreview.net/forum?id=SyxTQ7K88S", "submission_content": {"TL;DR": "Bio-inspired artificial neural networks, consisting of neurons positioned in a two-dimensional space, are capable of forming independent groups for performing different tasks.", "keywords": ["deep learning", "neuroscience", "multi-task learning"], "authors": ["Maciej Wo\u0142czyk", "Jacek Tabor", "Marek \u015amieja", "Szymon Maszke"], "title": "Biologically-Inspired Spatial Neural Networks", "abstract": "We introduce bio-inspired artificial neural networks consisting of neurons that are additionally characterized by spatial positions. To simulate properties of biological systems we add the costs penalizing long connections and the proximity of neurons in a two-dimensional space. Our experiments show that in the case where the network performs two different tasks, the neurons naturally split into clusters, where each cluster is responsible for processing a different task. This behavior not only corresponds to the biological systems, but also allows for further insight into interpretability or continual learning.", "authorids": ["maciej.wolczyk@gmail.com", "jacek.tabor@uj.edu.pl", "marek.smieja@uj.edu.pl", "szymon.maszke@gmail.com"], "pdf": "/pdf/3a906f5026965b9d0ea6c219d237878d6574618f.pdf", "paperhash": "woczyk|biologicallyinspired_spatial_neural_networks"}, "submission_cdate": 1568211749322, "submission_tcdate": 1568211749322, "submission_tmdate": 1572559928313, "submission_ddate": null, "review_id": ["SylJxr6cvH", "HyllUp6cwH", "rklw-_-swr"], "review_url": ["https://openreview.net/forum?id=SyxTQ7K88S¬eId=SylJxr6cvH", "https://openreview.net/forum?id=SyxTQ7K88S¬eId=HyllUp6cwH", "https://openreview.net/forum?id=SyxTQ7K88S¬eId=rklw-_-swr"], "review_cdate": [1569539302793, 1569541448118, 1569556479160], "review_tcdate": [1569539302793, 1569541448118, 1569556479160], "review_tmdate": [1570047541983, 1570047540381, 1570047538096], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper21/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper21/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper21/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["SyxTQ7K88S", "SyxTQ7K88S", "SyxTQ7K88S"], "review_content": [{"title": "A simple network model that forms spatially clustered regions like the brain", "importance": "3: Important", "importance_comment": "The authors shed light on how/why spatially distinct regions with different computational roles form in the brain. The authors show the interesting although not too surprising result that penalizing long-range connections results in networks that are functionally and to some extent topologically compartmentalized into spatially separated subgroups of neurons. The impact is diminished by the work lacking a roadmap to further inquiry.", "rigor_comment": "The mechanisms they use (primarily the l1 penalty on neural distances) are clearly described, as is the method for splitting the network into two subnetworks. Comparisons with other mechanisms (such as an l2 penalty on neural distances) would have been helpful, and it would have been a worthwhile endeavor to show that their model is in some sense the most natural or minimal model that generates the desired phenomena.\n\nThe evidence provided by the authors that these subgroups are spatially isolated from each other is visual. 
Quantitative measures would have made the point more convincing.\n\nThe authors address the issue of input encoding in a direct, thorough, and convincing way.\n\nThere may be a better way to assign class labels to hidden neurons than the greedy algorithm they propose. One potential issue that I see is that a given neuron in layer l may have strong connections to two neurons in layer l+1 that themselves are assigned to the same class, but the outputs of these two neurons may cancel out in layer l+2. It might be worth looking at the change of the loss with respect to changes in the hidden layer neurons in order to assign the labels. The authors may have already tried this approach -- I couldn't really tell from the footnote they wrote on the matter.", "clarity_comment": "The technical aspects of the paper and the basic reasoning behind them are clear. However, I feel that the motivation behind the work and some of the technical choices that were made could have been clarified somewhat. More discussion as to the \"interpretation\" of their subgraph decomposition method, and comparisons with other possible ways to do this, would have been helpful.\n\nSince the results are largely what one would expect to see, it would have given the work's purpose more clarity if they had suggested future steps that could push the work further, or provided some sort of ultimate \"end goal\" for this line of inquiry.", "clarity": "4: Well-written", "evaluation": "3: Good", "intersection_comment": "The paper as written is concerned with using artificial neural networks to help explain biological ones, without a clear path to closing the loop by going in the other direction. As such, it doesn't seem likely to be very interesting to an AI researcher as written.", "intersection": "3: Medium", "comment": "This paper seems like a first, most basic \"sanity check\" that could be done to try to explain in models why the brain forms computationally distinct regions that are also separated in space. While ultimately they will want to take on the more ambitious goal of making the case that their proposed mechanism is truly the primary reason this happens, the scope of the work seems appropriate for a workshop.\n\nI think the work would have benefitted from a measure of how closely the connections decompose into non-overlapping subgraphs, without taking the labels into consideration, and to show that these \"anatomical\" subgraphs are the same as those found through their strategy used to find \"functional\" subgraphs via backpropagating label assignments to the neurons. This could help answer the question of whether the connectome is sufficient information for defining regions in the brain. As stated above, more discussion as to the \"interpretation\" of their subgraph decomposition method, and comparisons with other possible ways to do this, would have been helpful.\n\nSome discussion of recurrent connections in a paper meant to model the brain would have been beneficial.", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}, {"title": "Interesting idea and results, and perhaps raises more questions in the process", "importance": "3: Important", "importance_comment": "This is an interesting paper and perhaps generates more questions than it answers, such as connections with other local learning rules. The question being asked is certainly interesting and relevant.", "rigor_comment": "The technicalities are straightforward and justified. 
It would, however, be interesting to see more analyses of properties during training, including convergence compared to a benchmark without the additional spatial losses. In addition, how robust is this finding relative to network architecture?", "clarity_comment": "This paper is well written. The concepts, equations, and connections are emphasized and generally understood. Figures are also simple and clearly understandable.", "clarity": "4: Well-written", "evaluation": "3: Good", "intersection_comment": "The authors use an MLP trained through backprop with penalty constraints to see if the learned network can be split to solve separate tasks. The biological connection is with constrained learning in the brain based on spatial constraints. The results are interesting, although it\u2019s not immediately clear what the contributions are for neuroscience / ML. ", "intersection": "4: High", "comment": "This is an interesting paper that attempts to analyze learned neural representations when neurons have additional spatial properties which determine their connection lengths. The network is then trained with a loss function that accounts for the strengths of the connections and penalizes large distances. This forces the resulting neurons within one layer to cluster together during two-task classification.", "technical_rigor": "4: Very convincing", "category": "Neuro->AI"}, {"title": "Good first step in studying the effects of wiring constraints on modularity of a network trained on multiple tasks - quite preliminary", "importance": "3: Important", "importance_comment": "By introducing a cost on wiring strength and length to a fully-connected feedforward neural network, the authors show that this network trained on two tasks simultaneously (MNIST and Fashion-MNIST) splits into two modular and spatially segregated networks. This result is expected and quite preliminary but nonetheless interesting and relevant to the workshop.", "rigor_comment": "Rigorous", "clarity_comment": "Clearly presented.\n\nPlease connect your work to this relevant prior work:\nhttps://ieeexplore.ieee.org/document/6793887\nhttps://royalsocietypublishing.org/doi/full/10.1098/rspb.2012.2863\nhttps://www.cell.com/neuron/fulltext/S0896-6273(18)30250-2\nhttps://www.nature.com/articles/s41593-018-0310-2\nhttps://arxiv.org/abs/1909.09847 (just came out)\n\nMinor: in the cost function, I believe \"alphaT(l)+ L(l)\" should be \"alphaT(l)+ V(l)\".\n\n", "clarity": "4: Well-written", "evaluation": "3: Good", "intersection_comment": "This work could be relevant both for understanding biological neural networks and for building more efficient artificial neural networks.", "intersection": "4: High", "comment": "Suggestions for next steps:\nIs the modularity an effect of the spatial constraint or the weight strength constraint? Try both independently.\nWhat if the network has limited capacity (fewer neurons per layer)? 
Does it share more neurons between tasks in this case?", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
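A minimal sketch of the spatial cost the reviews above describe: a task loss plus alphaT(l), an l1-style penalty making long, strong connections expensive, and V(l), a penalty on the proximity of neurons within a layer, with neurons carrying learnable 2-D positions. The function name wiring_penalty, the inverse-distance form of the proximity term, and the alpha/beta weights are illustrative assumptions, not the authors' implementation.

import torch

def wiring_penalty(weight, pos_in, pos_out, eps=1e-6):
    # weight: (n_out, n_in) layer weights; pos_in: (n_in, 2) and
    # pos_out: (n_out, 2) are learnable 2-D neuron positions.
    # T(l): weight each |w_ij| by the Euclidean length of its connection,
    # so long, strong connections dominate the penalty.
    dist = torch.cdist(pos_out, pos_in)            # (n_out, n_in) connection lengths
    connection_cost = (weight.abs() * dist).sum()
    # V(l): assumed inverse-distance repulsion between same-layer neurons,
    # discouraging positions from collapsing onto one point.
    d_out = torch.cdist(pos_out, pos_out)          # (n_out, n_out) pairwise distances
    repulsion = (1.0 / (d_out + eps)).triu(diagonal=1).sum()
    return connection_cost, repulsion

# Assumed usage in a training step (alpha and beta are hypothetical weights):
#   t_l, v_l = wiring_penalty(layer.weight, pos_in, pos_out)
#   loss = task_loss + alpha * t_l + beta * v_l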