{"forum": "Hye5NQYU8r", "submission_url": "https://openreview.net/forum?id=Hye5NQYU8r", "submission_content": {"TL;DR": "We investigated whether simple deep networks possess grid cell-like artificial neurons during memory retrieval in the learned concept space.", "keywords": ["concept space", "cognitive map", "place cells", "grid cells", "memory retrieval"], "authors": ["Anonymous"], "title": "Do deep neural networks possess concept space grid cells?", "abstract": "Place and grid cells are known to aid navigation in animals and humans. Together with concept cells, they allow humans to form an internal representation of the external world, namely the concept space. We investigate the presence of such a space in deep neural networks by plotting the activation profiles of their hidden layer neurons. Although place cell- and concept cell-like properties are found, grid cell-like firing patterns are absent, thereby indicating a lack of path integration or feature transformation functionality in trained networks. 
Overall, we present a plausible inadequacy in current deep learning practices that restrict deep networks from performing analogical reasoning and memory retrieval tasks.", "authorids": ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper51/Authors"], "pdf": "/pdf/c1985e18e0560577503f0a99a306f3b5ccadc5bb.pdf", "paperhash": "anonymous|do_deep_neural_networks_possess_concept_space_grid_cells"}, "submission_cdate": 1568211761717, "submission_tcdate": 1568211761717, "submission_tmdate": 1570097888244, "submission_ddate": null, "review_id": ["HJxZK10PDH", "Hyews2GYvr", "Hyx-2uTqPS"], "review_url": ["https://openreview.net/forum?id=Hye5NQYU8r&noteId=HJxZK10PDH", "https://openreview.net/forum?id=Hye5NQYU8r&noteId=Hyews2GYvr", "https://openreview.net/forum?id=Hye5NQYU8r&noteId=Hyx-2uTqPS"], "review_cdate": [1569345401084, 1569430686927, 1569540265271], "review_tcdate": [1569345401084, 1569430686927, 1569540265271], "review_tmdate": [1570047557598, 1570047554784, 1570047541300], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper51/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper51/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper51/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Hye5NQYU8r", "Hye5NQYU8r", "Hye5NQYU8r"], "review_content": [{"evaluation": "2: Poor", "intersection": "4: High", "importance_comment": "While the question of how neural networks may act over concept space is important, I don\u2019t think the approach used by the authors correctly addresses this question. The work of Hill et al. (2019) very clearly addresses these questions by devising tasks that require generalization across domains, showing how the training regime is sufficient to overcome the difficulties of these tasks, even in shallow networks. 
I don\u2019t see how the current work adds more clarity to this research direction.", "clarity": "3: Average readability", "technical_rigor": "1: Not convincing", "intersection_comment": "The question of how the brain and artificial networks can perform relational reasoning is critical in both fields, since many believe that it may be one of the primary ingredients of intelligence. It\u2019s also critical to understanding the function of the hippocampus and entorhinal cortex in humans.", "rigor_comment": "The main point relies purely on a visual representation of the top PCs of the penultimate layer of a CNN, which I believe is insufficient. The authors should have identified a task where networks trained on MNIST perform poorly, and then proposed a different strategy or architecture.", "importance": "2: Marginally important", "title": "Neural networks trained on pure MNIST classification don\u2019t appear to show grid representations", "category": "Common question to both AI & Neuro", "clarity_comment": "Overall the writing is relatively clear, but it would have been beneficial to describe the hypotheses more explicitly, e.g. what neural activity would be expected for a place, grid, or concept representation with respect to MNIST."}, {"evaluation": "2: Poor", "intersection": "4: High", "importance_comment": "I do think that investigating under what conditions grid cells appear in artificial networks is very interesting. However, I was not fully convinced that the results presented here made substantial contributions to the AI or neuroscience field. ", "clarity": "3: Average readability", "technical_rigor": "2: Marginally convincing", "intersection_comment": "The authors investigated neuroscience-informed properties of DNNs. I think that searching for neuroscience-inspired properties in deep networks can be interesting, and is certainly within the intersection of AI and neuroscience. 
", "rigor_comment": "While the specific techniques employed by the authors seem perfectly fine and relatively rigorous, the question itself (do hidden units in the later layers contain grid-like patterns) felt rather simple and uninformative (simple is great, as long as it still tells you something interesting). ", "comment": "- I am not convinced that the ability of deep networks to solve analogical problems relies on the presence of grid-like properties in the hidden units. Perhaps this is ignorant of me, but I think that this is a critical point for the paper and thus needs to be better motivated and explained. In particular, I think that while grid cells can support path-integration, not all networks that path-integrate necessarily contain cells with hexagonal symmetry. \n- I am not surprised that the network trained by the authors does not show grid-like responses. It seems reasonable that the network learned to classify each number separately, without learning the full manifold. If someone were to record from a real brain from the visual areas while the animal was performing a discrete visual classification task, I am not sure that they would see grid cells there either. Thus, unfortunately the current paper reads as though the authors trained a network that didn't need to learn the full manifold, so it didn't, and then didn't show properties that one may (or may not) expect it to exhibit if it had learned the full manifold. I think there could be something interesting in this endeavor, but the implementation carried out by this paper wasn't very convincing to me. \n- A minor (but important) comment - grid cells are not yet \"known to support path integration in rodents and humans\", since there is no causal experiment that shows their necessity. 
I think this statement is also indicative of my general complaint - that the importance of grid cells is not fully fleshed out or supported in this work, and thus the lack of grid cells in networks is over-interpreted.\n", "importance": "2: Marginally important", "title": "Potentially interesting idea, but over-interpreted results", "category": "AI->Neuro", "clarity_comment": "The paper was overall relatively well-written and easy to follow. The figures were simple and easy to interpret. The authors made their methods, results, and claims quite clear. "}, {"title": "Interesting question, but significant problems with underlying assumptions and analysis methods", "importance": "2: Marginally important", "importance_comment": "While the general method of training neural networks and examining their representations for key insights and relations to neuroscience is a valid one, the methods and results do not seem to answer it in a way that is principled or well thought out. There is also substantial misunderstanding about neuroscience concepts throughout the paper.\n", "rigor_comment": "There are major logic / misrepresentation issues throughout the paper, some based on incorrect assumptions and possible misunderstandings. The neuroscience motivation lacks consensus in the community and some concepts are wrongly characterized, such as the definition of path integration. This greatly weakens the motivation and connections to deep learning. \n\nThe main method is a simple analysis of a vanilla CNN trained on MNIST classification. The results are fairly unconvincing and not substantially justified in the approach. For example, the justification that this won\u2019t allow the network to navigate in concept space seems arbitrary. Finally, it assumes that classification is the right task for training the network to perform relational reasoning. 
What about unsupervised tasks, and training with other models such as VAEs?", "clarity_comment": "Design choices and analysis are poorly justified, with some ambiguity about how the results were obtained. It\u2019s also unclear why the architecture, optimizers, and dataset were chosen. Certain methods were also not justified - why limit the analysis to the last two layers? Also, why is figure 1 focused on the output layer, which is trained to represent the classes?\nIt is not clear why the plot \u2018shows place-cell like activity\u2019. What is the justification there? The tSNE embedding space cannot be interpreted as it\u2019s not linear. Similar problems apply to the PCA analysis. Grid cells represent cells that fire in response to various locations, not separate cells that fire to resemble some geometric pattern within the brain.", "clarity": "2: Can get the general idea", "evaluation": "1: Very poor", "intersection_comment": "Aside from the obvious take, the intersection is somewhat strained. It is unclear what the ultimate goal of the paper is, as the idea appears to be based on a flawed understanding of mechanistically what\u2019s required in the brain to solve certain problems.\n", "intersection": "3: Medium", "comment": "The authors propose training a deep network and seeing if activations similar to concept and grid cells exist in the hidden layers. The motivation, however, is tenuous, and the results are not convincing. 
It is unclear why these cells are necessary and sufficient for relational reasoning, as the authors claim, and the PCA / tSNE analyses don't seem to address the question the authors sought to address.\n", "technical_rigor": "1: Not convincing", "category": "Neuro->AI"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Reject"}