AMSR / conferences_raw / neuroai19 / neuroai19_ByMLEXFIUS.json
{"forum": "ByMLEXFIUS", "submission_url": "https://openreview.net/forum?id=ByMLEXFIUS", "submission_content": {"keywords": [], "authors": ["Hidenori Tanaka", "Aran Nayebi", "Niru Maheswaranathan", "Lane McIntosh", "Stephen A. Baccus", "Surya Ganguli"], "title": "Revealing computational mechanisms of retinal prediction via model reduction", "abstract": "Recently, deep feedforward neural networks have achieved considerable success in modeling biological sensory processing, in terms of reproducing the input-output map of sensory neurons. However, such models raise profound questions about the very nature of explanation in neuroscience. Are we simply replacing one complex system (a biological circuit) with another (a deep network), without understanding either? Moreover, beyond neural representations, are the deep network's {\\it computational mechanisms} for generating neural responses the same as those in the brain? Without a systematic approach to extracting and understanding computational mechanisms from deep neural network models, it can be difficult both to assess the degree of utility of deep learning approaches in neuroscience, and to extract experimentally testable hypotheses from deep networks. We develop such a systematic approach by combining dimensionality reduction and modern attribution methods for determining the relative importance of interneurons for specific visual computations. We apply this approach to deep network models of the retina, revealing a conceptual understanding of how the retina acts as a predictive feature extractor that signals deviations from expectations for diverse spatiotemporal stimuli. For each stimulus, our extracted computational mechanisms are consistent with prior scientific literature, and in one case yields a new mechanistic hypothesis. 
Thus overall, this work not only yields insights into the computational mechanisms underlying the striking predictive capabilities of the retina, but also places the framework of deep networks as neuroscientific models on firmer theoretical foundations, by providing a new roadmap to go beyond comparing neural representations to extracting and understanding computational mechanisms.", "authorids": ["tanaka8@stanford.edu", "anayebi@stanford.edu", "nirum@google.com", "lanemcintosh@gmail.com", "baccus@stanford.edu", "sganguli@stanford.edu"], "pdf": "/pdf/5677e102e74d8908fb1a8d2b7e0a650fff23aa24.pdf", "paperhash": "tanaka|revealing_computational_mechanisms_of_retinal_prediction_via_model_reduction"}, "submission_cdate": 1568211758412, "submission_tcdate": 1568211758412, "submission_tmdate": 1572558373117, "submission_ddate": null, "review_id": ["SyeZQ4O9Dr", "HkxqR3n5wr", "SkgAmOtiwH"], "review_url": ["https://openreview.net/forum?id=ByMLEXFIUS&noteId=SyeZQ4O9Dr", "https://openreview.net/forum?id=ByMLEXFIUS&noteId=HkxqR3n5wr", "https://openreview.net/forum?id=ByMLEXFIUS&noteId=SkgAmOtiwH"], "review_cdate": [1569518616695, 1569537234155, 1569589286033], "review_tcdate": [1569518616695, 1569537234155, 1569589286033], "review_tmdate": [1570047546218, 1570047542914, 1570047534251], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper43/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper43/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper43/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["ByMLEXFIUS", "ByMLEXFIUS", "ByMLEXFIUS"], "review_content": [{"title": "Potentially exciting and widely applicable method; paper could do with more information and less salesmanship", "importance": "4: Very important", "importance_comment": "-- Development of new methods to distill DCNN computational strategies is crucial. The present method is interesting and appears to generate testable novel hypotheses.", "rigor_comment": "-- Appears rigorous. I leave it to reviewers with more expertise in pre-existing model reduction techniques to evaluate the novelty.", "clarity_comment": "-- Text is written articulately; figures are exceptionally rich.\n\n-- Although the CNN used appears to be that from Maheswaranathan et al. (2018), this would be a more self-contained submission if it included some of the details about that model and its training data. For example, there is nothing in the present submission which even specifies which species' retina the CNN is a model of (aside from a non-sequitur mention of salamanders in the text, and an odd-looking creature in Figure 3). Nor is there any detail about the image stimuli to which neural responses were recorded, even though the particular spatiotemporal structure of the stimuli seems to be critical to the model reduction method. ", "clarity": "4: Well-written", "evaluation": "5: Excellent", "intersection_comment": "-- Development of model summarisation and interpretation methods is of critical importance to making AI-based systems useful models in neuroscience. ", "intersection": "5: Outstanding", "comment": "-- A great deal of the text, in both the abstract and manuscript, is given over to bombastic excitement about the power and virtues of the method. 
I understand the desire to communicate the potential of a new method, but this would be a more informative, self-contained piece of work if more space and attention were dedicated to describing the methods involved in the proof-of-principle study reported. ", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}, {"title": "Potentially significant results for neural network model reduction moderately obscured by unclear writing", "importance": "4: Very important", "importance_comment": "This work advances previous work that introduced a three-layer convolutional neural network (CNN) model that both reproduces retinal ganglion cell responses with high accuracy and whose hidden neurons are well correlated with retinal interneurons, by elucidating the computations that are happening between input and output neurons in both the CNN and retina recordings. Issues with clarity make it hard to deduce how successful they are in achieving their stated goals.", "rigor_comment": "The authors introduce a decomposition of the output firing rate into a dynamically weighted sum of the pre-nonlinearity activations of hidden units. This derivation is hampered by an important symbol not being defined (the calligraphic F). Perhaps in equation (1) the calligraphic F was actually supposed to be a calligraphic A. It's not stated if this decomposition is unique. The derivation seems to follow from a use of the Fundamental Theorem of Calculus followed by the chain rule applied to the calligraphic F, but they seem to use that this F evaluated at zero input stimulus is zero, and it isn't clear why this would be the case with a nonzero bias.\n\nThe dynamically weighted sum is truncated based on the magnitude of the terms. Since these terms depend on time t, it isn't clear how this magnitude is taken.\n\nWhile the introduction to Section 2 describes the method as \"we first carve out important sub-circuits using modern attribution methods, and then reduce dimensionality by exploiting spatial invariances...\", in their actual approach it seems like they exploit spatial invariances to reduce dimensionality before carving out the sub-circuits.\n\nOne important comparison to have made, I think, is with a model that builds filters based on the input stimuli themselves rather than the hidden unit preactivations. How useful is doing the latter compared to the former? ", "clarity_comment": "The clarity of the work suffers both from missing technical details (see Rigor Comment) and from inadequate interpretation of the results. Since I've already addressed the former, here I'll focus on the latter.\n\nFirst, it isn't clear to me how novel equation (1) is. This equation, as far as I can tell, charts a way to make an optimal filter that takes in hidden unit activations before the nonlinearity is applied and outputs a response r(t) that is close to (or exactly equal to) the true response. Since the topic of building optimal filters is a rich and well-explored one, it is essential to contextualize equation (1) within this body of work.\n\nSecond, it is difficult to interpret how to think of the \"effective weights\" they derive, and the resulting effective filters. In particular, the curves that are plotted in Figure 2, for instance in A-1, aren't very well explained -- are these the preactivations of the hidden units, or the filters applied to these preactivations? 
Why is it that the curves found in the reduced \"three hidden unit\" model in Figure A-1 aren't found among the filters in the \"eight hidden unit\" model of Figure A-1?\n\nThird, it isn't clear to me how the reduced \"filter model\" they derive is mapped back into a CNN framework. The paper claims to provide a method to \"algorithmically extract computational mechanisms, and consequently conceptual insights, from deep CNN models\" by extracting \"a simple, reduced, minimal subnetwork that is most important in generating a complex CNN\u2019s response to any given stimulus.\" However, the \"subnetwork\" does not appear to me to actually be a subnetwork of the CNN, but rather a different, simpler network model that only overlaps with the CNN at the first layer. Can we really be sure that the CNN is implementing the filters as derived in the reduced model? Validation on a held-out test set may be needed to test this hypothesis.", "clarity": "2: Can get the general idea", "evaluation": "3: Good", "intersection_comment": "The paper seeks to shed light on what is going on inside both artificial and biological neural networks. Their finding of a new neural circuit mechanism that implements omitted stimulus response more robustly certainly sheds light on the latter, but I don't feel like they made a very clear case when it comes to the former.", "intersection": "4: High", "comment": "This work seeks to take a significant step in a very interesting and important direction, but issues with clarity make it hard to deduce how successful they are in making this step. The discovery of a new circuit mechanism that implements omitted stimulus response more robustly feels like a very significant contribution, although my lack of familiarity with this body of work makes it hard for me to judge with confidence.", "technical_rigor": "2: Marginally convincing", "category": "Common question to both AI & Neuro"}, {"title": "An interesting case study in model reduction with potential broader applications", "importance": "3: Important", "importance_comment": "The authors state three high-level improvements they want to make to CNN-based models of neural systems:\n\n1 & 2) Capturing computational mechanisms and extracting conceptual insights. Operationally, I'm not quite sure how these are different, so, to me, this goal is roughly \"be explainable\", and progress towards it could be measured e.g. in MDLs.\n\n3) Suggest testable hypotheses.\n\nI agree these are good goals, and I think some progress is made, but that progress seems somewhat limited in scope.", "rigor_comment": "The technical aspects of the paper seem correct, though I have some higher-level conceptual concerns.\n\n1) If I understand correctly, attribution is computed only for a single OSR stimulus video. Is the attribution analysis stable for different stimulus frequencies? If not, is it really an explanation of the OSR?\n\n2) I agree with a concern raised by reviewer 3: It's difficult to see a 1-layer network as a \"mechanistic explanation\" of a 3-layer network.", "clarity_comment": "The flow/high-level organization of the paper works well. Explanations are mostly complete, though some details are missing, e.g. what was the nonlinearity used in the model CNN? 
Also, do the CNN layers correspond to cell populations, and if so, why is it reasonable to collapse the time dimension after the first layer?", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "I believe this paper is addressing questions that many of the workshop attendees will find interesting.", "intersection": "5: Outstanding", "technical_rigor": "2: Marginally convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
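
Note on the decomposition discussed in the second review (AnonReviewer3): the reviewer describes equation (1) as writing the output firing rate r(t) as a dynamically weighted sum of pre-nonlinearity hidden-unit activations, derived via the Fundamental Theorem of Calculus and the chain rule. A plausible reconstruction of the identity being described — an illustration consistent with that description, not the paper's actual equation — writes the readout as \mathcal{F} (the reviewer's "calligraphic F") and the activation vector as \mathbf{a}(t), integrating along the straight path from the zero baseline:

r(t) = \mathcal{F}\big(\mathbf{a}(t)\big)
     = \mathcal{F}(\mathbf{0})
       + \int_0^1 \frac{d}{d\alpha}\,\mathcal{F}\big(\alpha\,\mathbf{a}(t)\big)\,d\alpha
     = \mathcal{F}(\mathbf{0})
       + \sum_i \underbrace{\left[\int_0^1 \frac{\partial \mathcal{F}}{\partial a_i}\big(\alpha\,\mathbf{a}(t)\big)\,d\alpha\right]}_{\,w_i(t)} a_i(t)

The pure weighted-sum form r(t) = \sum_i w_i(t)\,a_i(t) is exact only if \mathcal{F}(\mathbf{0}) = 0, which is precisely the assumption the reviewer flags as unexplained under a nonzero bias. Note also that path-integrated decompositions of this kind are not unique in general — they depend on the choice of baseline and integration path — which bears on the reviewer's question about uniqueness.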
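The same review asks how the magnitude of the time-dependent terms w_i(t) a_i(t) is taken when the weighted sum is truncated. Below is a minimal numerical sketch, in Python, of one natural reading: approximate w_i(t) with a midpoint Riemann sum over the path integral above, then rank hidden units by the time-averaged attribution magnitude. The function names (effective_weights, top_units) and the time-averaging choice are illustrative assumptions, not the paper's actual procedure.

import numpy as np

def effective_weights(F, a, n_steps=50, eps=1e-4):
    # F: callable mapping an activation vector of shape (n_units,) to a scalar rate.
    # a: pre-nonlinearity activations, shape (n_time, n_units).
    # Returns w of shape (n_time, n_units) with r(t) ~ sum_i w[t, i] * a[t, i],
    # valid when F(0) = 0 (the baseline assumption questioned in the review).
    a = np.asarray(a, dtype=float)
    alphas = (np.arange(n_steps) + 0.5) / n_steps      # midpoint rule on [0, 1]
    w = np.zeros_like(a)
    for t in range(a.shape[0]):
        grad_sum = np.zeros(a.shape[1])
        for alpha in alphas:
            x = alpha * a[t]
            for i in range(a.shape[1]):
                dx = np.zeros_like(x)
                dx[i] = eps
                # central finite difference for dF/da_i at the scaled activation
                grad_sum[i] += (F(x + dx) - F(x - dx)) / (2.0 * eps)
        w[t] = grad_sum / n_steps                      # path-averaged gradient
    return w

def top_units(w, a, k=3):
    # Rank hidden units by time-averaged attribution magnitude |w_i(t) a_i(t)|,
    # one way to resolve the time dependence when truncating the weighted sum.
    score = np.mean(np.abs(w * np.asarray(a, dtype=float)), axis=0)
    return np.argsort(score)[::-1][:k]

# Example: a rectified linear readout with zero bias, so F(0) = 0 holds exactly
# and sum_i w[t, i] * a[t, i] reconstructs F(a[t]) up to quadrature error.
rng = np.random.default_rng(0)
v = np.array([1.0, -0.5, 0.2])
F = lambda x: max(float(x @ v), 0.0)
a = rng.normal(size=(20, 3))      # 20 time steps, 3 hidden units
w = effective_weights(F, a)
print(top_units(w, a, k=2))       # indices of the 2 most important hidden units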