{
"forum": "B1em4mFL8H",
"submission_url": "https://openreview.net/forum?id=B1em4mFL8H",
"submission_content": {
"TL;DR": "Spiking recurrent neural networks performing a working memory task utilize long heterogeneous timescales, strikingly similar to those observed in prefrontal cortex.",
"keywords": ["Working memory", "recurrent neural networks", "neuronal timescales"],
"pdf": "/pdf/ef66ac4942695d5eb04960a2be81c79594fe43b3.pdf",
"authors": ["Robert Kim", "Terrence J. Sejnowski"],
"title": "Spiking Recurrent Networks as a Model to Probe Neuronal Timescales Specific to Working Memory",
"abstract": "Cortical neurons process and integrate information on multiple timescales. In addition, these timescales or temporal receptive fields display functional and hierarchical organization. For instance, areas important for working memory (WM), such as prefrontal cortex, utilize neurons with stable temporal receptive fields and long timescales to support reliable representations of stimuli. Despite of the recent advances in experimental techniques, the underlying mechanisms for the emergence of neuronal timescales long enough to support WM are unclear and challenging to investigate experimentally. Here, we demonstrate that spiking recurrent neural networks (RNNs) designed to perform a WM task reproduce previously observed experimental findings and that these models could be utilized in the future to study how neuronal timescales specific to WM emerge.",
"authorids": ["rkim@salk.edu", "terry@salk.edu"],
"paperhash": "kim|spiking_recurrent_networks_as_a_model_to_probe_neuronal_timescales_specific_to_working_memory"
},
"submission_cdate": 1568211754724,
"submission_tcdate": 1568211754724,
"submission_tmdate": 1572480221033,
"submission_ddate": null,
"review_id": ["ByglsSCfDr", "rklOb5Yqvr", "SkeXJMa9wS"],
"review_url": ["https://openreview.net/forum?id=B1em4mFL8H&noteId=ByglsSCfDr", "https://openreview.net/forum?id=B1em4mFL8H&noteId=rklOb5Yqvr", "https://openreview.net/forum?id=B1em4mFL8H&noteId=SkeXJMa9wS"],
"review_cdate": [1569019288503, 1569524224050, 1569538523422],
"review_tcdate": [1569019288503, 1569524224050, 1569538523422],
"review_tmdate": [1570047565364, 1570047545571, 1570047542233],
"review_readers": [
["everyone"],
["everyone"],
["everyone"]
],
"review_writers": [
["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper34/AnonReviewer2"],
["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper34/AnonReviewer1"],
["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper34/AnonReviewer3"]
],
"review_reply_count": [{
"replyCount": 0
}, {
"replyCount": 0
}, {
"replyCount": 0
}],
"review_replyto": ["B1em4mFL8H", "B1em4mFL8H", "B1em4mFL8H"],
"review_content": [{
"evaluation": "4: Very good",
"intersection": "2: Low",
"importance_comment": "Interesting and important initial characterization and comparison of neural response dynamics between an RNN and neural data. Was not clear whether similarities were surprising or inevitable: e.g. would an untrained net have diversity of autocorrelation decay. No novel predictions or insights into neural data were provided or at least made clear.",
"clarity": "5: Impeccable",
"technical_rigor": "4: Very convincing",
"intersection_comment": "Used RNN model which is also used in AI but authors never explicitly make connection to AI applications or otherwise. It would have been nice if they gave some indication or speculation of what in the networks changes to achieve different decays in autocorrelation which could then be matched to the natural time scale of AI tasks if prior knowledge was available.",
"rigor_comment": "Results were convincing and carefully performed. The task DMS task seems simple enough (multiplication). It would have been nice to motivate training by gradient descent vs what could presumably be designed by hand. Perhaps it is not so simple? Quantification of variability across networks for different training regimes and architecture would have been nice and provided some indication of how general the results are.",
"comment": "The clarity and simplicity of the measurements made in the study are very nice. It would have been nice if the study had gone a step past characterization and comparison to novel prediction and insight. Whats the function of short time scale units? What in the weights of networks creates different time scales and diversity of time scales? Over all the work is preliminary but of high quality and interest. Good work!",
"importance": "3: Important",
"title": "Interesting characterization and comparison of RNN and neural time scales",
"category": "AI->Neuro",
"clarity_comment": "Very clear exposition, excellent figures. In areas was brief on motivation and model set up but two pages isn\u2019t a lot of space and they cite relevant literature. RNN Model vs experimental model (line 73)? Experimental model is the neural data? Calling it a model is confusing if it is neural data.\n"
}, {
"title": "Potentially interesting work on neuronal timescales, but a disentanglement is needed",
"importance": "4: Very important",
"importance_comment": "In this study, the author(s) trained spiking RNNs with BPTT in delayed/instant tasks, then compared the autocorrelation of the trained units with data from macaque. Though the presented work have some technical and biological issues, I think this work is potentially interesting. ",
"rigor_comment": "The author(s) used the word \u201cintrinsic timescale\u201d and \u201cautocorrelation\u201d interchangeably, but they are different. In particular, even if the intrinsic timescales of the neurons are the same, depending on the connectivity structure, neurons develop different autocorrelation (see eg. R Chaudhuri et al., Neuron, 2015). Moreover, because the author used autocorrelation sigma, instead of the intrinsic synaptic decay constant tau_s, for the analysis, it is impossible to tell if the functional segregation shown in Fig. 3 is originated from optimization of w or tau_s. Thus, I believe further clarification is needed.",
"clarity_comment": "\u201cexperimental model\u201d is a bit misleading; \u201cexperimental data\u201d is probably better.",
"clarity": "4: Well-written",
"evaluation": "4: Very good",
"intersection_comment": "In this study, the author(s) trained RNN with a ML method (BPTT), then compared that with the experimental data. That, I think, is a nice intersection.",
"intersection": "4: High",
"comment": "As mentioned on the rigor section, the author(s) should disentangle the effect of w and tau_s, for instance, by comparing the performance of RNN with or without optimization of tau_s, or analysing the effect of tau_s more directly.\nAnother potential issue is the biological meaning of the optimization of tau_s. For a given pair of neurotransmitter and receptor, the variability of the synaptic time constant is unlikely to be large. I think it is biologically more plausible to optimize AMPA/NMDA ratio, while fixing tau_AMPA and tau_NMDA at their typical values. ",
"technical_rigor": "2: Marginally convincing",
"category": "AI->Neuro"
}, {
"title": "Clear and straightforward modeling exploration, but it is less clear how the work pushes understanding",
"importance": "2: Marginally important",
"importance_comment": "The question of how networks maintain memory over long timescales is a longstanding and important one, and to my knowledge this question hasn't been thoroughly explored in spiking, trained recurrent neural networks (RNN). The importance is tempered by the findings only covering what is to be expected, and not pushing beyond this or describing a path to push beyond this.",
"rigor_comment": "The work would benefit from more detailed discussion of the training algorithm that provides some indication that the results aren't unduly sensitive to these details. In particular, the setting of synaptic decay constants is an important detail in a paper about working memory. A short discussion of other training algorithms (such as surrogate gradient or surrogate loss methods) and why the given one was chosen instead would have been helpful. A comparison with Bellec et al. 2018, which looks at working memory tasks in spiking networks, would also have been appropriate.\n\nThe statistical tools are fairly well described and appear to be well-suited for illustrating the phenomena of interest. I feel that more tools should have been used to further support or push the results. For instance, while the heatmaps in Figure 3 provide visual evidence for their claims (except see my comments below), the work could have benefitted from a quantification of this evidence. For instance, it is hard to see differences between the cue periods in the bottom two heatmaps, but differences may appear in some numerical measure of the average discriminability over these regions.",
"clarity_comment": "The technical details are presented clearly on the whole. However, I feel that the work lacked clarity when it came to interpretation of the results. For instance, the claim of \"stronger cue-specific differences across the cue stimulus window\" between fast and slow intrinsic timescale neurons in the RNN model isn't clearly supported by the heatmap in Figure 3 -- the cue-specific differences for the short instrinsic timescale group to me appears to be at least as great as that of the long intrinsic timescale group within the cue stimulus window. I would be curious to know if making the input weaker or only giving it to a random subset of neurons makes this phenomenon more apparent. \n\nIt seems that one of the main points of the work is that \"longer intrinsic timescales correspond to more stable coding\", but I didn't find that this point was made very convincingly. The work would have benefited from a discussion of the implications of longer intrinsic timescale neurons retaining task-relevant information for longer -- in particular, this finding feels a bit \"trivial\" without the case being made for why this should push understanding in the field. I think the interesting part may be in quantifying just how much of a difference there is between short and long timescale neurons -- for instance, does task-relevant information in both neuron groups fall off in a way that can be well predicted by their intrinsic time constants? How does this relate to their synaptic time constants? Does limiting the synaptic time constants limit the intrinsic time constants, and if so by how much?\n\nThe same type of comments apply to the second part of the results, which demonstrates that a task that doesn't require working memory results in neurons with shorter intrinsic timescales compared to the working memory task.",
"clarity": "4: Well-written",
"evaluation": "3: Good",
"intersection_comment": "The authors use an artificial network model to shed light on the biological mechanisms enabling and shaping working memory in the brain. The paper in the process reveals some (expected) results about how spiking RNNs behave on a working memory task. The proof-of-concept work (among others) that this can be done with spiking RNN may inspire more work in this area.",
"intersection": "4: High",
"comment": "The work is a basic proof-of-concept of results that may not do much to advance understanding since they are what one would expect to see (i.e. the antithesis of their thesis seems very unlikely). Looking into the nuances of the explored phenomena may provide new information for the field. The paper should also seek to connect with more of the recent work being done in spiking recurrent neural networks.",
"technical_rigor": "4: Very convincing",
"category": "Common question to both AI & Neuro"
}],
"comment_id": [],
"comment_cdate": [],
"comment_tcdate": [],
"comment_tmdate": [],
"comment_readers": [],
"comment_writers": [],
"comment_reply_content": [],
"comment_content": [],
"comment_replyto": [],
"comment_url": [],
"meta_review_cdate": null,
"meta_review_tcdate": null,
"meta_review_tmdate": null,
"meta_review_ddate ": null,
"meta_review_title": null,
"meta_review_metareview": null,
"meta_review_confidence": null,
"meta_review_readers": null,
"meta_review_writers": null,
"meta_review_reply_count": null,
"meta_review_url": null,
"decision": "Accept (Poster)"
}