AMSR / conferences_raw / neuroai19 / neuroai19_Syl0NmtLIr.json
{"forum": "Syl0NmtLIr", "submission_url": "https://openreview.net/forum?id=Syl0NmtLIr", "submission_content": {"TL;DR": "LSTMs can more effectively model the working memory if they are learned using reinforcement learning, much like the dopamine system that modulates the memory in the prefrontal cortex ", "keywords": ["deep learning", "working memory", "recurrent neural networks", "reinforcement learning", "brain modelling"], "pdf": "/pdf/03cdc8471aaf39a0a8f684b1bba0016f3849668f.pdf", "authors": ["Pravish Sainath", "Pierre Bellec", "Guillaume Lajoie"], "title": "Modelling Working Memory using Deep Recurrent Reinforcement Learning", "abstract": "In cognitive systems, the role of a working memory is crucial for visual reasoning and decision making. Tremendous progress has been made in understanding the mechanisms of the human/animal working memory, as well as in formulating different frameworks of artificial neural networks. In the case of humans, the visual working memory (VWM) task is a standard one in which the subjects are presented with a sequence of images, each of which needs to be identified as to whether it was already seen or not. \n\nOur work is a study of multiple ways to learn a working memory model using recurrent neural networks that learn to remember input images across timesteps. We train these neural networks to solve the working memory task by training them with a sequence of images in supervised and reinforcement learning settings. The supervised setting uses image sequences with their corresponding labels. The reinforcement learning setting is inspired by the popular view in neuroscience that the working memory in the prefrontal cortex is modulated by a dopaminergic mechanism. We consider the VWM task as an environment that rewards the agent when it remembers past information and penalizes it for forgetting. \n \nWe quantitatively estimate the performance of these models on sequences of images from a standard image dataset (CIFAR-100). Further, we evaluate their ability to remember and recall as they are increasingly trained over episodes. Based on our analysis, we establish that a gated recurrent neural network model with long short-term memory units trained using reinforcement learning is powerful and more efficient in temporally consolidating the input spatial information. \n\nThis work is an initial analysis as a part of our ultimate goal to use artificial neural networks to model the behavior and information processing of the working memory of the brain and to use brain imaging data captured from human subjects during the VWM cognitive task to understand various memory mechanisms of the brain. 
\n", "authorids": ["pravishsainath@gmail.com", "pierre.bellec@criugm.qc.ca", "lajoie@dms.umontreal.ca"], "paperhash": "sainath|modelling_working_memory_using_deep_recurrent_reinforcement_learning"}, "submission_cdate": 1568211765983, "submission_tcdate": 1568211765983, "submission_tmdate": 1574782679531, "submission_ddate": null, "review_id": ["SyxBLsCUvH", "rJeFy3jtPH", "S1eWefntvB"], "review_url": ["https://openreview.net/forum?id=Syl0NmtLIr&noteId=SyxBLsCUvH", "https://openreview.net/forum?id=Syl0NmtLIr&noteId=rJeFy3jtPH", "https://openreview.net/forum?id=Syl0NmtLIr&noteId=S1eWefntvB"], "review_cdate": [1569282892520, 1569467360936, 1569468904933], "review_tcdate": [1569282892520, 1569467360936, 1569468904933], "review_tmdate": [1570047561176, 1570047551721, 1570047551314], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper61/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper61/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper61/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Syl0NmtLIr", "Syl0NmtLIr", "Syl0NmtLIr"], "review_content": [{"evaluation": "2: Poor", "intersection": "3: Medium", "importance_comment": "The paper addresses the potential for modelling working visual memory processes with recurrent neural network architectures. Understanding these mechanisms is an important task in neuroscience, but I do not believe that the computational modeling approaches presented here are sufficient for publication.", "clarity": "2: Can get the general idea", "technical_rigor": "2: Marginally convincing", "intersection_comment": "The presented models are clearly inspired by the structure and function of cortical visual stimulus processing regions, but that connection is at best one directional. No loop back to biological function is made in this paper regarding validation of their presented models. They state that the goal of this work is to apply these systems to the analysis of actual recorded data; such work may provide the full connection required to validate some aspects of the work presented here.", "rigor_comment": "The development of the models presented is not discussed and only references other work. Model implementation is also not discussed.\n\nThe results presented show evidence of systems learning more accurate representations of the labeled data that has been presented to them, but the differences between model performance metrics is not adequately explored here. Statistical significance statements are non-existent and no form of hypothesis test is presented regarding model accuracy. One can assume as much that a conjugate prior for the binary classification task can easily enough produce a 95% confidence statement regarding \"chance\" in this sort of test, but no such statement is presented.", "importance": "2: Marginally important", "title": "Interesting result, lacks composition", "category": "AI->Neuro", "clarity_comment": "While the overall goal of the paper is clearly stated in the abstract, the publication loses clarity after the introduction. The \"models\" section is structured in an unusually segmented manner that fails to adequately detail the functional or structural similarities between them. Furthermore, the language use is poor. Overlapping clauses and run-on sentences dominate much of the text in this section.\n\nThe results section lacks a clear message comparing the performance of the different models. 
The figures show positive results regarding the ability of the models to perform a binary classification task on the CIFAR-100 image database, but the comparisons are incomplete. Moreover, the lack of any baseline comparison metric or statistical significance statements makes their importance hard to interpret. Figure captions themselves are hard to interpret, as they contain incomplete sentences and don't fully detail the information shown in the figure.\n\nThe biggest lack of clarity here is found in the gulf between motivation and result: if they're ostensibly attempting to model an organism's visual working memory functions, they haven't stated what organism that is and furthermore haven't made any real connection in the work between their presented models and the physiological systems that they're attempting to model. The only mention of biomimetic form or function is made in passing, and the reader is led to assume entirely the results of cited works by Braver and D'Ardenne. No confirmation or recreation of those cited results is attempted here."}, {"title": "Potentially important result but hard to assess rigor because it lacks detail.", "importance": "3: Important", "importance_comment": "The authors test different mechanistic models of working memory. They conclude that a model trained with reinforcement learning outperforms the other models. However, the conclusions are hard to assess because the submission lacks detail.", "rigor_comment": "It is difficult to assess rigor.", "clarity_comment": "The submission lacks clarity and detail. ", "clarity": "2: Can get the general idea", "evaluation": "2: Poor", "intersection_comment": "Testing different mechanistic models of working memory is important for both Neuro and AI.", "intersection": "4: High", "comment": "One strength of the work is that the authors relate their models to biological models of working memory. However, the conclusions would be stronger if more details were included: 1) more details on the testing procedure, 2) what statistical test was used to arrive at the conclusions, and 3) what are the errorbars in Figures 2-4?", "technical_rigor": "2: Marginally convincing", "category": "AI->Neuro"}, {"title": "Poor choice of task to study an interesting topic", "importance": "2: Marginally important", "importance_comment": "The paper focuses on working memory and reinforcement learning, which is an interesting topic. However, the choice of task is not a good one to probe this topic. The task as described is simply a familiarity detection task, and there is no reason to think that this is a good task for RL. In general, I do not see much that this study provides beyond previous work on deep RL.", "rigor_comment": "As described above, the choice of tasks is poor, which likely contributes to the modest improvement using RL as compared to supervised learning (it is not clear if this improvement in statistically significant). The paper does not provide intuition for the reason behind this purported improvement.", "clarity_comment": "Insufficient intuition for the results is presented. The authors are somewhat loose with their definition of a \"context\".", "clarity": "2: Can get the general idea", "evaluation": "2: Poor", "intersection_comment": "The general topic has the potential for interdisciplinary interest, but only if the task were changed to something more appropriate. 
The authors do not analyze the representations formed in the networks they study, which would be a necessary step to connect their approaches to biology.", "intersection": "3: Medium", "technical_rigor": "2: Marginally convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
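Editorial note: the abstract's reinforcement-learning formulation of the VWM task can be made concrete with a small sketch. What follows is not the authors' implementation (the paper's PDF is not reproduced in this record): the environment, the +1/-1 reward scheme, the CNN-plus-LSTM agent, and the REINFORCE-style update are all assumptions filled in around the abstract's description of a sequence of CIFAR-100 images and an agent rewarded for remembering whether an image was already seen and penalized for forgetting. All names (VWMEnvironment, RecurrentAgent, run_episode) are hypothetical.

import random
import torch
import torch.nn as nn

class VWMEnvironment:
    """Toy visual-working-memory episode: at each timestep the agent sees an
    image and must judge whether it appeared earlier in the sequence.
    The +1/-1 reward scheme is an assumption; the abstract only says the
    agent is rewarded for remembering and penalized for forgetting."""

    def __init__(self, images, seq_len=10, repeat_prob=0.3):
        self.images = images          # tensor of candidate images, e.g. CIFAR-100
        self.seq_len = seq_len
        self.repeat_prob = repeat_prob

    def episode(self):
        shown = []
        for _ in range(self.seq_len):
            if shown and random.random() < self.repeat_prob:
                idx = random.choice(shown)          # repeat a previously shown image
            else:
                idx = random.randrange(len(self.images))
            label = int(idx in shown)               # 1 = "already seen", 0 = "new"
            shown.append(idx)
            yield self.images[idx], label

class RecurrentAgent(nn.Module):
    """CNN encoder -> LSTM cell -> binary 'seen/unseen' policy head
    (a generic stand-in for the gated recurrent models the paper compares)."""

    def __init__(self, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),   # 32x32 -> 16x16
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),  # 16x16 -> 8x8
            nn.Flatten(),
            nn.Linear(32 * 8 * 8, hidden),
        )
        self.lstm = nn.LSTMCell(hidden, hidden)
        self.policy = nn.Linear(hidden, 2)

    def forward(self, image, state):
        feat = self.encoder(image.unsqueeze(0))     # batch of one image
        h, c = self.lstm(feat, state)               # state=None starts from zeros
        return self.policy(h), (h, c)

def run_episode(env, agent, optimizer):
    """One REINFORCE-style update; the policy-gradient choice is assumed,
    as the abstract does not name the RL algorithm used."""
    state, log_probs, rewards = None, [], []
    for image, label in env.episode():
        logits, state = agent(image, state)
        dist = torch.distributions.Categorical(logits=logits.squeeze(0))
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        rewards.append(1.0 if action.item() == label else -1.0)
    # Undiscounted reward-to-go for each timestep.
    returns = torch.tensor(rewards).flip(0).cumsum(0).flip(0)
    loss = -(torch.stack(log_probs) * returns).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return sum(rewards)

if __name__ == "__main__":
    images = torch.randn(500, 3, 32, 32)   # random stand-in for preprocessed CIFAR-100
    agent = RecurrentAgent()
    optimizer = torch.optim.Adam(agent.parameters(), lr=1e-3)
    print(run_episode(VWMEnvironment(images), agent, optimizer))

One design point worth noting: because the reward arrives at every timestep, the reward-to-go already carries a memory signal, which is one intuition for why a recurrent agent trained this way is pushed to consolidate past inputs; the reviewers' complaint that the paper gives no such intuition still stands.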