{"forum": "HJlVEQt8Lr", "submission_url": "https://openreview.net/forum?id=HJlVEQt8Lr", "submission_content": {"TL;DR": " Inspired by neuroscience research, solve three key weakness of the widely-cited recurrent attention model by simply adding two terms on the objective function.", "keywords": [], "pdf": "/pdf/f8da9fb722ff669f02f69e208e234890ddc343e4.pdf", "authors": ["Jialin Lu"], "title": "Revisit Recurrent Attention Model from an Active Sampling Perspective", "abstract": "We revisit the Recurrent Attention Model (RAM, Mnih et al. (2014)), a recurrent neural network for visual attention, from an active information sampling perspective. \n\nWe borrow ideas from neuroscience research on the role of active information sampling in the context of visual attention and gaze (Gottlieb, 2018), where the author suggested three types of motives for active information sampling strategies. We find the original RAM model only implements one of them.\n\nWe identify three key weakness of the original RAM and provide a simple solution by adding two extra terms on the objective function. The modified RAM 1) achieves faster convergence, 2) allows dynamic decision making per sample without loss of accuracy, and 3) generalizes much better on longer sequence of glimpses which is not trained for, compared with the original RAM. \n", "authorids": ["luxxxlucy@gmail.com"], "paperhash": "lu|revisit_recurrent_attention_model_from_an_active_sampling_perspective"}, "submission_cdate": 1568211755950, "submission_tcdate": 1568211755950, "submission_tmdate": 1570834985607, "submission_ddate": null, "review_id": ["HJgUw73Lvr", "r1l5oK39wr", "ByxAGCqsvH"], "review_url": ["https://openreview.net/forum?id=HJlVEQt8Lr¬eId=HJgUw73Lvr", "https://openreview.net/forum?id=HJlVEQt8Lr¬eId=r1l5oK39wr", "https://openreview.net/forum?id=HJlVEQt8Lr¬eId=ByxAGCqsvH"], "review_cdate": [1569272669958, 1569536417685, 1569594901571], "review_tcdate": [1569272669958, 1569536417685, 1569594901571], "review_tmdate": [1570047561589, 1570047543836, 1570047533109], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper37/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper37/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper37/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJlVEQt8Lr", "HJlVEQt8Lr", "HJlVEQt8Lr"], "review_content": [{"evaluation": "3: Good", "intersection": "4: High", "importance_comment": "Active sampling presents a framework for improving the efficiency of artificial neural networks in tasks requiring interaction between an agent and an environment. Approaches like this are important to try to reduce the training time needed and online processing requirements for artificial agents to make decisions in the real world.", "clarity": "4: Well-written", "technical_rigor": "2: Marginally convincing", "intersection_comment": "This paper is very clearly at the intersection of neuroscience and artificial intelligence, using a well-defined theory from neuroscience to improve a popular model in the AI literature.", "rigor_comment": "The authors present their interpretation of Gottlieb's three motives for implementing active sampling. 
They conclude that the Recurrent Attention Model implements the first of these three (increase expected reward), and propose objective functions that achieve the remaining two: 1) reducing the uncertainty of belief states, and 2) something related to the intrinsic utility or disutility of anticipating a positive or negative outcome.\n\nThe quote from Gottlieb (2018) outlining these objectives leaves a lot to interpretation, but the authors present a reasonable method for reducing the uncertainty of belief states that has the bonus feature of providing control over the number of glimpses required to make a decision. However, I do not see how the belief in the sparsity of the output is related to the utility of making a prediction.\n\nNevertheless, the authors show that their new objective function improves the convergence of the recurrent attention model. Both new terms improve convergence in isolation, although the output sparsity objective appears to do most of the heavy lifting. They also show that by using their uncertainty measure to dynamically determine the number of glimpses, they increase test accuracy, most of which seems to be accounted for by minimizing the uncertainty in belief rather than output sparsity. ", "comment": "Strengths:\n\nThe additional objectives inspired by neuroscience make convergence faster on training accuracy and increase test accuracy on MNIST. They also provide a system to dynamically control the number of glimpses required to make a decision.\n\nAreas for improvement:\n\nI would like to see either a more convincing rationale for your sparsity objective, or an objective that directly addresses the intrinsic utility of an outcome. I appreciate that MNIST may not be the best task for this, and also that utility will be task- and agent-specific.", "importance": "3: Important", "title": "Modifications to Recurrent Attention Model inspired by active sampling in neuroscience improve convergence and generalization.", "category": "Neuro->AI", "clarity_comment": "The motivation, method, and results of the paper were well written and easy to follow. The only difficulty I had was in reading figure 1, which was excessively small."}, {"title": "Neuroscience-inspired recurrent attention model", "importance": "3: Important", "importance_comment": "The authors apply insights from neuroscience to an important problem in artificial intelligence: the problem of active sampling.\n", "rigor_comment": "The paper is transparent and benchmarks its approach against existing approaches. \n", "clarity_comment": "The paper is conceptually easy to follow, although there are several minor spelling / grammar / typographical issues. \n", "clarity": "3: Average readability", "evaluation": "4: Very good", "intersection_comment": "The paper directly applies a conceptual approach from neuroscience to improve an existing, widely cited technique in active image sampling. \n", "intersection": "5: Outstanding", "comment": "This paper addresses an important problem in artificial intelligence, in the context of existing techniques, and uses neuroscience as inspiration to propose new techniques.\n\nGreater detail in the crucial paragraphs developing the concepts and computations of J-uncertainty and J-intrinsic would be helpful. In particular, why does the new cost term protect the RAM approach from performance degradation with higher numbers of glimpses? 
This problem is raised and appears to be addressed in the experimental results, but it is unclear why the insights from neuroscience help make this possible. \n\nIn Figure 2, the \u201cboth new terms, dynamic\u201d plot does not extend into the regime where performance degradation is most extreme; while this may be a result of the technique used to evaluate the dynamic case, it somewhat undercuts the claim that the dynamic case is roughly equivalent in performance.", "technical_rigor": "3: Convincing", "category": "Neuro->AI"}, {"title": "Exploration of an interesting extension idea is hindered by issues replicating the original RAM results", "importance": "2: Marginally important", "importance_comment": "Training RAM networks faster and making their execution more flexible is potentially useful, but unfortunately I see the experiments provided as too weak to support this paper's approach.", "rigor_comment": "Unless I'm missing something, it looks like the authors fail to replicate the setup they are trying to extend. Fig. 1 shows a baseline training error rate of 20%. The original RAM paper reports a test error rate of 1%, and linear regression yields an error rate of around 9%, so to me this points to a bug. Confusingly, Fig. 2 shows a baseline training error rate of 5%. Since these are both inconsistent with each other and far from the original performance measurements, interpreting the extensions' performance measurements is very difficult. Also, since the authors refer often to the original paper but never mention this very large performance disparity, the omission seems borderline dishonest.", "clarity_comment": "At a high level, the writing is clear, but some of the technical parts could use a revision pass, e.g. the use of the word \"bound\" on line 112 is potentially confusing, as is the sentence about merging objectives in line 105. The loss names are also a bit unintuitive.", "clarity": "3: Average readability", "evaluation": "2: Poor", "intersection_comment": "This is relevant to the workshop, but it's more of a psychology-inspired approach to solving machine vision problems than a bridge between ML and neuroscience, and it would fit in about as well e.g. in the main conference at CVPR as it would here.", "intersection": "3: Medium", "technical_rigor": "1: Not convincing", "category": "Neuro->AI"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
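The record above describes the proposed modification only in prose: two extra terms added to RAM's objective (referred to by the reviewers as J-uncertainty and J-intrinsic / output sparsity) and an uncertainty-driven rule for choosing the number of glimpses per sample. As a rough, non-authoritative illustration only, the sketch below assumes the uncertainty term is the entropy of the predictive distribution and the sparsity term is a Gini-style peakedness penalty on the output probabilities; the function names, loss weights, and entropy threshold are hypothetical and are not taken from the paper.

```python
# Hypothetical PyTorch sketch of the two auxiliary terms and the dynamic
# glimpse-stopping rule discussed in the reviews. All names, weights, and the
# threshold are illustrative assumptions, not the paper's actual definitions.
import torch
import torch.nn.functional as F


def extra_objective_terms(logits, lambda_uncertainty=0.1, lambda_sparsity=0.1):
    """Auxiliary penalties on class logits produced after a glimpse.

    logits: tensor of shape (batch, num_classes).
    """
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # "J_uncertainty": entropy of the belief over classes; minimizing it pushes
    # the model toward confident (low-uncertainty) beliefs at each glimpse.
    j_uncertainty = -(probs * log_probs).sum(dim=-1).mean()
    # Output-sparsity term (one possible reading of the sparsity objective):
    # 1 - sum(p^2) is zero for a one-hot output and largest for a uniform one,
    # so penalizing it encourages peaked, "sparse" predictions.
    j_sparsity = (1.0 - probs.pow(2).sum(dim=-1)).mean()
    return lambda_uncertainty * j_uncertainty + lambda_sparsity * j_sparsity


def dynamic_glimpse_loop(step_fn, state, max_glimpses=12, entropy_threshold=0.2):
    """Take glimpses until the predictive entropy drops below a threshold.

    step_fn(state) -> (logits, new_state) stands in for one RAM glimpse step;
    both it and the threshold are placeholders for illustration.
    """
    logits = None
    for t in range(max_glimpses):
        logits, state = step_fn(state)
        probs = F.softmax(logits, dim=-1)
        entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)
        if entropy.max().item() < entropy_threshold:  # confident enough, stop early
            break
    return logits, t + 1


if __name__ == "__main__":
    # Toy check with random logits for a batch of 4 samples over 10 classes.
    logits = torch.randn(4, 10)
    print(extra_objective_terms(logits))
```

In training, such penalties would simply be added to RAM's usual hybrid loss (REINFORCE on the glimpse-location policy plus cross-entropy on the final classification); at test time the loop stops as soon as the belief is confident, which is what would enable the dynamic, per-sample number of glimpses discussed in the first review.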