{"forum": "H1xI7XYULr", "submission_url": "https://openreview.net/forum?id=H1xI7XYULr", "submission_content": {"abstract": "Animals excel at adapting their intentions, attention, and actions to the environment, making them remarkably efficient at interacting with a rich, unpredictable and ever-changing external world, a property that intelligent machines currently lack. Such adaptation property strongly relies on cellular neuromodulation, the biological mechanism that dynamically controls neuron intrinsic properties and response to external stimuli in a context dependent manner. In this paper, we take inspiration from cellular neuromodulation to construct a new deep neural network architecture that is specifically designed to learn adaptive behaviours. The network adaptation capabilities are tested on navigation benchmarks in a meta-learning context and compared with state-of-the-art approaches. Results show that neuromodulation is capable of adapting an agent to different tasks and that neuromodulation-based approaches provide a promising way of improving adaptation of artificial systems.", "keywords": ["neuromodulation", "deep learning", "reinforcement learning"], "title": "Cellular neuromodulation in artificial networks", "authors": ["Vecoven Nicolas", "Ernst Damien", "Wehenkel Antoine", "Drion Guillaume"], "TL;DR": "This paper introduces neuromodulation in artificial neural networks.", "authorids": ["nicolas.vecoven@gmail.com", "dernst@ulg.ac.be", "antoine.wehenkel@uliege.be", "gdrion@ulg.ac.be"], "pdf": "/pdf/290d2f318fb850252228edd433c06bd35415431e.pdf", "paperhash": "nicolas|cellular_neuromodulation_in_artificial_networks"}, "submission_cdate": 1568211742363, "submission_tcdate": 1568211742363, "submission_tmdate": 1572281881373, "submission_ddate": null, "review_id": ["HJgqZXFwvB", "ryx4WfMKDH", "SJxgMQhKwB"], "review_url": ["https://openreview.net/forum?id=H1xI7XYULr&noteId=HJgqZXFwvB", "https://openreview.net/forum?id=H1xI7XYULr&noteId=ryx4WfMKDH", "https://openreview.net/forum?id=H1xI7XYULr&noteId=SJxgMQhKwB"], "review_cdate": [1569325826390, 1569427964131, 1569469191808], "review_tcdate": [1569325826390, 1569427964131, 1569469191808], "review_tmdate": [1570047559888, 1570047555224, 1570047550890], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper4/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper4/AnonReviewer3"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper4/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["H1xI7XYULr", "H1xI7XYULr", "H1xI7XYULr"], "review_content": [{"evaluation": "3: Good", "intersection": "4: High", "importance_comment": "The proposed model is essentially a constrained/specific parameterisation within the broader class of 'context dependent' models. The heavy lifting is seemingly done by well known architectures: default RNN & a feed-forward NN. \nWhile it does not seemingly add anything conceptual, the exact implementation is arguably new.", "clarity": "3: Average readability", "technical_rigor": "2: Marginally convincing", "intersection_comment": "The paper takes a crudely 'neuroscience inspired' concept (though, admittedly it could simply be 'task structure' inspired) and builds a simple model from it, which it benchmarks on a appropriately designed simplest-working-example. So it fits well with the workshop theme.", "rigor_comment": "The model description is nice and clear. 
I think a more persuasive bench marking could be done. Perhaps compare to reference models [11] or [10] rather than a 'vanilla' RNN, as this amounts to not using any prior information about the task (which, by construction, we 'know' is useful).\nAlso perhaps report results from one of the 2 (mentioned) more complex benchmarks.", "comment": "I'd say a fairly 'standard' work for the setting. Only real point for improvement is more earnest bench marking/model comparison. Authors could also add some context by considering related works in the computational neuroscience literature, e.g. Stroud et al. Nature Neurosciencevolume 21, pages 1774\u20131783 (2018) and https://arxiv.org/abs/1902.05522 (though the latter is very recent).", "importance": "2: Marginally important", "title": "Clear enough & relevant, if incremental", "category": "Neuro->AI", "clarity_comment": "Paper is clear and quite readable."}, {"evaluation": "3: Good", "intersection": "4: High", "importance_comment": "Its an open question in neuroscience what the purpose of neuromodulation is in learning and behaviour, given that neuroscientists know a lot about their effects on intrinsic properties of neurons. It's also difficult to develop RL agents that generalize across tasks well. This paper addresses these questions along a similar vein to recent approaches (Miconi et al., 2018, 2019).", "clarity": "4: Well-written", "technical_rigor": "3: Convincing", "intersection_comment": "The paper introduces a neuroscience-inspired solution to training RL agents to a behaviourally relevant problem, therefore is well-positioned at the intersection of neuroscience and AI.", "rigor_comment": "The authors implement a reasonable interpretation of the effects of contextual neuromodulation on the intrinsic properties of neurons via a recurrent neural network influencing the gain of learned scale and bias of node activation functions. The benchmark chosen is simple, and the treatment of the problem is rigorously addressed running over many seeds.", "comment": "Strengths:\n\nThe paper is clearly written, well justified, and model is rigorously tested.\n\nAreas for improvement:\nThe results are modest and I would be keen to see how the approach scales to more difficult benchmarks. The method they choose clearly reduces the variance in rewards gained, which is interesting in of itself. I would like to see whether this holds up. Additionally, I would like to see how this method performs when context must be inferred by the agent.", "importance": "3: Important", "title": "Learned, context-dependent activation functions in Meta-RL tasks", "category": "Neuro->AI", "clarity_comment": "The paper is well-written, with clear figures and descriptions of the model, task, and results. "}, {"title": "Novel structure for context-dependent Meta-RL", "importance": "3: Important", "importance_comment": "The paper presents a novel structure for neural networks that can generalize to new tasks. The structure appears new where a first DNN computes some terms, z, based on a context. The term z is then applied to the weights in the layers in a second network. This structure potentially allows learning across new tasks. The methods are tested on a standard Meta-ML benchmark and appear to outperform state-of-the-art methods.", "rigor_comment": "The paper takes on very challenging state-of-the-art problems with a sophisticated network. The tests against the bench marks is rigorously and thoroughly performed.\n", "clarity_comment": "The paper was mostly well explained. 
I think somewhere early a general statement of the problem would have helped. For example, in Section 2, I think it could have been made more clear what is \"context\" and what is the training data and what is desired goal. How do we measure performance generalization. Also in the training section, some of the details were difficult to follow. But, that could be a result of the space.\n", "clarity": "4: Well-written", "evaluation": "4: Very good", "intersection_comment": "The problem of how algorithms can learn to generalize well across multiple tasks\nand use context is clearly central to both ML and neuroscience. The paper makes a case that the algorithms are \"inspired\" by biological systems. But, if the goal of the paper is to understand how true biological systems work, I think there needs to be more detail on how this architecture would map biologically. But, that obviously is a very hard problem and the results here should still be extremely useful.", "intersection": "4: High", "comment": "An interesting and novel structure for learning across multiple tasks. The results show improvement on state-of-the-art challenging benchmarks. \n\nDetailed strengths and weakness are above.", "technical_rigor": "4: Very convincing", "category": "AI->Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
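
For orientation, the mechanism the reviewers describe (a first network computes a modulation signal z from context, and z then rescales and shifts the activations of a second, main network) can be sketched in a few lines. The sketch below is a hypothetical illustration assembled from the review text alone, not the authors' code: all names and dimensions are assumptions, and a feed-forward context network stands in for the recurrent one the reviews mention, purely for brevity.

```python
import torch
import torch.nn as nn

class NeuromodulatedNet(nn.Module):
    """Minimal sketch of the architecture described in the reviews.

    Hypothetical, not the paper's implementation: a context network
    emits a signal z; z sets a per-neuron scale and bias that modulate
    the main network's hidden activations.
    """

    def __init__(self, obs_dim, ctx_dim, hidden_dim, z_dim, out_dim):
        super().__init__()
        # Context network: maps a task/context summary to z.
        # (The paper reportedly uses an RNN over interaction history.)
        self.ctx_net = nn.Sequential(
            nn.Linear(ctx_dim, hidden_dim), nn.Tanh(),
            nn.Linear(hidden_dim, z_dim),
        )
        # Main network, whose hidden activations z will modulate.
        self.fc1 = nn.Linear(obs_dim, hidden_dim)
        self.fc2 = nn.Linear(hidden_dim, out_dim)
        # Learned maps from z to a per-neuron scale and bias.
        self.scale = nn.Linear(z_dim, hidden_dim)
        self.bias = nn.Linear(z_dim, hidden_dim)

    def forward(self, obs, ctx):
        z = self.ctx_net(ctx)                  # modulation signal
        s, b = self.scale(z), self.bias(z)     # per-neuron gain and shift
        h = torch.tanh(s * self.fc1(obs) + b)  # modulated activation
        return self.fc2(h)

# Usage: feed each observation together with a context summary for the task.
net = NeuromodulatedNet(obs_dim=4, ctx_dim=8, hidden_dim=32, z_dim=16, out_dim=2)
action_logits = net(torch.randn(1, 4), torch.randn(1, 8))
```

The design point the reviews emphasise is that only the activation functions, not the weights of the main network, change with context, which is what distinguishes this parameterisation from a generic context-conditioned RNN.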