{"forum": "Hyl_XXYLIB", "submission_url": "https://openreview.net/forum?id=Hyl_XXYLIB", "submission_content": {"title": "Continual Learning via Neural Pruning", "authors": ["Siavash Golkar", "Micheal Kagan", "Kyunghyun Cho"], "authorids": ["siavash.golkar@gmail.com", "makagan@slac.stanford.edu", "kyunghyun.cho@nyu.edu"], "keywords": ["life-long learning", "catastrophic forgetting"], "TL;DR": "We use simple and biologically motivated modifications of standard learning techniques to achieve state of the art performance on catastrophic forgetting benchmarks.", "abstract": "Inspired by the modularity and the life-cycle of biological neurons,we introduce Continual Learning via Neural Pruning (CLNP), a new method aimed at lifelong learning in fixed capacity models based on the pruning of neurons of low activity. In this method, an L1 regulator is used to promote the presence of neurons of zero or low activity whose connections to previously active neurons is permanently severed at the end of training. Subsequent tasks are trained using these pruned neurons after reinitialization and cause zero deterioration to the performance of previous tasks. We show empirically that this biologically inspired method leads to state of the art results beating or matching current methods of higher computational complexity.", "pdf": "/pdf/aba957bbea9f76cf45c6a917123ccdb45a713101.pdf", "paperhash": "golkar|continual_learning_via_neural_pruning"}, "submission_cdate": 1568211744008, "submission_tcdate": 1568211744008, "submission_tmdate": 1571771888371, "submission_ddate": null, "review_id": ["HklJsXnYvr", "SkePojUswS", "B1xYHaT9PS"], "review_url": ["https://openreview.net/forum?id=Hyl_XXYLIB¬eId=HklJsXnYvr", "https://openreview.net/forum?id=Hyl_XXYLIB¬eId=SkePojUswS", "https://openreview.net/forum?id=Hyl_XXYLIB¬eId=B1xYHaT9PS"], "review_cdate": [1569469334600, 1569577887377, 1569541441098], "review_tcdate": [1569469334600, 1569577887377, 1569541441098], "review_tmdate": [1570047550691, 1570047534707, 1570047533842], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper8/AnonReviewer2"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper8/AnonReviewer1"], ["NeurIPS.cc/2019/Workshop/Neuro_AI/Paper8/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["Hyl_XXYLIB", "Hyl_XXYLIB", "Hyl_XXYLIB"], "review_content": [{"title": "Interesting, albeit straightforward approach to minimizing interference", "importance": "3: Important", "importance_comment": "The authors use sparsification to study continual learning. They claim this is superior to previous approaches that expand networks for subsequent tasks or penalize changes in previous weights. That being said, I am not convinced that this approach is really that different from previous approaches that expand network size with new tasks, since the authors are essentially forcing each task to use largely nonoverlapping subsets of the network", "rigor_comment": "The authors compare their results on permuted MNIST and split CIFAR. For the latter, the results are compared only to Zenke et al. 2017. It would have been nice to see a comparison to a network with non-fixed architecture but comparable network size after training on all tasks.", "clarity_comment": "The paper is well-written. However, additional discussion about the central assumption of the model, that the \"interference\" weights can be set to zero and ignored, would be helpful. 
", "clarity": "4: Well-written", "evaluation": "3: Good", "intersection_comment": "The authors attempt to connect their results to neuroscience by noting the plausibility of their approach. However, the results seem to suggest a sparsening of representations from lower to higher layers in the network, which at least for the visual system seems it may be counter to the experimental findings. Also, there is no discussion of the biological process corresponding to the determination of which weights are \"interference\" weights during the learning of a new task.", "intersection": "3: Medium", "technical_rigor": "3: Convincing", "category": "AI->Neuro"}, {"title": "Impressive step forward", "importance": "4: Very important", "importance_comment": "This is a clever idea, implemented well, and showing good progress on an extremely difficult and important problem.", "rigor_comment": "The methodology and analysis are as rigorous as field standards. I might have liked to see plots of the validation performance as a function of the three hyper parameters optimised using grid search, to get a feeling for the robustness of the methods (the plot in Fig 3a implies that the results are quite sensitive to these choices).", "clarity_comment": "This is an excellently written paper, carefully covering the background literature, well-paced intuitive explanation of the key idea, and straightforward presentation of the results.", "clarity": "4: Well-written", "evaluation": "5: Excellent", "intersection_comment": "The innovations are biologically inspired, but it is clearly an ML paper. It is not obvious to me that the findings have any direct implications for our understanding of the brain.", "intersection": "3: Medium", "comment": "It would be great to back up these empirical findings with some mathematical analysis, even on a toy version of the model. The idea makes intuitive sense, but fully exploiting it and indeed understanding its limitations is going to be hard to do with experiments alone. This may for example help with principled selection of the hyper parameters depending on the data structure.", "technical_rigor": "4: Very convincing", "category": "Neuro->AI"}, {"title": "Interesting proposal to do continual learning by pruning with preliminary promising results", "importance": "4: Very important", "importance_comment": "This paper attempts to address an important problem. The method proposed are intuitive and reasonable, which could potentially inspire future work.", "rigor_comment": "- The authors tested the method in a two sets of experiments. The task is created based on permutation/split of images, thus the tasks are quite similar. Did the authors tested quite different tasks, for example, learning to classify MNIST then CIFAR and so on?\n- In terms of parameter m, the authors used 0.05%-2%. Would these numbers generalize to new tasks?\n", "clarity_comment": "I found the writing is generally clear. It is not difficult to follow the paper.", "clarity": "3: Average readability", "evaluation": "3: Good", "intersection_comment": "The paper would be stronger if the authors could refer to some neuroscience literature on pruning of synapse in the brain.", "intersection": "3: Medium", "comment": "The paper proposes a new method to perform lifelong learning. The basic idea is to prune the neurons of zero or low activity and use these neurons for later tasks. 
The pruning procedure leads to a set of weights that can be changed freely without causing any change to the output of the network.\nI have not been following all of the previous work on continual learning, but I really like the idea and the approach the authors are taking. The results shown in Fig. 3 are promising. Overall, I think this is a strong submission.\n", "technical_rigor": "3: Convincing", "category": "Common question to both AI & Neuro"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": null, "meta_review_tcdate": null, "meta_review_tmdate": null, "meta_review_ddate ": null, "meta_review_title": null, "meta_review_metareview": null, "meta_review_confidence": null, "meta_review_readers": null, "meta_review_writers": null, "meta_review_reply_count": null, "meta_review_url": null, "decision": "Accept (Poster)"}
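To make the pruning-and-reuse scheme described in the abstract (and summarized in the third review) concrete, below is a minimal NumPy sketch of the weight bookkeeping for a single fully connected layer. It is an illustration under assumptions, not the authors' implementation: the function names (`split_neurons`, `build_masks`), the activity threshold, and the `W[out, in]` weight convention are invented for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

def split_neurons(avg_activity, threshold):
    """Split a layer's units into 'active' (used by earlier tasks) and
    'free' (near-zero average activity, hence reusable) by thresholding."""
    active = avg_activity >= threshold
    return active, ~active

def build_masks(active_in, active_out):
    """For a weight matrix W of shape (n_out, n_in), return boolean masks.

    frozen:    every connection into an active output unit stays fixed,
               so the outputs seen by earlier tasks never change.
    severed:   connections from free input units into active output units;
               these are set to zero (and stay frozen), so retraining the
               free units cannot interfere with earlier tasks.
    trainable: all connections into free output units; these are
               reinitialized and trained on the next task, and may read
               from the frozen active units as fixed features.
    """
    frozen = np.broadcast_to(active_out[:, None],
                             (active_out.size, active_in.size))
    severed = active_out[:, None] & ~active_in[None, :]
    trainable = ~frozen
    return frozen, severed, trainable

# Toy 4 -> 3 layer: average post-ReLU activities measured after training
# the first task (values are made up for illustration); the L1 penalty is
# what pushes some units toward zero activity so they can be freed.
avg_in = np.array([0.80, 0.35, 0.01, 0.00])
avg_out = np.array([0.60, 0.02, 0.00])
active_in, _ = split_neurons(avg_in, threshold=0.05)
active_out, _ = split_neurons(avg_out, threshold=0.05)

W = rng.normal(size=(avg_out.size, avg_in.size))
frozen, severed, trainable = build_masks(active_in, active_out)

W[severed] = 0.0                                      # cut free -> active links
W = np.where(trainable, rng.normal(size=W.shape), W)  # reinit free units' weights

# During training on the next task, gradients would only be applied on the
# trainable entries, e.g. W -= lr * grad * trainable
print(frozen.astype(int))
print(severed.astype(int))
print(trainable.astype(int))
```

In a full model these masks would be kept per layer, each task's output head would read only from units that were active when that task was trained, and the gradient masking is what gives the zero-interference property the reviews refer to.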