{"forum": "B1M9FjC5FQ", "submission_url": "https://openreview.net/forum?id=B1M9FjC5FQ", "submission_content": {"title": "Gradient Acceleration in Activation Functions", "abstract": "Dropout has been one of standard approaches to train deep neural networks, and it is known to regularize large models to avoid overfitting. The effect of dropout has been explained by avoiding co-adaptation.\nIn this paper, however, we propose a new explanation of why dropout works and propose a new technique to design better activation functions. First, we show that dropout can be explained as an optimization technique to push the input towards the saturation area of nonlinear activation function by accelerating gradient information flowing even in the saturation area in backpropagation. Based on this explanation, we propose a new technique for activation functions, {\\em gradient acceleration in activation function (GAAF)}, that accelerates gradients to flow even in the saturation area. Then, input to the activation function can climb onto the saturation area which makes the network more robust because the model converges on a flat region. \nExperiment results support our explanation of dropout and confirm that the proposed GAAF technique improves performances with expected properties.", "keywords": ["Gradient Acceleration", "Saturation Areas", "Dropout", "Coadaptation"], "authorids": ["s.hahn@handong.edu", "hchoi@handong.edu"], "authors": ["Sangchul Hahn", "Heeyoul Choi"], "pdf": "/pdf/4cad9da8928bcf932b867fdf25ced5b110de0fc7.pdf", "paperhash": "hahn|gradient_acceleration_in_activation_functions", "_bibtex": "@misc{\nhahn2019gradient,\ntitle={Gradient Acceleration in Activation Functions},\nauthor={Sangchul Hahn and Heeyoul Choi},\nyear={2019},\nurl={https://openreview.net/forum?id=B1M9FjC5FQ},\n}"}, "submission_cdate": 1538087810111, "submission_tcdate": 1538087810111, "submission_tmdate": 1545355379474, "submission_ddate": null, "review_id": ["HJlSV3RF2X", "ryxFeM5qh7", "HkemxceqhQ"], "review_url": ["https://openreview.net/forum?id=B1M9FjC5FQ¬eId=HJlSV3RF2X", "https://openreview.net/forum?id=B1M9FjC5FQ¬eId=ryxFeM5qh7", "https://openreview.net/forum?id=B1M9FjC5FQ¬eId=HkemxceqhQ"], "review_cdate": [1541168173456, 1541214705340, 1541175787316], "review_tcdate": [1541168173456, 1541214705340, 1541175787316], "review_tmdate": [1541715415753, 1541533966736, 1541533966530], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1M9FjC5FQ", "B1M9FjC5FQ", "B1M9FjC5FQ"], "review_content": [{"title": "Interesting idea, but seems less precise than previous work", "review": "This paper offers the argument that dropout works not due to preventing coadaptation, but because it gives more gradient, especially in the saturated region. However, previous works have already characterized how dropout modifies the activation function, and also the gradient in a more precise way than what is proposed in this paper. \n\n## Co-adaptation\nco-adaptation does not seem to mean correlation among the unit activations. 
It is not too surprising that units need more redundancy with dropout, since a highly useful feature might not always be present and thus needs to be replicated elsewhere.\n\nSection 8 of this paper gives a definition of co-adaptation,\nbased on whether the loss is reduced or increased by a simultaneous change in units.\nhttps://arxiv.org/abs/1412.4736\nAnd this work, https://arxiv.org/abs/1602.04484, reached a conclusion similar to yours:\nthat for some notion of co-adaptation, dropout might increase it.\n\n## Gradient acceleration\nIt does not seem reasonable to measure \"gradient information flow\" simply as the norm of the gradient, which is sensitive to scales, and it is not clear if the authors accounted for the scaling factor of dropout in Table 2.\n\nThe proposed resolution, to add the discontinuous floor-based step function in (7), is a very interesting idea backed by good experimental results. However, I think the main effect is in adding noise, since the gradient with respect to this function is not meaningful. The main effect is optimizing with respect to the base function, but adding noise when computing the outputs. Previous work has also looked at how dropout noise modifies the effective activation function (and thus its gradient). This work, http://proceedings.mlr.press/v28/wang13a.html, gives a more precise characterization instead of treating the effect as adding a function with constant gradient multiplied by an envelope. In fact, the actual gradient with dropout does involve the envelope by the chain rule, but the rest is not actually constant as in GAAF. \n", "rating": "3: Clear rejection", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "interesting analysis on dropout", "review": "This paper gives further analysis on dropout and explains why it works, although Hinton et al. already showed some analysis. This paper also introduces gradient acceleration in activation functions (GAAF).\n\nIn Table 4, GAAF is a bit worse than dropout, although GAAF converges fast. But I am not sure whether GAAF is really useful on large datasets, rather than just on a small dataset, e.g., MNIST here. In Table 5, I am not sure whether you compared with dropout or not. Does your base model already include dropout?\n\nIf you want to demonstrate that GAAF is really helpful, I think more experiments and comparisons, especially on larger datasets, should be conducted.\n", "rating": "5: Marginally below acceptance threshold", "confidence": "3: The reviewer is fairly confident that the evaluation is correct"}, {"title": "No proper grounding of the presented argument against the \"avoiding co-adaptation through dropout\" concept. Very weak experiments.", "review": "The authors attempt to propose an alternative explanation for the effect of dropout in a neural network and then present a technique to improve existing activation functions.\n\nSection 3.1 presents an experimental proof of higher co-adaptation in the presence of dropout; in my opinion this is an incorrect experiment, and I request the authors to double-check it. In my experience, using dropout results in sparse representations in the hidden layers, which is the effect of decreased co-adaptation. Also, a single experiment with the MNIST dataset cannot be a proof to reject a theory.\n\nSection 3.2, Table 2 presents a comparison of the average gradient flow through layers during training, where the flow with dropout is higher. 
This is not very surprising, in my opinion: given the variance of the activation of a neuron in the presence of dropout, the network tries to optimize the classification cost while trying to reduce the variance. The experimental details are almost nil.\n\nThe experiments in Section 5 present very weak results: very little or no improvement, and the authors arbitrarily introduce BatchNorm into one of the experiments.", "rating": "2: Strong rejection", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": ["rJxcylyJJV", "BJe4SUy367", "r1x33UJ26m", "rylasBk2pm"], "comment_cdate": [1543593953585, 1542350395745, 1542350515755, 1542350244820], "comment_tcdate": [1543593953585, 1542350395745, 1542350515755, 1542350244820], "comment_tmdate": [1543593953585, 1542350637843, 1542350588314, 1542350525920], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2019/Conference/Paper471/AnonReviewer1", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper471/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper471/Authors", "ICLR.cc/2019/Conference"], ["ICLR.cc/2019/Conference/Paper471/Authors", "ICLR.cc/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "No direct reply to concerns.", "comment": "Since the authors did not address the concerns in my review directly, I choose to stick to the given rating. "}, {"title": "Thank you for the feedback.", "comment": "Thanks for your helpful review.\n\nWe do not want to reject the original explanation of Dropout, but we want to suggest another view of Dropout, and within this view, we suggest a new method that can improve the model\u2019s performance.\n\nFirst of all, we think the sparse representations in hidden layers with Dropout may not be the result of avoiding co-adaptation but the result of better training. A well-optimized model can have sparse representations. We tried to show in Section 3 that the better optimization with Dropout can be explained by increased gradient information flow.\n\nTable 2 proves our explanation of Dropout (the effect of Dropout can be explained by increased gradient flow rather than by avoiding co-adaptation), and increasing the gradient flow is the basic idea of our proposed method.\n\nIn Table 5, GAAF improves the model\u2019s accuracy compared to the base model on all three datasets. Also, we tried to show that, although the effect of traditional Dropout can be eliminated when Batch Normalization (BN) is applied, GAAF optimizes the model further even with BN. This is why we applied BN in this experiment. The accuracies of the BN + GAAF models are higher than those of the BN-only models.\n"}, {"title": "Thanks for your review.", "comment": "Thanks for your helpful review and for introducing other previous works related to ours.\n\nIn Section 2.3, we introduced the noisy activation function, and we analyzed that Dropout\u2019s effect is quite similar to noise injection in the activation function. We also think that our proposed method is quite similar to the noisy activation function, and we expect the same effect. The noisy activation function increases gradient flow in the saturation areas with stochastic noise, while GAAF does so with a deterministic method. 
\n\nFor the scale issue that you mentioned, we think it does not matter, because we used the same model size and configurations for the Dropout model and the base model. \n\nAlso, we think that our analysis of Dropout looks similar to the previous works that you introduced, but our contribution is that, in line with such analysis, we designed a new activation function which can replace the Dropout layer.\n"}, {"title": "Thanks for your review.", "comment": "First of all, we thank you for your interest and helpful review.\n\nIn Table 4, we tried to show that our proposed GAAF can improve the performance of the model as much as Dropout does, while the model converges much faster than with Dropout.\n\nIn Table 5, the base model does not include dropout, because we use Convolutional Neural Networks (CNNs). In Table 5, we show that the effect of traditional Dropout can be reduced with Batch Normalization (BN), but GAAF works even with BN. \n\nThank you for your suggestion; we will apply our method to larger datasets like ImageNet. As the effect of Dropout decreases with larger datasets, the effect of GAAF might also be reduced on larger datasets. In the current submission, however, our experiments confirm that GAAF helps models optimize better, although only on small datasets. \n"}], "comment_replyto": ["HkemxceqhQ", "HkemxceqhQ", "HJlSV3RF2X", "ryxFeM5qh7"], "comment_url": ["https://openreview.net/forum?id=B1M9FjC5FQ&noteId=rJxcylyJJV", "https://openreview.net/forum?id=B1M9FjC5FQ&noteId=BJe4SUy367", "https://openreview.net/forum?id=B1M9FjC5FQ&noteId=r1x33UJ26m", "https://openreview.net/forum?id=B1M9FjC5FQ&noteId=rylasBk2pm"], "meta_review_cdate": 1544825551804, "meta_review_tcdate": 1544825551804, "meta_review_tmdate": 1545354529797, "meta_review_ddate": null, "meta_review_title": "Paper decision", "meta_review_metareview": "Reviewers are in consensus and recommended rejection. Please take the reviewers' comments into consideration to improve your submission should you decide to resubmit.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2019/Conference/Paper471/Area_Chair1"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1M9FjC5FQ&noteId=BkguA5sZxN"], "decision": "Reject"}
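For context, below is a minimal sketch of the GAAF behavior as the abstract and the first review describe it: a discontinuous, floor-based step term is added to the activation in the forward pass, while the backward pass treats that term as contributing a constant gradient so that gradient information keeps flowing in the saturation area. This is an assumption-laden illustration, not the paper's equation (7): the sigmoid base, the constant k, the accel value, and the omission of the envelope function are all choices made here for brevity.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gaaf_forward(x, k=100.0):
    # Floor-based step term: a tiny sawtooth bounded by 1/(2k), so the
    # forward activation value stays close to the plain sigmoid.
    step = (np.floor(k * x) + 0.5) / k - x
    return sigmoid(x) + step

def gaaf_backward(x, grad_out, accel=1.0):
    # The sigmoid's own gradient vanishes in the saturation area ...
    s = sigmoid(x)
    base_grad = s * (1.0 - s)
    # ... so a constant "accelerated" gradient is added for the step term,
    # letting gradient information flow even where the sigmoid saturates.
    # (The paper reportedly also multiplies by an envelope function; that is
    # omitted here.)
    return grad_out * (base_grad + accel)

x = np.array([-6.0, -2.0, 0.0, 2.0, 6.0])
print(gaaf_forward(x))                     # close to sigmoid(x)
print(gaaf_backward(x, np.ones_like(x)))   # non-vanishing gradient at |x| = 6
```

With a large k, the forward outputs are almost unchanged while the backward pass keeps a non-zero gradient in the saturated region, which matches the first reviewer's reading that the method mainly optimizes the base function while perturbing the computed outputs.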