{"forum": "B1gzLaNYvr", "submission_url": "https://openreview.net/forum?id=B1gzLaNYvr", "submission_content": {"title": "TSInsight: A local-global attribution framework for interpretability in time-series data", "authors": ["Shoaib Ahmed Siddiqui", "Dominique Mercier", "Andreas Dengel", "Sheraz Ahmed"], "authorids": ["shoaib_ahmed.siddiqui@dfki.de", "dominique.mercier@dfki.de", "andreas.dengel@dfki.de", "sheraz.ahmed@dfki.de"], "keywords": ["Deep Learning", "Representation Learning", "Convolutional Neural Networks", "Time-Series Analysis", "Feature Importance", "Visualization", "Demystification"], "TL;DR": "We present an attribution technique leveraging sparsity inducing norms to achieve interpretability.", "abstract": "With the rise in employment of deep learning methods in safety-critical scenarios, interpretability is more essential than ever before. Although many different directions regarding interpretability have been explored for visual modalities, time-series data has been neglected with only a handful of methods tested due to their poor intelligibility. We approach the problem of interpretability in a novel way by proposing TSInsight where we attach an auto-encoder with a sparsity-inducing norm on its output to the classifier and fine-tune it based on the gradients from the classifier and a reconstruction penalty. The auto-encoder learns to preserve features that are important for the prediction by the classifier and suppresses the ones that are irrelevant i.e. serves as a feature attribution method to boost interpretability. In other words, we ask the network to only reconstruct parts which are useful for the classifier i.e. are correlated or causal for the prediction. In contrast to most other attribution frameworks, TSInsight is capable of generating both instance-based and model-based explanations. We evaluated TSInsight along with other commonly used attribution methods on a range of different time-series datasets to validate its efficacy. Furthermore, we analyzed the set of properties that TSInsight achieves out of the box including adversarial robustness and output space contraction. The obtained results advocate that TSInsight can be an effective tool for the interpretability of deep time-series models.", "pdf": "/pdf/bca86c726e22ebe6140fda9db57283806b943d19.pdf", "paperhash": "siddiqui|tsinsight_a_localglobal_attribution_framework_for_interpretability_in_timeseries_data", "original_pdf": "/attachment/bca86c726e22ebe6140fda9db57283806b943d19.pdf", "_bibtex": "@misc{\nsiddiqui2020tsinsight,\ntitle={{\\{}TSI{\\}}nsight: A local-global attribution framework for interpretability in time-series data},\nauthor={Shoaib Ahmed Siddiqui and Dominique Mercier and Andreas Dengel and Sheraz Ahmed},\nyear={2020},\nurl={https://openreview.net/forum?id=B1gzLaNYvr}\n}"}, "submission_cdate": 1569439049706, "submission_tcdate": 1569439049706, "submission_tmdate": 1577168282527, "submission_ddate": null, "review_id": ["HkgYPG-3YH", "SJl2U01TFH", "H1lLPJJ0KS"], "review_url": ["https://openreview.net/forum?id=B1gzLaNYvr¬eId=HkgYPG-3YH", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=SJl2U01TFH", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=H1lLPJJ0KS"], "review_cdate": [1571717729018, 1571778132012, 1571839837628], "review_tcdate": [1571717729018, 1571778132012, 1571839837628], "review_tmdate": [1574255580509, 1572972581182, 1572972581139], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["ICLR.cc/2020/Conference/Paper551/AnonReviewer1"], ["ICLR.cc/2020/Conference/Paper551/AnonReviewer3"], ["ICLR.cc/2020/Conference/Paper551/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["B1gzLaNYvr", "B1gzLaNYvr", "B1gzLaNYvr"], "review_content": [{"experience_assessment": "I have read many papers in this area.", "rating": "3: Weak Reject", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "review_assessment:_thoroughness_in_paper_reading": "I made a quick assessment of this paper.", "title": "Official Blind Review #1", "review": "In this paper, the authors proposed an algorithm for identifying important inputs for the time-series data as an explanation of the model's output.\nGiven a fixed model, the authors proposed to put an auto-encoder to the input of the model, so that the input data is first transformed through the auto-encoder, and the transformed input is then fed to the model.\nIn the proposed algorithm, the auto-encoder is trained so that (i) the prediction loss on the model's output to be small, (ii) the reconstruction loss of the auto-encoder to be small, and (iii) the transformed input of the auto-encoder to be sufficiently sparse (i.e. it has many zeros).\n\nI am not very sure if the proposed algorithm can generate reasonable explanation, for the following two reasons.\n\nFirst, the auto-encoder transforms the input into sparse, which can completely differ from any of the \"natural\" data, as shown in Fig1(b).\nI am not very sure whether studying the performance of the model for such an \"outlying\" input is informative.\n\nSecond, it seems the authors implicitly assumed that zero input is irrelevant to the output of the model.\nHowever, zero input can have a certain meaning to the model, and thus naively introducing the sparsity to the model input may bias the model's output.\n\nI think the paper lacks deeper considerations on the use of sparsity.\nThus, to me, the soundness of the proposed approach is not very clear to me.\n\n\n### Updated after author response ###\nThrough the communication with the author, I found that the author seems to be confusing the two different approaches *sparsifying the feature importance* and *sparsifying the input data*.\nThis paper considers the latter approach, which can be biased as I mentioned above.\nThe authors tried to justify the approach by raising related studies, which is however the mix of the two different approaches.\nI expect the author to clarify the distinction between the two approaches, and provide strong evidences that the sparsity is not a harm (if there is any).", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory."}, {"experience_assessment": "I have read many papers in this area.", "rating": "1: Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #3", "review_assessment:_checking_correctness_of_derivations_and_theory": "I assessed the sensibility of the derivations and theory.", "review": "The paper presents a new approach for improving the interpretability of deep learning methods used for time series. The is mainly concerned with classification tasks for time series. First, the classifier is learned in a usual way. Subsequently, a sparse auto-encoder is used that encodes the last layer of the classifier. For training the auto-encoder the classifier is fixed and there is a decoding loss as well as a sparsity loss. The sparse encoding of the last layer is supposed to increase the interpretability of the classification as it indicates which features are important for the classification.\n\nIn general, I think this paper needs to be considerably improved in order to justify a publication. I am not convinced about the interpretability of the sparse extracted feature vector. It is not clear to me why this should be more interpretable then other methods. The paper contains many plots where the compare to other attributes methods, but it is not clear why the presented results should be any better as other methods (for example Fig 3). The paper is missing also a lot of motivation, the results are not well explained (e.g. Fig 3) and it needs to be improved in terms of writing. Equation 1 is not motivated (which is the main part of the paper) and it is not clear how Figure 2a has been generated and why this represented should be \" an interesting one, doesn\u2019t help with the interpretability of the model\". The authors have to improve the motivation part as well as the discussion of the results.\n\nMore comments below:\n- The method seems to suffer from a severe parameter tuning problem, which makes it hard to use in practise.\n- It is unclear to me why the discriminator is fixed during training the encoder and decoder. Shouldnt it improve performance to also adapt the discriminator to the new representation.\n- Why can we not just add a layer with a sparsity constraint one layer before the \"encoded\" layer such that we have the same architecture and optimize that end to end? At least comparison to such an approach would be needed to justify something more complex. \n- The plots need to be better explained. While the comparisons seems to be exhaustive, it is already too many plots and it is very easy to get lost. Also, the quality of the plots need to be improved (e.g. font size)\n\n"}, {"experience_assessment": "I have read many papers in this area.", "rating": "1: Reject", "review_assessment:_thoroughness_in_paper_reading": "I read the paper at least twice and used my best judgement in assessing the paper.", "review_assessment:_checking_correctness_of_experiments": "I assessed the sensibility of the experiments.", "title": "Official Blind Review #2", "review_assessment:_checking_correctness_of_derivations_and_theory": "I carefully checked the derivations and theory.", "review": "The aim of this work is to improve interpretability in time series prediction. To do so, they propose to use a relatively post-hoc procedure which learns a sparse representation informed by gradients of the prediction objective under a trained model. In particular, given a trained next-step classifier, they propose to train a sparse autoencoder with a combined objective of reconstruction and classification performance (while keeping the classifier fixed), so as to expose which features are useful for time series prediction. Sparsity, and sparse auto-encoders, have been widely used for the end of interpretability. In this sense, the crux of the approach is very well motivated by the literature.\n\n* Pros\n\t* The work provides extensive comparison to a battery of other methods for model prediction interpretation. \n\t* The method is conceptually simple and is easy to implement. It is also general and can be applied to any prediction model (though this is more property of the sparse auto-encoder).\n\t* Despite its simplicity and generality, the method is shown to perform well on average, though it sometimes performs significantly worse than simple baselines.\n\n* Cons\n\t* The method itself is not explained very well. The authors use language such as \u201cattach the auto encoder to the classifier\u201d, which is a bit vague and could mean a number of things. It would be helpful if they provided either a formal definition of the model or a architectural diagram.\n\t* Though the quantitative evaluation is not entirely flattering, the authors should not be punished for providing experiments on many datasets. That said, if their contribution is then rather one of technical novelty, i.e. a sparse-autoencoder-based framework for time series interpretability, it would be helpful for them to \n\t\t* More formally define their framework / class of solutions\n\t\t* Provide a more in depth study of possible variants of the method (this is elaborated on in the \u201cQuestions\u201d section)\n\t\t* More strongly argue the novelty of their method\n\t* The authors provide a discussion on automatic hyper-parameter tuning that seems a bit out of place in the main method section, since it is not mentioned much thereafter and is claimed to not bring benefits.\n\t* The qualitative evaluation made by authors is rather vague:\n\t\t* \"Alongside the numbers, TSInsight was also able to produce the most plausible explanations\u201d\n\t\n* Additional Remarks\n\t* Why not train things jointly? Does this have to be done post-hoc? The authors state that they \u201cshould expect a drop in performance since the input distribution changes\u201d -> so why not at least try fine-tune and study the effect of training the classifier with sparse representations end-to-end? Exploring whether things can be trained jointly, or in other configurations, might allow the authors to frame their work as more of a general technical contribution.\n\t* It would be nice to have the simple baseline of a classifier with a sparsity constraint, i.e. \n\t\t* I.e. ablate the reconstruction loss\n\nI\u2019ve given a reject because 1) the explanation of the method is not very precise and could be greatly improved, 2) the quantitative evaluation is not sufficiently convincing, given the lack of technical novelty), and 3) the qualitative evaluation is hand-wavy. "}], "comment_id": ["r1xe7HI3oB", "r1gsCFajoH", "SyxBHBPssH", "rkgPcVUoir", "HyxTaX8jsS", "ryg2UQIssr", "SJg4jGUisr"], "comment_cdate": [1573836056436, 1573800403006, 1573774653091, 1573770382659, 1573770180799, 1573770067727, 1573769884073], "comment_tcdate": [1573836056436, 1573800403006, 1573774653091, 1573770382659, 1573770180799, 1573770067727, 1573769884073], "comment_tmdate": [1573836056436, 1573800403006, 1573774653091, 1573770382659, 1573770180799, 1573770067727, 1573769884073], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["ICLR.cc/2020/Conference/Paper551/AnonReviewer2", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper551/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper551/AnonReviewer1", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper551/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper551/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper551/Authors", "ICLR.cc/2020/Conference"], ["ICLR.cc/2020/Conference/Paper551/Authors", "ICLR.cc/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "response from reviewer", "comment": "1. ** \"we are quite disappointed with the reviewer\u2019s comment on the lack of novelty and unconvincing quantitative evaluation\" **\nJust to be clear, in my original review, I acknowledged the positive aspect of the thoroughness of your quantitative experiments. I stand by my comment that the proposed method lacks novelty.\n\nThe sparse autoencoder is not a new model. This work applies it to the domain of interpretability. Therefore, there is limited technical novelty. That said, if your work actually proposed a general framework for using sparse autoencoders for interpretability, there could potentially be novelty in terms of formulation; however, your work completely lacks any sort of formal presentation of a formulation (not to mention the model itself). Therefore, any novelty in the formulation proposed is *not properly communicated*. \n\n\n2. ** \"Automated hyperparameter tuning is an important avenue for this work. However, we weren\u2019t able to obtain any interesting results through it. It is mainly intended to provide a future direction for the work.\" **\n\nI would argue that it then should not appear in the main method section. Again, this may be a problem of writing and communication, as it muddles the presentation. Spend the extra space you have in the method section to actually define the formulation and method, in a way that actually conveys the novelty of the formulation you believe has merits.\n\n3. ** \"We didn\u2019t intend to provide any accompanying text for that. The qualitative evaluation was based on the plots included for the user\u2019s perusal as common in interpretability literature.\u201d.\n\nPlausibility of an explanation is not something that can be derived from perusing plots. Either remove the comment about qualitative results, or justify them more rigorously.\n\n4. ** \"We already show in the paper that removing the reconstruction loss destroys the method\u2019s utility as an interpretability scheme. \"\n\nSparsity has long been used as a way to enhance interpretability of regression models [1]. I was asking for the simple baseline, not necessarily your model without the reconstruction loss.\n\n[1] https://en.wikipedia.org/wiki/Lasso_(statistics)\n\n\n----\n\nLack of novelty is not, by itself, sufficient ground for rejection. However, I feel that the way the ideas of this paper are currently presented are suboptimal; the formulation is not explicitly presented, the model is not explicitly defined. For me, the weakness of this submission is not the decent empirical performance of the method; rather, it is that there does not seem to be much else.\n\n"}, {"title": "Response regarding deep consideration of sparsity", "comment": "Thanks for the quick response in elaborating on your concern. We really appreciate it. Let us try to clarify.\n\nQ:\n[Sparsity may not be essential.]\nFor example, several attribution methods for images are actually dense (e.g. simple input gradient and its variations). A popular method SHAP also outputs dense attribution. I am not sure why and how the authors concluded that \"sparsity is essential\".\n\nR:\nAttribution can be dense if required, however, we are not interested in the densest attribution, but rather on the most sparse attribution that still retains the prediction [1] [2] [3]. Otherwise, a trivial solution for dense attribution is just to predict the whole image to be responsible for the prediction. This is certainly correct but not useful. Therefore, we still believe that for human understanding, attributing the prediction to the smallest region possible i.e. jotting it down to the root cause is important. That is usually termed as the complexity of the explanation, and our sparsity-based framework focuses on finding the explanation with the least complexity [3].\n\n[1] Fong, R., Patrick, M. and Vedaldi, A., 2019. Understanding Deep Networks via Extremal Perturbations and Smooth Masks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2950-2958).\n\n[2] Fong, R.C. and Vedaldi, A., 2017. Interpretable explanations of black boxes by meaningful perturbation. In Proceedings of the IEEE International Conference on Computer Vision (pp. 3429-3437).\n\n[3] Ribeiro, M.T., Singh, S. and Guestrin, C., 2016, August. Why should i trust you?: Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD international conference on knowledge discovery and data mining (pp. 1135-1144). ACM.\n\nQ:\n[Sparsity can be harmful.]\nAs the authors mentioned in the reply, the selection of the baseline is a crucial issue in some of the attribution methods (including the current paper). I do not believe it is a reasonable justification saying \"because prior studies took the same approach and so do we\". If there is a potential bias, it should be clarified in the paper, and expectedly how we can avoid such a bias.\n\nR:\nHaving a baseline input is extremely important in interpretability and attention literature since it is important to denote the absence of a feature/input. Among all the alternates, zero input is the most plausible choice [1] [2] [3]. Sensors also use zeros to denote the absence of value. We are not sure how the reviewer thinks this can be improved. This itself can make up a seminal paper which can impact a lot of domains if a better solution exists, however, it\u2019s hard for us to believe so. On the other hand, we think that the reviewer is overestimating the impact that the baseline input has on the generated explanation. We just want to highlight the most discriminative regions of the input discarding the rest. There are so many things wrong with the current generation of models, therefore, we believe the selection of the baseline is among the least of the concerns.\n\nAs far as the references by the reviewer are concerned, we don\u2019t have a static output problem since our model is optimized based on the classifier, so a sanity check doesn\u2019t add any information to what is already encoded in the formulation [4]. Everything is a natural progression of the previous work in science. So we agree that the current generation of interpretability methods are not perfect, but so are the deep learning models themselves. We still have a very long way to go in that regard. We can\u2019t have a silver bullet that can solve all the problems that are there in interpretability. We highlight this fact from the results in our paper.\n\n[1] Woo, S., Park, J., Lee, J.Y. and So Kweon, I., 2018. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19).\n\n[2] Sundararajan, M., Taly, A. and Yan, Q., 2017, August. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 3319-3328). JMLR. org.\n\n[3] Wojna, Z., Gorban, A.N., Lee, D.S., Murphy, K., Yu, Q., Li, Y. and Ibarz, J., 2017, November. Attention-based extraction of structured information from street view imagery. In 2017 14th IAPR International Conference on Document Analysis and Recognition (ICDAR) (Vol. 1, pp. 844-850). IEEE.\n\n[4] Adebayo, J., Gilmer, J., Muelly, M., Goodfellow, I., Hardt, M. and Kim, B., 2018. Sanity checks for saliency maps. In Advances in Neural Information Processing Systems (pp. 9505-9515).\n"}, {"title": "deeper consideration", "comment": "I would like to thank the authors for the response.\n\nI raised two concerns on the use of sparsity, because I believe that the sparsity is not alway essential and can be harmful in some cases.\n\n[Sparsity may not be essential.]\nFor example, several attribution methods for images are actually dense (e.g. simple input gradient and its variations). A popular method SHAP also outputs dense attribution. I am not sure why and how the authors concluded that \"sparsity is essential\".\n\n[Sparsity can be harmful.]\nAs the authors mentioned in the reply, the selection of the baseline is a crucial issue in some of the attribution methods (including the current paper). I do not believe it is a reasonable justification saying \"because prior studies took the same approach and so do we\". If there is a potential bias, it should be clarified in the paper, and expectedly how we can avoid such a bias.\n\nAs I raised above, sparsity may not be always essential and it can have potential drawbacks. The current paper completely misses the discussion on the justification of the use of sparsity. This is the reason why I pointed out the paper lacks \"deeper consideration\".\n\nIntepretability methods are for helping people understand models and induce the trust of the users. If people use intepretability methods without noticing potential risks (e.g. it is biased towards zero inputs), people may be mislead by the methods. Indeed, the reliability of the interpretability methods is one of the recent concerns in the field [Ref1-5]. To not mislead people and to make interpretability methods reliable, I believe the paper should clarify both pros and cons (especially what is the potential risk), rather than just saying our method is nice.\n\n[Ref1] A Theoretical Explanation for Perplexing Behaviors of Backpropagation-based Visualizations\n[Ref2] Sanity Checks for Saliency Maps\n[Ref3] Interpretation of Neural Networks is Fragile\n[Ref4] Fairwashing: the risk of rationalization\n[Ref5] Explanations can be manipulated and geometry is to blame"}, {"title": "Response to area chair", "comment": "We are quite disappointed with the allocation of reviewers. None of the reviewers that were assigned to us is an active researcher in the area of interpretability, so we find it very unfair from the ICLR team to ask them for review with their knowledge stemming from reading a few papers in this direction. We believe ICLR should improve its review process in the future to make sure only active researchers in a particular area are asked to assess the quality of the submitted work. We understand that the number of reviewers is limited. So rather than asking anyone to review, the conference should rather impose a cap on the number of submissions in order to ensure that each submission is given a proper assessment.\n\nAll of the reviewers insisted on optimizing the model end-to-end. However, we again emphasize that TSInsight is targeted towards explaining pre-existing models that already excel at the task they are trying to perform. We just intend to explain the decisions made by these networks, rather than training a network from scratch which is itself explainable.\n\nAs far as the results are concerned, there is no silver bullet in research i.e. \u201cno free lunch theorem\u201d. Therefore, we specifically highlighted this fact from our results by including a very diverse range of datasets regardless of their performance on any particular method. This indicated that although our method worked the best on average, the problem of interpretability still has a very long way to go. This complete picture is intentionally dropped from most of the papers in order to minimize the risk of the reviewer\u2019s objection."}, {"title": "Response to Reviewer # 01", "comment": "We would like to first thank the reviewer for spending his time on the perusal of our paper.\n\nQ:\nFirst, the auto-encoder transforms the input into sparse, which can completely differ from any of the \"natural\" data, as shown in Fig1(b).\nI am not very sure whether studying the performance of the model for such an \"outlying\" input is informative.\n\nR:\nThe attribution methods have to attribute the importance to some particular locations while discarding the rest. So it is always assumed to be sparse. The sparsity that we induce is not a random one, but is very focused from the perspective of the classifier. In order to ensure that we don\u2019t deviate from the natural distribution, we introduced reconstruction loss into the picture. Therefore, we think that the problem is perfectly justified and entirely conforms to the prior work in this direction.\n\nQ:\nSecond, it seems the authors implicitly assumed that zero input is irrelevant to the output of the model.\nHowever, zero input can have a certain meaning to the model, and thus naively introducing the sparsity to the model input may bias the model's output.\n\nR:\nWe agree with the reviewer that the zero input can have certain meaning. Since there is no reliable test to quantify interpretability, this is very common in the literature [1] [2]. Therefore, this biasness exists in almost all of the methods that require a baseline input.\n\n[1] Fong, R., Patrick, M. and Vedaldi, A., 2019. Understanding Deep Networks via Extremal Perturbations and Smooth Masks. In Proceedings of the IEEE International Conference on Computer Vision (pp. 2950-2958).\n\n[2] Sundararajan, M., Taly, A. and Yan, Q., 2017, August. Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 3319-3328). JMLR. org.\n\nQ:\nI think the paper lacks deeper considerations on the use of sparsity. Thus, to me, the soundness of the proposed approach is not very clear to me.\n\nR:\nThe reviewer should point out concrete instances where information is missing or incorrect so that we can clarify the confusion. \u201cI think it lack deeper considerations\u201d is not a well-defined argument which the authors can nullify."}, {"title": "Response to Reviewer # 03", "comment": "We would like to first thank the reviewer for spending his time on the perusal of our paper.\n\nQ:\nThe method seems to suffer from a severe parameter tuning problem, which makes it hard to use in practise.\n\nR:\nWe entirely disagree with the reviewer on this since this claim is totally unbacked by the reviewer. The only reason to opt for a range of different datasets is to show that the method is generic and applicable to a wide range of different datasets. Almost all of the interpretability methods with an optimization scheme rely on hyperparameters.\n\nQ1:\nIt is unclear to me why the discriminator is fixed during training the encoder and decoder. Shouldn't it improve performance to also adapt the discriminator to the new representation.\n\nQ2:\nWhy can we not just add a layer with a sparsity constraint one layer before the \"encoded\" layer such that we have the same architecture and optimize that end to end? At least comparison to such an approach would be needed to justify something more complex. \n\nR:\nThere are two major streams of research on interpretability which we discuss in the paper. The first stream focuses on explaining the decisions of pre-trained networks while the second one focuses on making the network itself interpretable. TSInsight is particularly focused on the first steam. Therefore, TSInsight takes a pre-trained model and tries to explain the decisions made by the network using an auto-encoder with sparsity inducing norm on top of it. The classifier remains intact since we just want to explain the decisions made by the classifier rather than coming up with an architecture that is itself explainable. However, it can be easily extended for the other case as the reviewer mentioned. But it wasn\u2019t a focus of the current work and can be explored in detail in the future.\n\nQ:\nThe plots need to be better explained. While the comparisons seems to be exhaustive, it is already too many plots and it is very easy to get lost. Also, the quality of the plots need to be improved (e.g. font size)\n\nR:\nWe agree that the quality of the plots as well as the accompanying text can be improved. We will work on it to make everything clear. We also included high-resolution versions of these plots in the supplementary material. The reviewer can refer to them for now in case required."}, {"title": "Response to Reviewer # 02", "comment": "We would like to first thank the reviewer for spending his time on the perusal of our paper.\n\nQ:\nThe method itself is not explained very well. The authors use language such as \u201cattach the auto encoder to the classifier\u201d, which is a bit vague and could mean a number of things. It would be helpful if they provided either a formal definition of the model or an architectural diagram.\n\nR:\nWe tried to elaborate on what we meant by this throughout the paper. However, as the reviewer highlighted, a diagram is quite useful to avoid confusion. We have such a diagram for TSInsight which we moved to the supplementary material due to space constraints. We can adjust it back to the main text if the reviewers find it useful for the overall idea.\n\nQ:\nThough the quantitative evaluation is not entirely flattering, the authors should not be punished for providing experiments on many datasets. That said, if their contribution is then rather one of technical novelty, i.e. a sparse-autoencoder-based framework for time series interpretability, it would be helpful for them to \u2026\n\nR:\nWe spent more space than any other paper to clearly outline the previous work in this direction and how TSInsight is technically a novel solution to this problem. It is very easy to misguide the reviewer by just adding flattering cases. It is quite common in interpretability literature to selectively pick datasets where the method shines and avoid comparison against strong baselines. However, the reason for providing a detailed comparative study on a range of different datasets and almost all the commonly employed interpretability techniques is to show that although TSInsight provides the most plausible explanations on average, there is still a very big room for improvement. \n\nQ:\nThe authors provide a discussion on automatic hyper-parameter tuning that seems a bit out of place in the main method section, since it is not mentioned much thereafter and is claimed to not bring benefits.\n\nR:\nAutomated hyperparameter tuning is an important avenue for this work. However, we weren\u2019t able to obtain any interesting results through it. It is mainly intended to provide a future direction for the work.\n\nQ:\nThe qualitative evaluation made by authors is rather vague, \"Alongside the numbers, TSInsight was also able to produce the most plausible explanations\u201d.\n\nR:\nWe didn\u2019t intend to provide any accompanying text for that. The qualitative evaluation was based on the plots included for the user\u2019s perusal as common in interpretability literature.\n\nQ:\nWhy not train things jointly? Does this have to be done post-hoc? The authors state that they \u201cshould expect a drop in performance since the input distribution changes\u201d -> so why not at least try fine-tune and study the effect of training the classifier with sparse representations end-to-end? Exploring whether things can be trained jointly, or in other configurations, might allow the authors to frame their work as more of a general technical contribution.\n\nR:\nThere are two major streams of research on interpretability which we discuss in the paper. The first stream focuses on explaining the decisions of pre-trained networks while the second one focuses on making the network itself interpretable. TSInsight is particularly focused on the first steam. Therefore, TSInsight takes a pre-trained model and tries to explain the decisions made by the network using an auto-encoder with sparsity inducing norm on top of it. The classifier remains intact since we just want to explain the decisions made by the classifier rather than coming up with an architecture that is itself explainable. However, it can be easily extended for the other case as the reviewer mentioned. But it wasn\u2019t a focus of the current work and can be explored in detail in the future.\n\nQ:\nIt would be nice to have the simple baseline of a classifier with a sparsity constraint, i.e. ablate the reconstruction loss\n\nR:\nWe already show in the paper that removing the reconstruction loss destroys the method\u2019s utility as an interpretability scheme. Since the aim of this work is to achieve interpretability, it doesn\u2019t make sense to consider that as a baseline since the model won\u2019t be providing attribution information.\n\nOverall:\nWe agree that there is great room for improvement in terms of portraying the idea. However, we are quite disappointed with the reviewer\u2019s comment on the lack of novelty and unconvincing quantitative evaluation. We compare against all the recent works in this direction and show that our method is the best on average. However, we would like to make it clear that there is no one method in the literature until this point that can provide reasonable explanations on any provided dataset. Since interpretability is itself a very tough domain to verify, we believe that our work provides one of the best coverage regarding the different attribution methods and their performance on a wide range of different datasets.\n"}], "comment_replyto": ["SJg4jGUisr", "SyxBHBPssH", "HyxTaX8jsS", "B1gzLaNYvr", "HkgYPG-3YH", "SJl2U01TFH", "H1lLPJJ0KS"], "comment_url": ["https://openreview.net/forum?id=B1gzLaNYvr¬eId=r1xe7HI3oB", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=r1gsCFajoH", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=SyxBHBPssH", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=rkgPcVUoir", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=HyxTaX8jsS", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=ryg2UQIssr", "https://openreview.net/forum?id=B1gzLaNYvr¬eId=SJg4jGUisr"], "meta_review_cdate": 1576798699529, "meta_review_tcdate": 1576798699529, "meta_review_tmdate": 1576800936328, "meta_review_ddate ": null, "meta_review_title": "Paper Decision", "meta_review_metareview": "Main content:\n\nBlind review #2 summarizes it well:\n\nThe aim of this work is to improve interpretability in time series prediction. To do so, they propose to use a relatively post-hoc procedure which learns a sparse representation informed by gradients of the prediction objective under a trained model. In particular, given a trained next-step classifier, they propose to train a sparse autoencoder with a combined objective of reconstruction and classification performance (while keeping the classifier fixed), so as to expose which features are useful for time series prediction. Sparsity, and sparse auto-encoders, have been widely used for the end of interpretability. In this sense, the crux of the approach is very well motivated by the literature.\n\n--\n\nDiscussion:\n\nAll reviews had difficulties understanding the significance and novelty, which appears to have in large part arisen from the original submission not having sufficiently contextualized the motivation and strengths of the approach (especially for readers not already specialized in this exact subarea).\n\n--\n\nRecommendation and justification:\n\nThe reviews are uniformly low, probably due to the above factors, and while the authors' revisions during the rebuttal period have improved the objections, there are so many strong submissions that it would be difficult to justify override the very low reviewer scores.", "meta_review_readers": ["everyone"], "meta_review_writers": ["ICLR.cc/2020/Conference/Program_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=B1gzLaNYvr¬eId=l2jFTePF3g"], "decision": "Reject"}