{"forum": "BJxTPziyeE", "submission_url": "https://openreview.net/forum?id=BJxTPziyeE", "submission_content": {"title": "Neural Processes Mixed-Effect Models for Deep Normative Modeling of Clinical Neuroimaging Data", "authors": ["Seyed Mostafa Kia", "Andre F. Marquand"], "authorids": ["s.kia@donders.ru.nl", "a.marquand@donders.ru.nl"], "keywords": ["Neural Processes", "Mixed-Effect Modeling", "Deep Learning", "Clinical Neuroimaging"], "TL;DR": "An application of neural processes in precision psychiatry.", "abstract": "Normative modeling has recently been introduced as a promising approach for modeling variation of neuroimaging measures across individuals in order to derive biomarkers of psychiatric disorders. Current implementations rely on Gaussian process regression, which provides coherent estimates of uncertainty needed for the method but also suffers from drawbacks including poor scaling to large datasets and a reliance on fixed parametric kernels. In this paper, we propose a deep normative modeling framework based on neural processes (NPs) to solve these problems. To achieve this, we define a stochastic process formulation for mixed-effect models and show how NPs can be adopted for spatially structured mixed-effect modeling of neuroimaging data. This enables us to learn optimal feature representations and covariance structure for the random-effect and noise via global latent variables. In this scheme, predictive uncertainty can be approximated by sampling from the distribution of these global latent variables. 
On a publicly available clinical fMRI dataset, we compare the novelty detection performance of multivariate normative models estimated by the proposed NP approach to a baseline multi-task Gaussian process regression approach and show substantial improvements for certain diagnostic problems.", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "pdf": "/pdf/a1684ec1e613130e9ee3d1f7bc85bdcd1e1903fe.pdf", "paperhash": "kia|neural_processes_mixedeffect_models_for_deep_normative_modeling_of_clinical_neuroimaging_data", "_bibtex": "@inproceedings{kia:MIDLFull2019a,\ntitle={Neural Processes Mixed-Effect Models for Deep Normative Modeling of Clinical Neuroimaging Data},\nauthor={Kia, Seyed Mostafa and Marquand, Andre F.},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=BJxTPziyeE},\nabstract={Normative modeling has recently been introduced as a promising approach for modeling variation of neuroimaging measures across individuals in order to derive biomarkers of psychiatric disorders. Current implementations rely on Gaussian process regression, which provides coherent estimates of uncertainty needed for the method but also suffers from drawbacks including poor scaling to large datasets and a reliance on fixed parametric kernels. In this paper, we propose a deep normative modeling framework based on neural processes (NPs) to solve these problems. To achieve this, we define a stochastic process formulation for mixed-effect models and show how NPs can be adopted for spatially structured mixed-effect modeling of neuroimaging data. This enables us to learn optimal feature representations and covariance structure for the random-effect and noise via global latent variables. 
In this scheme, predictive uncertainty can be approximated by sampling from the distribution of these global latent variables. On a publicly available clinical fMRI dataset, we compare the novelty detection performance of multivariate normative models estimated by the proposed NP approach to a baseline multi-task Gaussian process regression approach and show substantial improvements for certain diagnostic problems.},\n}"}, "submission_cdate": 1544692325110, "submission_tcdate": 1544692325110, "submission_tmdate": 1561399861220, "submission_ddate": null, "review_id": ["rJeZ3x4TQV", "SklSq6eIXE", "HygasK_pQ4"], "review_url": ["https://openreview.net/forum?id=BJxTPziyeE&noteId=rJeZ3x4TQV", "https://openreview.net/forum?id=BJxTPziyeE&noteId=SklSq6eIXE", "https://openreview.net/forum?id=BJxTPziyeE&noteId=HygasK_pQ4"], "review_cdate": [1548726440568, 1548254604640, 1548745125166], "review_tcdate": [1548726440568, 1548254604640, 1548745125166], "review_tmdate": [1549296744502, 1548856721115, 1548856685453], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper40/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper40/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper40/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["BJxTPziyeE", "BJxTPziyeE", "BJxTPziyeE"], "review_content": [{"pros": "The work proposes a deep normative modeling framework based on Neural Processes to model variation of neuroimaging measures across individuals, with the goal of deriving biomarkers of psychiatric disorders. 
The proposal is an alternative to the use of Gaussian processes, which are very computationally expensive and rely on parametric kernels.\n\n- The work seems very interesting, with potential for clinical data.\n- The network architecture is explained well and related to the mathematical modelling (section 2.3).\n- Different diseases are tested for novelty detection.\n- Results seem to be much better when dealing with ADHD data, and they even localize the region responsible for it in Figure 4. It would be interesting to see the results of Figure 4 with the sMT-GPTR method too.\n- The data and the code are publicly available.", "cons": "- The paper could benefit from clearer writing and lighter explanations. \n- Sections 2.1 and 2.2 are difficult to follow.\n- One of the advantages mentioned for the use of NPs over GPs is computational tractability. How do they compare in terms of computational cost? How does this cost change when M changes? \n- How the data analysis is done is not clear to me. The number of subjects used for training is very low compared to the amount you have. Why not use more? 
This should be clarified in the paper.\n", "rating": "3: accept", "confidence": "1: The reviewer's evaluation is an educated guess"}, {"pros": "0) Summary\n The manuscript proposes to use neural processes for normative modeling of fMRI data in order to perform classification of healthy subjects and patients with schizophrenia, attention deficit hyperactivity disorder and bipolar disorder.\n1) Quality\n The manuscript presents a conceptually simple idea as it proposes to replace a forward regression model by a different one.\n2) Clarity\n The paper is mostly well written and the notation is clear.\n3) Originality\n The use of neural processes for normative modeling of fMRI data seems not to have been explored before.\n4) Significance\n The model could potentially be a valuable tool.\n5) Reproducibility\n As the experimental evaluation is based on a public dataset available from OpenNEURO and there is some code available for the neural process, the results should be reproducible in principle.", "cons": "1) Quality\n One strange thing is the numbers reported in Figure 3 as compared to Figure 1 in [1]. Why does the sMT-GPTR(n,m) class perform much better here? Values of AUC=0.8 on SCHZ, AUC=0.7 for ADHD and AUC=0.85 for sMT-GPTR(5,3) are way better than the values reported in the manuscript. Also sMT-GPTR(5,3) does better than sMT-GPTR(10,5) there, which is not the case in the manuscript. Please explain!\n The relative merit of using a neural process versus a Gaussian process (GP) remains only partly explored as the claim that GPs scale poorly is not substantiated by runtime experiments. In particular given prior work on scaling up GPs e.g. 
[2].\n The manuscript does not discuss whether or not the proposed methodology reveals interesting/relevant spatial patterns.\n2) Clarity\n There are a couple of typos.\n - Section 2.2: \"In our application, in order to\"\n - Section 2.2: \"parametrized on an encoder\"\n - Section 2.2: \"In fact, in this setting\"\n - Section 2.3, Normative modeling: \"let $\\mathcal{Y}^* ...$ to represent\"\n - Figure 2, caption: \"3d-covolution\" -> twice\n - Section 5: \"dropout technique in order to\"\n What is an \"amortized variational inference regime\" as mentioned in Section 2.2?\n The description of the stochastic process formalism in Section 2.2 seems like overkill here. In particular, the statement about the number of subjects N going to infinity needs to be translated into the setting of N=250 where the actual model operates.\n3) Originality\n4) Significance\n5) Reproducibility\n The code for the comparison in Figure 3 (i.e. columns 1+2) is not made public.\n\n[1] Kia et al., Scalable Multi-Task Gaussian Process Tensor Regression for Normative Modeling of Structured Variation in Neuroimaging Data, https://arxiv.org/abs/1808.00036\n[2] Kia et al., Normative Modeling of Neuroimaging Data using Scalable Multi-Task Gaussian Processes, https://arxiv.org/abs/1806.01047", "rating": "2: reject", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "\t- The paper introduces some interesting ideas of combining deep models (often used in medical image analysis nowadays) with GPs in an application area that can use more focus.\n - The clinical applicability and joint modelling between different domains (even just tackling this area is a plus) is nice\n\t- The mathematical development, although very dense (see below), is mostly well written and well defined\n\t- I think that by addressing the comments below (and perhaps the other reviews'), the paper can be presented at a future conference.\n\t- the figures and architecture are fairly clear, and 
most of the prose text is well written.\n", "cons": "\t- The paper essentially builds on two frameworks - Normative models (the authors' previous work) and Neural Processes (2018). The authors do not really spend time giving an overview of these models, and neither is widely known enough that they should assume the reader is familiar with them. It makes for a very difficult read. I tried to learn more by looking at the previous papers for these two frameworks for the purposes of this review, but these should be summarized in the current paper.\n\t- The mathematical development is dense and perhaps unnecessarily generalized (e.g. 2.2 development) -- while generality is certainly nice, in terms of a fit with MIDL it feels like some more intuition could have been developed alongside the technical development, along with a clearer focus on a finite neuroimaging dataset. \n\t- It seems like the authors build on NPs, which seem to overlap with latent variable models (VAEs, etc), but these are not described or cited. It seems like an entire field is omitted, at least from discussion. \n\t- Building on the previous models, there are several works on deep latent models with GP priors that seem relevant, e.g. Tran ICLR 2016, Casale NeurIPS 2018 (earlier on arXiv), and several others. Some of these are recent, so they shouldn't preclude the authors presenting this work, but some citations and discussion should be included, especially because the technical contribution seems important to the authors (rather than extreme/novel results)\n\t- There are several concepts introduced without clear explanation. Novelty detection (in this setting), GEVD, etc. are all introduced and important in the results but not really well described.\n\t- The results are unfortunately not sufficiently convincing, with comparable behaviour to the authors' previous work. 
This is okay if we gain some new insight through a new method, but due to the aspects mentioned above, this is hard to obtain in this particular paper.\n", "rating": "2: reject", "confidence": "1: The reviewer's evaluation is an educated guess"}], "comment_id": ["SJxYGUT-E4", "Skx_Qpor4E", "rJg_cgkLNE", "S1eJiHNjNE", "BkxFQOzJ4E", "S1eSQJLe4E"], "comment_cdate": [1549026832938, 1549282591804, 1549295759863, 1549645206926, 1548851233361, 1548930844617], "comment_tcdate": [1549026832938, 1549282591804, 1549295759863, 1549645206926, 1548851233361, 1548930844617], "comment_tmdate": [1555946045257, 1555946036089, 1555946034105, 1555946003584, 1555945958713, 1555945958460], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper40/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper40/Area_Chair1", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper40/AnonReviewer3", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper40/AnonReviewer2", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper40/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper40/Authors", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "given the reviewer\u2019s positive evaluation of our contribution, the final decision of \u2018reject\u2019, due to really minor comments (or better to say \u2018opinions\u2019), seems extremely harsh and unfair.", "comment": "We first appreciate the reviewer\u2019s interest in our work and her/his positive comments on the applicability, potential, significance of results, and reproducibility of our contribution.\nThe first reviewer's concern is about the complexity of the presented approach, which makes understanding the paper cumbersome. 
As mentioned in the responses to reviewers 1 and 2, given the references cited in the text and the explanations in sections 2.1 and 2.2, and of course investing \u2018enough\u2019 time in understanding the related literature, we strongly believe that the presented text is clear enough for an audience with basic knowledge of mathematics and some hands-on experience with variational inference in the deep learning context. We expect the same range of audience at MIDL. However, to address the reviewer's concern, we will include some background information in the supplementary materials.\nThe reviewer is asking for an extra experiment on the computational cost of NPs. As mentioned in the response to reviewer 1, this is an interesting experiment, but 1) it is out of the scope of this paper; and 2) the employed experimental data are not appropriate to answer this question.\nThe reviewer's main concern is about the employed evaluation (cross-validation) scheme. One of the reviewer's suggestions is a leave-a-subset-out scheme, which is actually used in our experiment. As mentioned in the experimental setup section, we randomly select a subset of subjects for training and we repeat the random selection, modeling, and evaluation 10 times. We will clarify this further in the text to avoid any ambiguity. Please consider the fact that, due to the adopted novelty detection setting in the normative modeling framework (we must train a model on a majority of healthy subjects and predict on both healthy subjects and patients), using leave-one-out or a stratified shuffle split is out of the question. Add to this the high variance of classifiers that are trained using LOO (Varoquaux et al. 2017).\nIn short, given the reviewer\u2019s positive evaluation of our contribution, the final decision of \u2018reject\u2019, due to really minor comments (or better to say \u2018opinions\u2019), seems extremely harsh and unfair. 
\n"}, {"title": "Regarding your concerns", "comment": "Dear authors,\n\nThank you for expressing your concerns. I have asked from the reviewers to take into serious consideration your comments. Either during the rebuttal or the decision-making process, they will express how well your responses have addressed their concerns, adjust their scores accordingly and justify the appropriateness of the scores for the final decision.\n\nI would like to ensure you that the reviewers are experienced and responsible scientists. All of us, reviewers, area and program chairs are performing our best to ensure the highest possible quality for the reviewing process and the quality of the conference.\n\nSincerely,\nMIDL Area Chair"}, {"title": "the authors comments clarified my main concerns", "comment": "Thank you for replying to my comments. \n\nMy main concern was the training of the model, which was done on a subset of subjects (75 healthy, 5 SCHZ, 5 ADHD, 5 BIPL), a very small subset of the original data (119 healthy, 49 SCHZ, 39 ADHD, 48 BIPL). If there is a reason for using this disease subject proportion in the training, which seems to be the case, then it should be clarified in the paper.\n\nAbout sections 2.1 and 2.2 of the paper, I would recommend to make the writing more accessible to people that are not experts in that specific area of knowledge. \n\nGiven the authors clarifications, I am going to recommend the acceptance of the paper."}, {"title": "No Title", "comment": "I am quite confused by the authors summary of the novelty of the paper being turned against it. I think the authors misunderstood the review as aggressive, whereas I actually was excited to read this paper but unfortunately found several issues. It is not the novelty of the paper that led to my decision, it is the confusing presentation and development, and the results (see below). \n\nWhat I am mostly worried about, as in part the authors admit, is the presentation. 
I think presentation in such a complex method that's also slightly on the fringes of MIDL is very important. Letting papers through because we did not understand them but they *seem* good in general leads to a lot of bad-quality work getting through. As reviewers, we need to understand work to accept it, I think. I am not saying the current paper/method *is* bad, I am saying it's hard to evaluate it without clear presentation. Please note the fact that each of the other reviewers also had issues with various aspects of the clarity of the paper, indicating a consistent concern about this issue. I appreciate the authors' comment that they will update the text, but unfortunately, one of the main repeated responses from the authors is that they could not include enough explanation in the original paper because of the word/page limit. However, there is no word/page limit for MIDL -- in fact, the very reason there is no page limit is so that papers like this one, which need a more up-front overview, have the necessary space for it. \n\nAnother aspect of the presentation that worried me is the lack of discussion of relevant (not similar) models, like the VAE family and the other GP work (see citations above), the latter of which I think are actually quite relevant (and one reason why I was excited to read this paper). The authors omit answering about GPs completely. Please also note that while VAEs are not directly applicable, they still provide foundational work relevant to NPs (if I understand them correctly) -- certainly VAEs combined with GPs (see again citations above) seem relevant conceptually (perhaps not in the actual implementation). If I am wrong here, might the authors explain why? \n\nPerhaps I misunderstood the results (because by this point in the paper I was confused, having not understood several connections) -- could the authors comment on the following: isn't the result comparing the current contribution with an application of the previous method? 
That is, what is sMT-GPTR (it cites Kia et al., 2018)? Perhaps I misunderstood this (e.g. looking at Fig 3). As I mentioned, even in the results some parts are not well introduced (e.g. GEVD), so this probably added to my confusion.\n\nI am happy to change my evaluation to a borderline (there is no such setting, but I trust the AC can take this into consideration), but I am worried the authors are not gaining too much from this experience, but instead think the reviewers (e.g. myself) are maliciously against their paper -- but this isn't the case. I spent the most time of all my reviews on this paper. I strongly believe that a much clearer paper (even if they have to wait an extra 2 months) would benefit the authors much more in the future. If the presentation is not thoroughly addressed, I worry that this paper will likely be hard to read and understand by other readers, which will make it much more likely to be overlooked and not be built upon."}, {"title": "We are very surprised that the reviewer has rejected this paper given the positive comments that it is \u201cmostly well written\u201d and that it could be a \u201cvaluable tool\u201d and that the negative comments are all minor and trivial to address", "comment": "The reviewer's main concern regards the results in Fig. 3 in comparison with Fig. 1 in [1]. This is only a misunderstanding, as the results reported in [1] come from completely different data. In [1], \u201cgeneral health questionnaire\u201d scores are used as covariates to regress the main task effect for the \u201ctask switching\u201d task, while in this work, we have used factors of Barratt impulsiveness scores to regress the fMRI data for the \u201cstop-signal\u201d task. These choices are more relevant for clinical purposes, as impulsiveness is tightly connected to many psychiatric disorders, and it has been shown that the stop-signal task is an effective experimental paradigm to measure the impulsivity of subjects. We will clarify this in the text. 
About \u201cwhy sMT-GPTR(10,5) does better than sMT-GPTR(5,3) in comparison with [1]?\u201d, the answer is the same: the experimental data are 100% different in the two studies. The numbers of basis functions for signal and noise in the sMT-GPTR approach are hyperparameters, and the optimal values could be different from one dataset to another (like C in SVM).\nThe reviewer requests extra experiments to validate the NP\u2019s computational efficiency. The computational complexity of efficient sMT-GPTR is cubic in the number of samples N (see [1]); thus, from a theoretical point of view, NP is much faster than sMT-GPTR and any other GP variant when N>T. This is because it uses a variational inference scheme that does not require computing the inverse covariance matrix. We opt not to include the computational complexity experiment in this text mainly for 2 reasons: 1) our main goal in this paper is not to show the computational efficiency of NP over GP; rather, our main goal is to redefine a very well-established mixed-effect modeling approach in neuroimaging in a newly introduced NP framework and further demonstrate its possible application in normative modeling. 2) The experimental dataset with N<T (T is the dimensionality of the feature space) is not sufficiently discussed. The text both uses it as motivation (Sec.1) and discusses it as a strong point (Sec.6). The problem is that within the context of the work, N << T. The response to reviewers was not sufficient (especially arguing it is \u201cout of scope\u201d). As this is one of the two main motivations for using NPs over GPs, the text should definitely discuss it. The point should be addressed by extending Sec.6 with a few sentences, clarifying the conditions where these gains are expected and, even if not on this dataset, mentioning applications where they are expected.\n\n- R1 & R2 raised the point that defining mixed-effects models as stochastic processes may be *unnecessary*, overcomplicating Sec 2.2. 
The authors replied that it is necessary so that using NPs makes sense. The AC sees how this can be confusing for readers, since GPs (stochastic processes) have been previously used to model mixed-effect models (including in the authors\u2019 work [Kia et al., arXiv \u201818]); thus one could take the 1st half of Sec. 2 for granted and instead focus on the 2nd half of Sec. 2, which introduces NPs instead of GPs. The authors should clarify in the text what this derivation adds in comparison to previous works that already modelled it with GPs (they never formalised it?), which would help avoid confusion.\n\nAn additional major point from the AC\u2019s meta-review that needs addressing:\n\nTo approximate a stochastic process, here mixed-effects, the network *architecture* used for NPs is required to *also* fulfil the properties of exchangeability and consistency. In [Garnelo 2018b] exchangeability is assured by the use of an \u201caggregator\u201d in the model, adding up features from M samples (addition is invariant to permutations). This is not satisfied in the current architecture, where M samples are given to different channels (I guess it still works because when shown different permutations, the model learns to be invariant, but it\u2019s a major design difference and should be discussed). Additionally, consistency in the original work is achieved by training with *varying M* during a training session (the aggregator enables this). The current work only trains a model with a constant M, and thus does not have this necessary property. This is currently only visible in the code, not in the text. Define NP(M) in Sec. 3 and state that it\u2019s trained with constant M, in contrast to the original work. The authors *must* make the above differences explicit in the main text, as they are very big differences in comparison to the original model. Future works should address them and compare design choices.\n\nThe decision on this work is difficult. 
The reviewers have collectively acknowledged the work as novel, interesting and potentially useful. At the same time, it is clear that the work loses value by being insufficiently accessible and unclear about important points of the methodology. It seems most points raised by the two reviewers who recommended rejection are addressable by appropriate text alterations, which the authors have committed to perform. Provided that all points I emphasized above are well addressed by the authors, I think the work will be of sufficient quality for publication.\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=BJxTPziyeE&noteId=rkg76GUBI4"], "decision": "Accept"}