{"forum": "HJeZW_QxxN", "submission_url": "https://openreview.net/forum?id=HJeZW_QxxN", "submission_content": {"title": "Learning joint lesion and tissue segmentation from task-specific hetero-modal data sets", "authors": ["Reuben Dorent", "Wenqi Li", "Jinendra Ekanayake", "Sebastien Ourselin", "Tom Vercauteren"], "authorids": ["reuben.dorent@kcl.ac.uk", "wenqi.li@kcl.ac.uk", "j.ekanayake@ucl.ac.uk", "sebastien.ourselin@kcl.ac.uk", "tom.vercauteren@kcl.ac.uk"], "keywords": ["joint learning", "lesion segmentation", "tissue segmentation", "hetero-modality", "weakly-supervision"], "abstract": "Brain tissue segmentation from multimodal MRI is a key building block of many neuroscience analysis pipelines. It could also play an important role in many clinical imaging scenarios.\nEstablished tissue segmentation approaches have however not been developed to cope with large anatomical changes resulting from pathology. The effect of the presence of brain lesions, for example, on their performance is thus currently uncontrolled and practically unpredictable. Contrastingly, with the advent of deep neural networks (DNNs), segmentation of brain lesions has matured significantly and is achieving performance levels making it of interest for clinical use. However, few existing approaches allow for jointly segmenting normal tissue and brain lesions. Developing a DNN for such joint task is currently hampered by the fact that annotated datasets typically address only one specific task and rely on a task-specific hetero-modal imaging protocol. In this work, we propose a novel approach to build a joint tissue and lesion segmentation model from task-specific hetero-modal and partially annotated datasets. Starting from a variational formulation of the joint problem, we show how the expected risk can be decomposed and optimised empirically. We exploit an upper-bound of the risk to deal with missing imaging modalities. 
For each task, our approach reaches comparable performance to task-specific and fully-supervised models.", "pdf": "/pdf/6ed38e6a50764f52c559c6de1d6c3606452ecc51.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "dorent|learning_joint_lesion_and_tissue_segmentation_from_taskspecific_heteromodal_data_sets", "_bibtex": "@inproceedings{dorent:MIDLFull2019a,\ntitle={Learning joint lesion and tissue segmentation from task-specific hetero-modal data sets},\nauthor={Dorent, Reuben and Li, Wenqi and Ekanayake, Jinendra and Ourselin, Sebastien and Vercauteren, Tom},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=HJeZW_QxxN},\nabstract={Brain tissue segmentation from multimodal MRI is a key building block of many neuroscience analysis pipelines. It could also play an important role in many clinical imaging scenarios.\nEstablished tissue segmentation approaches have however not been developed to cope with large anatomical changes resulting from pathology. The effect of the presence of brain lesions, for example, on their performance is thus currently uncontrolled and practically unpredictable. Contrastingly, with the advent of deep neural networks (DNNs), segmentation of brain lesions has matured significantly and is achieving performance levels making it of interest for clinical use. However, few existing approaches allow for jointly segmenting normal tissue and brain lesions. Developing a DNN for such joint task is currently hampered by the fact that annotated datasets typically address only one specific task and rely on a task-specific hetero-modal imaging protocol. 
In this work, we propose a novel approach to build a joint tissue and lesion segmentation model from task-specific hetero-modal and partially annotated datasets. Starting from a variational formulation of the joint problem, we show how the expected risk can be decomposed and optimised empirically. We exploit an upper-bound of the risk to deal with missing imaging modalities. For each task, our approach reaches comparable performance to task-specific and fully-supervised models.},\n}"}, "submission_cdate": 1544726521463, "submission_tcdate": 1544726521463, "submission_tmdate": 1561396976380, "submission_ddate": null, "review_id": ["rklZHs5wmE", "H1ebewsnmV", "BkeJrnHaQV"], "review_url": ["https://openreview.net/forum?id=HJeZW_QxxN&noteId=rklZHs5wmE", "https://openreview.net/forum?id=HJeZW_QxxN&noteId=H1ebewsnmV", "https://openreview.net/forum?id=HJeZW_QxxN&noteId=BkeJrnHaQV"], "review_cdate": [1548360505256, 1548691176705, 1548733494881], "review_tcdate": [1548360505256, 1548691176705, 1548733494881], "review_tmdate": [1548856726252, 1548856701615, 1548856687197], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper87/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper87/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper87/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJeZW_QxxN", "HJeZW_QxxN", "HJeZW_QxxN"], "review_content": [{"pros": "- The authors propose a novel method to jointly perform the segmentation of six brain tissue classes and white matter lesions employing the hetero-modal MRI volumes available in different datasets. The proposal allows combining datasets composed only of labelled T1 scans (usually from subjects without anatomical lesions) with datasets composed of T1 and FLAIR acquisitions (from subjects with brain injuries) in which only the lesions are labelled. 
\n\n- Due to the nature of the clinical datasets, this kind of approach must be welcomed. The need is clear and is becoming a hot topic in this field. In fact, another similar approach for the same problem has been presented for MIDL2019 \u2192 https://openreview.net/forum?id=Syest0rxlN \n\n- As the authors remark, they elegantly cope with a real problem in which three branches of machine learning meet: Multi-Task Learning, Domain Adaptation and Weakly Supervised Learning. \n\n- The model is tested with T1 and FLAIR volumes but should already work with more modalities. \n\n\n", "cons": "- As I mentioned above, a closely related paper has been submitted to MIDL. I truly believe that the authors should compare themselves against https://openreview.net/forum?id=Syest0rxlN denoting advantages and disadvantages. This could really help the chairs to make a decision. \n\n- Sections 2.4 and 2.5 are halfway between a proper mathematical justification of the employed tools and the purpose of using them, which sometimes makes the text difficult to understand (even taking into account that the concepts are not the simplest). Due to the recommended conference page limit, simpler sentences along with the maths could help with this issue.\n\n- The work employs the statistical formulation for the loss definition and cites the beautiful Kendall & Gal and Bragman works, but at the end employs the mode of the distribution as predictor. Could the authors go all the way and provide (in the near future) a whole probabilistic solution? Besides, in my personal opinion, the method would be better understood employing this kind of formulation.\n\n- The evaluation is OK but it would be more complete by adding a comparison (where possible) with the traditional approaches (SPM, FSL, etc.) 
and especially by providing any measure of how the results are distributed (standard deviation, boxplot, etc.)\n\n- Could the authors comment on the possible effects of including more modalities?\n\n- The work is nice, could be talk material (and for sure will be part of Medical Image Analysis or a similar journal soon) but the authors have employed 11 pages. The text contains some unnecessary blank spaces and oversized tables and figures, which could be structured much more efficiently in order to save space. Summarizing an interesting work is always difficult, but it must be done in order to ease the work of the scientific community. For this reason, I cannot propose the paper for a talk.", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "- The problem is highly significant\n\n- The paper is well written\n\n- Great contribution to the field of multi-task learning. Mathematically grounded and elegant", "cons": "- As recognized by the authors, the Dice metric is sensitive to the size of the structures evaluated. It was maybe not the most appropriate choice\n\n- The authors should consider evaluating the number of detected lesions (together with the positive/negative predictive value). While this seems \"much easier\" than the full extent of lesions, this is already very useful information for clinical applications.\n\n- Why use a different class for the brain stem? In some pathologies physicians are looking for brain stem lesions. Can lesions be in two different \"tissue\" classes?", "rating": "4: strong accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The paper presents a model for the joint learning of two segmentation tasks (brain tissue and lesion) from different datasets. 
The model uses an average operation to deal with a varying number of input modalities. An upper bound on the expected loss for segmenting tissue is derived, which allows transferring information across tasks (i.e., from lesion to tissue segmentation) during training. Experiments on three datasets show the proposed model to offer comparable performance for both tasks, compared to task-specific models. \n\npros:\n\n- Original and principled approach to deal with tasks for which input modalities may differ.\n\n- The proposed approach is motivated by a sound mathematical framework.\n\n- Experimental evaluation on three separate datasets. ", "cons": "cons:\n\n- Results are not so convincing. The multi-task network performs significantly worse than single-task models, for both tissue and lesion segmentation. Table 2 shows improvements; however, these are misleading since models were trained using different datasets (and MRBrainS has only 7 training subjects). Given these results, it would be beneficial to clarify the benefits of the proposed model, compared to running single-task models separately. If the main advantage is runtime, then experimental results should be added to support this.\n\nOther comments: \n\n- While interesting, the derivation of the upper-bound on R^t and its estimation is a bit long. In particular, going from Eq (5) to (7) is rather straightforward and may not deserve such length in the paper. I would have preferred this space used for a deeper experimental validation. \n\n- The average operation in the network allows dealing with a variable number of input modalities. However, it is unclear how this affects the information from different inputs. More specifically, I wonder if this forces the network to learn a \"common representation\" for T1 and FLAIR, which would make it less sensitive when either one of these modalities is missing. 
How would the model perform if trained for a single task (lesion), with instances which can have missing modalities? Perhaps authors could comment on this in their paper. \n\n- Subsection \"Joint model versus fully-supervised model\" and Table 2 are hard to understand. It should be made clearer that the FS model is trained on MRBrainS18, whereas the proposed model is trained on WMH and Neuromorphics (ideally, this should be mentioned in the caption of Table 2).   \n\n- p.9 : \"shwown\" --> \"shown\"; Figure 3-a --> Figure 4-a ?", "rating": "3: accept", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature", "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["BygW5q_a4E", "SkxPeRdTE4", "BkljtyKT4E", "S1gVxwZAV4"], "comment_cdate": [1549793929483, 1549794798845, 1549795202599, 1549829867613], "comment_tcdate": [1549793929483, 1549794798845, 1549795202599, 1549829867613], "comment_tmdate": [1555945986695, 1555945986440, 1555945986224, 1555945979362], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper87/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper87/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper87/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper87/AnonReviewer1", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Clarifications + Improved performance after consistent handling of resampling", "comment": "We would like to thank the reviewer for their comments. 
Below is a point-by-point answer with original citations.\n\n>Quality of the results (\u201cmulti-task network performs significantly worse than single-task models\u201d)\nIn our initial submission, an improper handling of resampling was performed when dealing with the training lesion data. We have now corrected this issue, significantly improving our results on MRBrainS18. With the same method as originally described, but consistently resampling all the images to 1mm x 1mm x 3mm, we maintain the tissue segmentation performance (overall mean Dice score: 91.5% for our updated results, 90.7% in the initial submission) and achieve better lesion segmentation performance (Dice score: 53.7% for our updated results, 37.2% in the initial submission). Consequently, our joint model reaches similar performance to the fully-supervised model on classes that have been annotated with a consistent protocol. All the updated results are shown in Table 1. We believe that these results are now more convincing.\n\n                    | Neuromorphometrics   ||        WMH           || MRBrainS18\n                    |  N   |  M   | W+N    ||  W   |  M   | W+N    ||  M   | W+N\nGray matter         | 88.5 | 42.0 | 89.4   ||      |      |        || 83.3 | 79.4\nWhite matter        | 92.4 | 56.7 | 92.8   ||      |      |        || 85.9 | 85.4\nBrainstem           | 93.4 | 20.0 | 93.1   ||      |      |        || 92.3 | 72.3\nBasal Ganglia       | 86.7 | 41.2 | 87.2   ||      |      |        || 79.1 | 75.3\nVentricles          | 90.7 | 24.5 | 91.6   ||      |      |        || 91.0 | 91.7\nCerebellum          | 92.5 | 43.7 | 94.9   ||      |      |        || 91.8 | 90.8\nWhite matter lesion |      |      |        || 61.9 | 50.6 | 59.9   || 53.5 | 53.7\n\nTable 1: Comparison between the lesion segmentation model (W), the tissue segmentation model (N), the fully-supervised model (M), and our joint model (W+N). 
The Dice Similarity Coefficient (%) is presented.\n\n\n>Tasks interdependence (\u201cclarify the benefits of the proposed model\u201d)\nDeveloping a joint learning model is motivated by exploiting the interdependence between the lesion and tissue segmentation tasks. Well-posed joint models also make principled extensions of the methodology easier in comparison to composing pipelines of consecutive single-task models. This should lead to improving the performance of the related single tasks and would allow us to provide a prediction of the uncertainty for the joint problem.\n\nFirstly, multi-task models often outperform single-task models. Due to the interdependence between the tasks, sharing a common visual representation should be beneficial for extracting more robust and accurate features. This could explain why our method achieves better performance than the single tissue segmentation model on NeuroMorphometrics (overall mean Dice score: 91.5% for our updated results, 90.7% for the single tissue segmentation model). In addition to this, learning from multiple scan sources often improves generalization performance. Our method of training a joint model from heterogeneous data for multiple tasks can be seen as weakly-supervised domain adaptation.\n\nSecondly, as remarked by AnonReviewer1, reporting uncertainty on classes is particularly relevant in medical imaging and will be integrated in our framework in future work. This uncertainty comes from the model parameter uncertainty and the data uncertainty (noise inherent in the observations) [1]. 
Since the two tasks are dependent, the measurement of the uncertainty on the full problem can only be performed using a joint model.\n\n\n>Summarization (\u201cthe derivation of the upper-bound on R^t and its estimation is a bit long\u201d)\nWe took the reviewer's remark into consideration and shortened this part (see also our answer to AnonReviewer1).\n\n>Common feature space (\u201cI wonder if this forces the network to learn a common representation for T1 and FLAIR\u201d)\nDesigning a model which is robust to missing modalities is a particularly challenging task. In this work, we adopt a state-of-the-art architecture for doing so. Additionally, achieving a proper common representation is a more subtle question than \u201cjust\u201d dealing with missing modalities, and our work does not directly try to solve it. Some papers have studied this problem in more detail: a random number of modalities is used during training in HeMIS [2]; the PIMMS framework [3] creates intermediate representations prior to the HeMIS network; and a shared modality-invariant latent space is learned for MR synthesis [4]. Exploring the most suitable network architecture for common feature extraction within our joint learning framework is left for future work. \n\n\n[1] What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision? Kendall, et al. 2017.\n[2] HeMIS: Hetero-Modal Image Segmentation. Havaei, et al. 2016.\n[3] PIMMS: Permutation Invariant Multi-modal Segmentation. Varsavsky, et al. 2018.\n[4] Multimodal MR Synthesis via Modality-Invariant Latent Representation. Chartsias, et al. 2018."}, {"title": "An evaluation strategy imposed by the MRBrainS18 challenge", "comment": "We would like to thank the reviewer for their comments. 
Below is a point-by-point answer with original text quoted in italics.\n\n\n>Choice of the metric (\u201cDice metric is sensitive to the size of the structures\u201d, \u201cconsider evaluating the number of detected lesions\u201d)\nMetrics such as the Relative Volume Error or the error in the number of detected lesions are indeed particularly interesting clinically; however, we do not have access to the testing data to compute these metrics. Indeed, the MRBrainS18 Challenge organizers independently perform the evaluation of our results and only provide the mean Dice coefficient, mean volume similarity and mean 95% Hausdorff distance as metrics. To present a fair comparison across the different data sets, and to be as clinically relevant as possible, we considered that the Dice metric was the best choice among these metrics. The other MRBrainS18 metrics will be reported appropriately for completeness.\n\n\n>Brain stem lesions (\u201cWhy using a different class for brain stem?\u201d)\nWe thank the reviewer for these important questions in relation to the brainstem and its segmentation. We acknowledge that the brainstem is anatomically made up of white matter pathways and deep grey matter nuclei. However, similar to the reasons for choosing Dice as a metric, to enable a controlled and systematic evaluation approach, we relied on the previously established tissue class annotations for MRBrainS18, with the brainstem being represented discretely. This allows us to perform a comparison of learning approaches as applied to the MRBrainS18 dataset.\n\nIn terms of lesion distribution within the brainstem and its tissue classes (\u201cCan lesions be in two different tissue classes?\u201d), a spectrum of possibilities is acknowledged [1]. Some pathologies will have a predilection for the brainstem, e.g. brainstem vascular infarcts and paediatric brainstem gliomas [2]. 
Conversely, distributed systemic conditions such as multiple sclerosis (MS) may concurrently affect white matter pathways in the cortex and the brainstem [3]. Finally, both white and grey matter atrophy may occur in MS, with localisation to the brainstem [4].\n\n[1] Comparative Brain Stem Lesions on MRI of Acute Disseminated Encephalomyelitis, Neuromyelitis Optica, and Multiple Sclerosis. Lu et al., 2011\n[2] Measurable Supratentorial White Matter Volume Changes in Patients with Diffuse Intrinsic Pontine Glioma Treated with an Anti-Vascular Endothelial Growth Factor Agent, Steroids, and Radiation. Svolos et al., 2017\n[3] Imaging white matter in human brainstem. Ford et al., 2013\n[4] Progression of regional grey matter atrophy in multiple sclerosis. Eshaghi et al., 2018\n"}, {"title": "Paper summarised to 9 pages + SoTA comparison eased by MRBrainS18", "comment": "We would like to thank the reviewer for their comments. Below is a point-by-point answer with original text quoted in italics.\n\n>Comparison with other MIDL 2019 submissions (\u201ca closely related paper have been submitted for the MIDL\u201d)\nWe were quite encouraged to see another MIDL 2019 submission dealing with a similar topic, as it confirms that the topic of leveraging existing data sets to perform multi-task learning is attracting research interest. \n\nThe other MIDL submission proposes a modified cross-entropy loss function for dealing with missing annotations in order to perform a joint segmentation of three brain tissue classes and brain lesions. Their model is trained using two task-specific data sets with partial annotations. In contrast to our approach, the same set of modalities is provided in the two data sets. In the presence of missing annotations, the predictions for the missing classes are treated as background in the loss function. \n\nOur method performs joint segmentation of six brain tissue classes and brain lesions. 
Our model is trained using two task-specific data sets providing partial annotations. In addition to the domain gap between the data sets, our data sets are hetero-modal (T1 or T1+FLAIR). We propose an upper-bound of the expected risk of the joint problem in the presence of missing annotations and missing modalities.\n\nIn terms of performance, the comparison is not straightforward because: 1/ the other submission's tissue classes are a subset of our tissue classes; 2/ they evaluated their results on the MRBrainS18 training data and did not submit their model to the MRBrainS18 Challenge. However, it seems that we both obtain similar scores for the white matter lesion on MRBrainS18. In the future, a fairer comparison would be possible if the authors of related submissions submitted their results to the MRBrainS18 Challenge.\n\n\n>More detailed scores (\u201cadding a comparison (where is possible) with the traditional approaches\u201d)\nThe MRBrainS18 participants do not have access to the held-out evaluation data set (which is kept by the challenge organisers). Instead, we submit our model to the challenge organizers and receive the overall averaged scores from them. However, we agree with this reviewer that it would be beneficial to present the score distribution and will ask the organizers to provide us (and other participants) with more detailed scores to allow us to include them in the final version of the manuscript.\n\nOne of the major benefits of evaluating our method on a challenge is to directly benchmark our method against existing methods. In particular, the SPM team adapted their method to the challenge and submitted it. As shown in Table 1, our joint learning method achieved better performance on 6 of the 7 classes. 
\n\n\n                    | SPM  | Ours |\nGray matter         | 76.5 | 79.4 |\nWhite matter        | 75.7 | 85.4 |\nBrainstem           | 76.5 | 72.3 |\nBasal Ganglia       | 74.7 | 75.3 |\nVentricles          | 80.9 | 91.7 |\nCerebellum          | 89.4 | 90.8 |\nWhite matter lesion | 40.8 | 53.7 |\nTable 1: Comparison between our joint model (Ours) and SPM on MRBrainS18. For each class, the Dice Similarity Coefficient (%) has been computed.\n\n\n>Probabilistic inference (\u201cCould the authors [...] provide (in a near future) a whole probabilistic solution?\u201d)\nWe want to thank the reviewer for this constructive remark. Indeed, probabilistic inference is of high value as it notably allows quantifying uncertainty, a key feature in medical imaging applications. Very recent works (e.g. [1], [2]) have focused on providing practical means of capturing uncertainty in deep neural networks. In contrast to a cascaded pipeline approach, our method uses a single model, which allows for a proper estimation of the model and data uncertainties. We plan to integrate uncertainty measures in our framework as future work.\n\n\n>Choice of modalities (\u201ceffects of including more modalities\u201d)\nWe tested our model with T1 and FLAIR volumes. However, our framework is flexible and is, by construction, compatible with more modalities. We plan to test it on a joint tissue and glioma segmentation task, which requires more imaging modalities, in future work.\n\n\n>Page limit and reformulation (\u201cSummariz[ing ...] must be done\u201d)\nWe took into consideration the reviewer's remarks concerning the length of the text. By reformulating some parts of the text (notably sections 2.4-2.5), removing blank spaces, changing figure sizes and merging the tables, our current updated version is 9 pages (excluding the references). 
We will keep working on it to make it as close as possible to 8 pages.\n\n[1] Uncertainty in Multitask Learning: Joint Representations for Probabilistic MR-only Radiotherapy Planning. Bragman, et al., 2018\n[2] Aleatoric uncertainty estimation with test-time augmentation for medical image segmentation with convolutional neural networks. Wang, et al. 2019\n"}, {"title": "Reviewer reply", "comment": "I would like to thank the authors for their clarity and the extended experiments. \n\nClearly, the work should be presented at MIDL2019."}], "comment_replyto": ["BkeJrnHaQV", "H1ebewsnmV", "rklZHs5wmE", "BkljtyKT4E"], "comment_url": ["https://openreview.net/forum?id=HJeZW_QxxN&noteId=BygW5q_a4E", "https://openreview.net/forum?id=HJeZW_QxxN&noteId=SkxPeRdTE4", "https://openreview.net/forum?id=HJeZW_QxxN&noteId=BkljtyKT4E", "https://openreview.net/forum?id=HJeZW_QxxN&noteId=S1gVxwZAV4"], "meta_review_cdate": 1551356573029, "meta_review_tcdate": 1551356573029, "meta_review_tmdate": 1551881973889, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The authors present a principled approach for multi-task learning from different hetero-modal datasets that are annotated for one specific task and demonstrate its application to joint tissue and lesion segmentation from brain MRI datasets. All reviewers express high enthusiasm for the work and agree that it would be an important contribution to the conference. 
\n\nPros:\n- The problem of multi-task learning from task-specific hetero-modal datasets is important to make full use of limited clinical data.\n- The proposed method is mathematically grounded.\n- Paper is fairly well-written.\n- Experimental validation includes the use of challenge data for objective benchmarking of results.\n\nCons:\n- There are some weaknesses in the experimental evaluation (although many criticisms have been addressed by the authors' comments here).\n- Long length of the paper and in particular some of the math explanations make it difficult for the reader to digest (although authors have commented that they have reformatted to reduce paper length).\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HJeZW_QxxN&noteId=B1xHjfLHLN"], "decision": "Accept"}
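The discussion above reports all results as Dice Similarity Coefficients and describes an averaging operation over whichever modalities (T1, or T1 + FLAIR) are available. As a minimal, illustrative sketch of these two ingredients — not code from the paper, and with hypothetical function names — assuming binary NumPy masks and per-modality feature maps:

```python
import numpy as np

def dice_score(pred, target, eps=1e-8):
    """Dice Similarity Coefficient (%) between two binary masks:
    200 * |pred AND target| / (|pred| + |target|)."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    intersection = np.logical_and(pred, target).sum()
    return 100.0 * 2.0 * intersection / (pred.sum() + target.sum() + eps)

def hetero_modal_average(feature_maps):
    """HeMIS-style fusion sketch: average feature maps over the
    modalities that are actually present (None marks a missing
    modality), so the network accepts a variable number of inputs."""
    available = [f for f in feature_maps if f is not None]
    return np.mean(np.stack(available, axis=0), axis=0)
```

Averaging (rather than concatenating) keeps the fused representation the same shape regardless of how many modalities a given dataset provides, which is what lets a single network train on hetero-modal data sets.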