{"forum": "S1gTA5VggE", "submission_url": "https://openreview.net/forum?id=S1gTA5VggE", "submission_content": {"title": "Boundary loss for highly unbalanced segmentation", "authors": ["Hoel Kervadec", "Jihene Bouchtiba", "Christian Desrosiers", "\u00c9ric Granger", "Jose Dolz", "Ismail Ben Ayed"], "authorids": ["hoel.kervadec.1@etsmtl.net", "jihene.bouchtiba.1@ens.etsmtl.ca", "christian.desrosiers@etsmtl.ca", "eric.granger@etsmtl.ca", "jose.dolz@etsmtl.ca", "ismail.benayed@etsmtl.ca"], "keywords": ["Surface loss", "unbalanced dataset", "semantic segmentation", "deep learning"], "TL;DR": "We propose a boundary loss based on L2 distance and evaluate it on two highly unbalanced segmentation problems.", "abstract": "Widely used loss functions for convolutional neural network (CNN) segmentation, e.g., Dice or cross-entropy, are based on integrals (summations) over the segmentation regions. Unfortunately, it is quite common in medical image analysis to have highly unbalanced segmentations, where standard losses contain regional terms with values that differ considerably -- typically of several orders of magnitude -- across segmentation classes, which may affect training performance and stability. The purpose of this study is to build a boundary loss, which takes the form of a distance metric on the space of contours (or shapes), not regions. We argue that a boundary loss can mitigate the difficulties of regional losses in the context of highly unbalanced segmentation problems because it uses integrals over the boundary (interface) between regions instead of unbalanced integrals over regions. Furthermore, a boundary loss provides information that is complimentary to regional losses. Unfortunately, it is not straightforward to represent the boundary points corresponding to the regional softmax outputs of a CNN. Our boundary loss is inspired by discrete (graph-based) optimization techniques for computing gradient flows of curve evolution. Following an integral approach for computing boundary variations, we express a non-symmetric L2 distance on the space of shapes as a regional integral, which avoids completely local differential computations involving contour points. Our boundary loss is the sum of linear functions of the regional softmax probability outputs of the network. Therefore, it can easily be combined with standard regional losses and implemented with any existing deep network architecture for N-D segmentation. \nOur boundary loss has been validated on two benchmark datasets corresponding to difficult, highly unbalanced segmentation problems: the ischemic stroke lesion (ISLES) and white matter hyperintensities (WMH). Used in conjunction with the region-based generalized Dice loss (GDL), our boundary loss improves performance significantly compared to GDL alone, reaching up to 8% improvement in Dice score and 10% improvement in Hausdorff score. It also yielded a more stable learning process. Our code is publicly available. 
", "pdf": "/pdf/40c814f3bec79bda99828d1d97622e3e0a5cccb1.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "kervadec|boundary_loss_for_highly_unbalanced_segmentation", "_bibtex": "@inproceedings{kervadec:MIDLFull2019a,\ntitle={Boundary loss for highly unbalanced segmentation},\nauthor={Kervadec, Hoel and Bouchtiba, Jihene and Desrosiers, Christian and Granger, {\\'E}ric and Dolz, Jose and Ayed, Ismail Ben},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=S1gTA5VggE},\nabstract={Widely used loss functions for convolutional neural network (CNN) segmentation, e.g., Dice or cross-entropy, are based on integrals (summations) over the segmentation regions. Unfortunately, it is quite common in medical image analysis to have highly unbalanced segmentations, where standard losses contain regional terms with values that differ considerably -- typically of several orders of magnitude -- across segmentation classes, which may affect training performance and stability. The purpose of this study is to build a boundary loss, which takes the form of a distance metric on the space of contours (or shapes), not regions. We argue that a boundary loss can mitigate the difficulties of regional losses in the context of highly unbalanced segmentation problems because it uses integrals over the boundary (interface) between regions instead of unbalanced integrals over regions. Furthermore, a boundary loss provides information that is complimentary to regional losses. Unfortunately, it is not straightforward to represent the boundary points corresponding to the regional softmax outputs of a CNN. Our boundary loss is inspired by discrete (graph-based) optimization techniques for computing gradient flows of curve evolution. Following an integral approach for computing boundary variations, we express a non-symmetric L2 distance on the space of shapes as a regional integral, which avoids completely local differential computations involving contour points. Our boundary loss is the sum of linear functions of the regional softmax probability outputs of the network. Therefore, it can easily be combined with standard regional losses and implemented with any existing deep network architecture for N-D segmentation. \nOur boundary loss has been validated on two benchmark datasets corresponding to difficult, highly unbalanced segmentation problems: the ischemic stroke lesion (ISLES) and white matter hyperintensities (WMH). Used in conjunction with the region-based generalized Dice loss (GDL), our boundary loss improves performance significantly compared to GDL alone, reaching up to 8{\\%} improvement in Dice score and 10{\\%} improvement in Hausdorff score. It also yielded a more stable learning process. Our code is publicly available. 
},\n}"}, "submission_cdate": 1544731348782, "submission_tcdate": 1544731348782, "submission_tmdate": 1561400074291, "submission_ddate": null, "review_id": ["SkxQgkLuQV", "Bkegj4W-QE", "Byx-VbQp7V"], "review_url": ["https://openreview.net/forum?id=S1gTA5VggE&noteId=SkxQgkLuQV", "https://openreview.net/forum?id=S1gTA5VggE&noteId=Bkegj4W-QE", "https://openreview.net/forum?id=S1gTA5VggE&noteId=Byx-VbQp7V"], "review_cdate": [1548406506946, 1547928727958, 1548722472647], "review_tcdate": [1548406506946, 1547928727958, 1548722472647], "review_tmdate": [1548856728047, 1548856712732, 1548856689615], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper97/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper97/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper97/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["S1gTA5VggE", "S1gTA5VggE", "S1gTA5VggE"], "review_content": [{"pros": "This nicely written and enjoyable paper proposes a loss function focusing on boundary errors as a complement to classical regional scores of segmentation overlap and presents experiments on data\n\nThis paper is well written and easy to follow. The motivation of the proposed work is well presented and the adopted method clearly detailed and well illustrated.\n\nResults are very encouraging and the potential generalisability of the use of this additional loss term is high increasing the potential significance of this piece of work.", "cons": "A few points remain questionable and would benefit further clarification\n\nMethods\n- From equation 5 it seems that the absence of segmentation output will yield a null value for the loss. Is this a truly desirable behaviour?\n- In case of multiple, potentially coalescing objects and/or when the border function is of a complex shape, the closest point in distance may not be the appropriate one to consider for the comparison. Is an object defined constrained possible in that situation?\n\nExperiments\n- Could you please confirm that in the validation set for the WMH challenge, elements from all three scanners were used? Could you give the range of lesion load in these cases? Was it chosen to reflect the existing distribution?\n- Although the choice of 2D may seem reasonable for data with highly anisotropic resolution as in the ISLES challenge, this choice is more questionable in the WMH challenge where data is 3D. Moreover, the objects to segment being volumetric, the experiment would be much more interesting when going to the tridimensional complexity.\n- To complement the experiments, it would be interesting to observe the behaviour of the boundary loss alone and of a case of training with fixed weight.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "Summary:\nThis paper considers an alternative to region-overlap-based loss functions. Their boundary loss considers the integral of the area between the ground truth and predicted regions. It seems this loss function is less affected by class imbalance in the image, as it produces accurate segmentations for small and rare regions in their example figure.\n\nPros:\n- Novel idea for dealing with imbalanced classes. \n- Good reasoning for design of loss function.\n- Visualizations look nice.\n- Code is available online.\n\nQuestions:\n- You argue that most people only consider regional losses. But what about Hausdorff distance? 
That is based on the distance between boundaries of regions.\n- Sec. 1) "..these regional integrals are summations over the segmentation regions of differentiable functions, each invoking..". What are the differentiable functions here? Cross-entropy loss?\n- Sec. 1) "... [graph-based] optimization for computing gradient flows of curve evolution." Can you explain a bit more what this is about?\n- Sec. 2) What do you mean by \\mathbb{R}^{2,3}? That the image can be either 2D or 3D?\n- Sec. 2) You defined 'I' as a training image and then didn't use it. Wouldn't it suffice to just say that Omega is your space of images?\n- Eq. 1) I assume the subscript B in w_B is from 'background region', i.e. B = \\Omega \\setminus G? And not 'boundary' as the subscript B in (5)?\n- Sec. 2) I don't understand why you would use the notation '\\partial G'. I would read that as 'change in the foreground region'.\n- Sec. 2) Is q_{\\partial S}(.) unique? I can imagine that if \\partial G is not a circle (as in your example fig. 2), then multiple p would map to the same point on \\partial S.\n- Sec. 2) Is the signed distance between p and z_{\\partial G}(p) Euclidean?\n- Sec. 2) Is the sign in the signed distance necessary to flip the sign of the area of S in the interior of G (the part \"below the x-axis of the integral\" as it were)?\n- Sec. 2) What is actually the form of the level set function \\phi_{G}? Pixel distance?\n- Sec. 2) If the 'boundary' is the sum of linear functions of s_{\\theta}(p), then is its gradient constant?\n- Sec.3) Are you sure you are allowed to use ISLES and WMH for this paper? For WMH at least, there is a rule in the terms of participation that you are not allowed to use the data for scientific studies other than that of the challenge.\n- Sec.3.2) Why do you need to start with the regional loss term and then slowly build up to the boundary term?\n- Sec.3.3) I am now quite interested in the performance of {\\cal L}_{B} in isolation. Why did you not report that?\n- Sec.3.3) You argue that the boundary loss helps to stabilize the learning process. But isn't the change in noise that you observe in Fig. 3 coming from a difference in scaling in the loss terms? That is, if the scale of the boundary loss is smaller than that of the regional loss, and you're gradually shifting towards the boundary loss, then I would expect smoother curves over time.\n- Sec. 4) You say that the framework \"..can be trivially extended..\" to 3D. What would that entail? An element-wise product between the 3D pixelwise distance tensor and the prediction tensor from the network?\n\nOther comments:\n- Sec. 1) double use of the word 'common'.\n- Sec. 2) 's' in \"Let .. denotes..\"\n- Eq. 1) int_{p \\in \\Omega} should be int_{\\Omega}", "cons": "Cons:\n- The authors did not compare to other loss functions designed to handle imbalanced classes. These were mentioned in the related work section as relevant.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "This is a very interesting and engaging paper that is a worthy contribution to MIDL. The introduction of a boundary loss is highly relevant, and it nicely ties up an intuitive sense (that errors should be weighted by a distance map) with theory. 
I appreciate the mathematical rigour, the clear writing, the nice motivations, and of course, tying DL together with some important theoretical insights that increasingly seem to be lost in the DL era. \n\n\n--Motivation\n\n\nA thought that the authors might find useful as an intuitive motivation: volume grows as N^3 whereas surface grows as N^2. Thus, the boundary loss helps mitigate effects of unbalanced segmentations by reducing the order of magnitude of the effect of changes in pixel values for small segmentations. ", "cons": "Minor\n\n--abstract could be tightened up; getting faster to the point would make it more engaging\n\nClarity\n\n--readers would probably appreciate how you got to (4)\n\nEvaluation\n--It would have been nice to see experiments with only the boundary loss. Why was this not done? Were there stability or convergence issues, or did it just not work as well? It's a curious omission, and I think readers would like to know if the loss can operate on its own or if it only works as an auxiliary loss. At the very least, I think the authors need to address this within the text with an explanation. \n--Obviously an ablation study would be welcome, but I think for MIDL, the evaluation is sufficient. One thing I'm curious about: depending on how the distance map is calculated, e.g., with pixel distance, the boundary loss can add significant weights to each softmax, effectively increasing the learning rate. A hyper-parameter sweep on a validation set for both experiment settings would assuage any worries that the extra performance was due in part to the increased effective learning rate. Or perhaps there is a more principled way to do this. \n--Also, it would have been nice to have seen experiments with other losses, e.g., CE. ", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "oral_presentation": ["Consider for oral presentation"]}], "comment_id": ["BJxxu-F0EV", "HJlZ9WFR44", "S1eZGNYRVV", "HJgSS4YRV4", "SkeHWmWbBE", "rJxIxQrbSN"], "comment_cdate": [1549861223793, 1549861257188, 1549861896974, 1549861949194, 1550025469216, 1550041837675], "comment_tcdate": [1549861223793, 1549861257188, 1549861896974, 1549861949194, 1550025469216, 1550041837675], "comment_tmdate": [1555945973217, 1555945972960, 1555945972700, 1555945972438, 1555945960284, 1555945960060], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper97/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper97/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper97/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper97/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper97/AnonReviewer2", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper97/AnonReviewer1", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "AnonReviewer2", "comment": "We thank the reviewer for the constructive comments. We are pleased about the positive comments regarding the relevance of introducing a boundary loss, as well as on the rigour of our mathematical formulation. In the following, we describe our response to your comments. \n \n> A thought that the authors might find useful as an intuitive motivation: volume grows as N^3 whereas surface grows as N^2. 
Thus, the boundary loss helps mitigate effects of unbalanced segmentations by reducing the order of magnitude of the effect of changes in pixel values for small segmentations.\nThat is a really interesting intuition, which we will consider in a revision of the work. \n\n> --abstract could be tightened up; getting faster to the point would make it more engaging\nThank you for the feedback. We will streamline the abstract to make it shorter and more to the point.\n\n> --readers would probably appreciate how you got to (4)\nWe will add an intermediate step connecting Eqs. (3) and (4), and add more details on the integration of the distance function over the normal segment connecting p and its projection onto the boundary of S (which establishes the link to the boundary distance in Eq. 2). \n\n> --It would have been nice to see experiments with only the boundary loss. Why was this not done? Were there stability or convergence issues, or did it just not work as well? It's a curious omission, and I think readers would like to know if the loss can operate on its own or if it only works as an auxiliary loss. At the very least, I think the authors need to address this within the text with an explanation.\nFor completeness, we will add results describing the behaviour of the boundary loss alone, which does not yield the same competitive results as a joint loss (i.e., boundary and region). We believe that this is due to the following technical facts, which we will discuss in a revision of the paper. In theory, the global optimum of our boundary loss corresponds to a negative value (when the softmax probabilities correspond to a non-empty foreground). However, an empty foreground (null values of the softmax probabilities almost everywhere) corresponds to low gradients. Therefore, this trivial solution is close to a local minimum or a saddle point. This is the reason why we use our boundary loss in conjunction with a regional loss: the regional loss guides training during the first epochs and avoids getting stuck in such a trivial solution. Our scheduling method increases the weight of the boundary loss during training, with the boundary loss becoming very dominant (almost acting alone) towards the end of the training process. Also, it will be interesting to examine careful initialisations for the boundary loss alone (without a regional loss). On a side note, this behaviour of boundary terms is conceptually similar to the behaviour of classical and popular contour-based energies for level set segmentation (e.g., geodesic active contours), which also require additional regional terms to avoid trivial solutions (i.e., empty foreground regions). \n\n> --Obviously an ablation study would be welcome, but I think for MIDL, the evaluation is sufficient. One thing I'm curious about: depending on how the distance map is calculated, e.g., with pixel distance, the boundary loss can add significant weights to each softmax, effectively increasing the learning rate. A hyper-parameter sweep on a validation set for both experiment settings would assuage any worries that the extra performance was due in part to the increased effective learning rate. Or perhaps there is a more principled way to do this.\nWe agree. An ablation study would strengthen the paper significantly, and we intend to do so in a journal version of the work. \n\n> --Also, it would have been nice to have seen experiments with other losses, e.g., CE.\nWe will add our experiments with CE as regional loss for completeness. 
In our experiments, we noticed that CE yielded a much lower performance than the regional GDL loss for extremely unbalanced problems like WMH, which is consistent with several recent works in unbalanced medical image segmentation, e.g., (Milletari et al., 2016; Sudre et al., 2017). This is why we chose GDL as regional loss. However, our boundary loss is widely applicable as it can be readily integrated with any regional loss (e.g., CE) and any standard architecture. \n"}, {"title": "AnonReviewer1", "comment": "The reviewer\u2019s comments were of benefit to us. We are pleased that the reviewer found our paper to be well written and clearly motivated, and pointed to the potential significance and wide applicability of the work for the medical imaging community. In the following, we describe our response to your comments. \n\n> From equation 5 it seems that the absence of segmentation output will yield a null value for the loss. Is this a truly desirable behaviour?\nYes, the reviewer is right. In theory, the global optimum of our boundary loss corresponds to a negative value of the loss (when the softmax probabilities correspond to a non-empty foreground). However, an empty foreground (almost null value of the loss) corresponds to low gradients. Therefore, this trivial solution is close to a local minimum or a saddle point. This is the reason why we use our boundary loss in conjunction with a regional loss, to guide training at the first epochs and avoid getting stuck in such a trivial solution (in fact, the scheduling method we introduced increases the weight of the boundary loss during training, with the boundary loss becoming dominant at the end of training). On a side note, this behaviour of boundary terms is conceptually similar to the behaviour of classical and popular contour-based energies for level set segmentation (e.g., geodesic active contours), which also require additional regional terms to avoid trivial solutions (i.e., empty foreground regions). \n\n> - Could you please confirm that in the validation set for the WMH challenge, elements from all three scanners were used? Could you give the range of lesion load in these cases? Was it chosen to reflect the existing distribution?\nBoth the training and validation folds contained patients from the three scanners, and the percentage of lesions in the validation fold ranges from 0.01% to 0.43% of voxels, with a mean of 0.25% across patients, a std of 0.32% and a median of 0.04%. Patients were chosen randomly from the 50 provided annotated volumes. Future work will evaluate against the testing set of the challenge, which will allow us to compare to other state-of-the-art methods for this particular application.\n\n> - Although the choice of 2D may seem reasonable for data with highly anisotropic resolution as in the ISLES challenge, this choice is more questionable in the WMH challenge where data is 3D. Moreover, the objects to segment being volumetric, the experiment would be much more interesting when extended to three dimensions.\nWe will add 3D evaluations for WMH in a journal extension. We chose to keep the same basic 2D setting for both datasets to draw conclusions as to the effect of adding our boundary loss. \n\n> - To complement the experiments, it would be interesting to observe the behaviour of the boundary loss alone and of training with a fixed weight.\nFor completeness, we will add results describing the behaviour of the boundary loss alone. 
As discussed above, we believe a regional loss guides the learning at the beginning (first epochs) to avoid trivial solutions (such as empty foreground regions). Please note that our scheduling method increases the weight of the boundary loss during training, with the boundary loss becoming very dominant at the end of training (almost acting alone).\n"}, {"title": "AnonReviewer3 part 2", "comment": "> - Sec.3.2) Why do you need to start with the regional loss term and then slowly build up to the boundary term?\n> - Sec.3.3) I am now quite interested in the performance of {\\cal L}_{B} in isolation. Why did you not report that?\nFor completeness, we will add results describing the behaviour of the boundary loss alone, which does not yield the same competitive results as a joint loss (i.e., boundary and region). We believe that this is due to the following technical facts, which we will discuss in a revision of the paper. In theory, the global optimum of our boundary loss corresponds to a negative value (when the softmax probabilities correspond to a non-empty foreground). However, an empty foreground (null values of the softmax probabilities almost everywhere) corresponds to low gradients. Therefore, this trivial solution is close to a local minimum or a saddle point. This is the reason why we use our boundary loss in conjunction with a regional loss: the regional loss guides training during the first epochs and avoids getting stuck in such a trivial solution. Our scheduling method increases the weight of the boundary loss during training, with the boundary loss becoming very dominant (almost acting alone) towards the end of the training process. Also, it will be interesting to examine careful initialisations for the boundary loss alone (without a regional loss). On a side note, this behaviour of boundary terms is conceptually similar to the behaviour of classical and popular contour-based energies for level set segmentation (e.g., geodesic active contours), which also require additional regional terms to avoid trivial solutions (i.e., empty foreground regions). \n\n\n> - Sec.3.3) You argue that the boundary loss helps to stabilize the learning process. But isn't the change in noise that you observe in Fig. 3 coming from a difference in scaling in the loss terms? That is, if the scale of the boundary loss is smaller than that of the regional loss, and you're gradually shifting towards the boundary loss, then I would expect smoother curves over time.\nThe scale of the boundary loss is not smaller than that of the regional loss (due to the distance function). Hence, when shifting over time to the boundary loss, the scale of the gradient actually increases.\n\n> - Sec. 4) You say that the framework \"..can be trivially extended..\" to 3D. What would that entail? An element-wise product between the 3D pixelwise distance tensor and the prediction tensor from the network?\nYes. The formulation remains the same for 3D (\\Omega is either 2D or 3D). From an implementation point of view, we just need to compute the distance function in 3D and use it with any choice of a 3D segmentation network.\n\n> Other comments:\n> - Sec. 1) double use of the word 'common'.\n> - Sec. 2) 's' in \"Let .. denotes..\"\n> - Eq. 1) int_{p \\in \\Omega} should be int_{\\Omega}\nThey will be corrected in the final version. Thanks! \n\n> Cons: - The authors did not compare to other loss functions designed to handle imbalanced classes. 
These were mentioned in the related work section as relevant.\nWe used GDL as regional loss and compared to GDL alone (please refer to the blue curve in Fig. 3). GDL was designed for unbalanced problems (Sudre et al., 2017). For instance, in our experiments, we noticed that CE yielded a much lower performance than GDL for highly unbalanced problems like WMH, which is consistent with several recent works in unbalanced medical image segmentation, e.g., (Milletari et al., 2016; Sudre et al., 2017). \n\n"}, {"title": "AnonReviewer3 part 1", "comment": "We are pleased about the positive comments on the novelty and relevance of introducing a boundary loss. In the following, we describe our response to your comments. \n\n> - You argue that most people only consider regional losses. But what about the Hausdorff distance? That is based on the distance between boundaries of regions.\nBoundary metrics, like the Hausdorff distance, are non-differentiable. Therefore, using standard boundary metrics as losses is a non-trivial task. In fact, a typical way to define a boundary metric is to identify discrete marker points on the boundary. How to express these discrete points as differentiable functions of the regional softmax outputs of a deep network is not trivial. This is the problem that our paper addresses by using an integral approximation of the L2 distance between two contours.\n\n> - Sec. 1) \"..these regional integrals are summations over the segmentation regions of differentiable functions, each invoking..\". What are the differentiable functions here? Cross-entropy loss?\nCross-entropy, Dice or generalized Dice loss. \n> - Sec. 1) \"... [graph-based] optimization for computing gradient flows of curve evolution.\" Can you explain a bit more what this is about?\nCurve evolution/level set and PDE methods were very popular segmentation techniques (before the DL era), e.g., the Chan and Vese model (TIP\u201900). Also, discrete graph cut optimizers are very popular in computer vision for their global optimality guarantee and efficiency, e.g., the Boykov-Kolmogorov algorithm (TPAMI\u201904). Our boundary loss is inspired by Geo-Cuts (Boykov et al. ECCV\u201906), which aims at encoding curve evolution with regional integrals (so as to accommodate curve evolution/PDE with powerful graph cuts). \n \n> - Sec. 2) What do you mean by \\mathbb{R}^{2,3}? That the image can be either 2D or 3D?\nYes. \n> - Sec. 2) You defined 'I' as a training image and then didn't use it. Wouldn't it suffice to just say that Omega is your space of images?\nIntroducing \u2018I\u2019 helps to clarify the difference between the image and the spatial domain (which is reused in subsequent equations).\n> - Eq. 1) I assume the subscript B in w_B is from 'background region', i.e. B = \\Omega \\setminus G? And not 'boundary' as the subscript B in (5)?\nTrue, we have some overlap in the notation that we will need to address. Thank you for pointing that out.\n> - Sec. 2) I don't understand why you would use the notation '\\partial G'. I would read that as 'change in the foreground region'.\nWe used \\partial G to denote the boundary of region G. We borrowed this notation from classical curve evolution and PDE methods; see, for instance, the appendix of the classical region competition work by Zhu and Yuille (TPAMI\u201996). \n> - Sec. 2) Is q_{\\partial S}(.) unique? I can imagine that if \\partial G is not a circle (as in your example fig. 
2), then multiple p would map to the same point on \\partial S.\nYes, but the proof based on the integration of the distance function over the normal segment connecting a point p and its projection onto the boundary of S still holds. \n> - Sec. 2) Is the signed distance between p and z_{\\partial G}(p) Euclidean?\nYes.\n> - Sec. 2) Is the sign in the signed distance necessary to flip the sign of the area of S in the interior of G (the part \"below the x-axis of the integral\" as it were)?\nThe signed distance arises from the approximation of the L2 distance between the contours. \n> - Sec. 2) What is actually the form of the level set function \\phi_{G}? Pixel distance? \nWe used pixel distance in this case, because the pixel resolution was consistent in our 2D images. We might change it to mm distance when extending to 3D, where the spatial resolution might be different for the x, y and z axes.\n> - Sec. 2) If the 'boundary' is the sum of linear functions of s_{\\theta}(p), then is its gradient constant?\nThe gradient wrt s_\\theta is the distance map.\n> - Sec.3) Are you sure you are allowed to use ISLES and WMH for this paper? For WMH at least, there is a rule in the terms of participation that you are not allowed to use the data for scientific studies other than that of the challenge.\nThank you for pointing that out. We obtained authorization from the challenge organizers, and submitted an entry to the challenge, which will be included in the revised version of the manuscript.\n"}, {"title": "Final response", "comment": "I very much appreciate the authors' careful and thoughtful responses. \n\nThe extra explanations regarding the saddle point issue of the boundary loss and the extra experiments with the other losses elevate this work even further. I think readers will appreciate these points. \n\nThe extra scaling that the boundary loss experiences, due to the pixel distance, which then increases the effective learning rate, is the only concern of note that still remains. I would not be surprised if this made no difference (or maybe even hampered performance), but I think it's worth ruling out. Having said this, I don't think this is nearly significant enough to cause issues for presenting at MIDL, especially as there may be no straightforward way to normalize this out (although I urge the authors to consider it). \n\nIn any event, I do look forward to follow-up work and I am very pleased to be able to reaffirm my acceptance of this work. \n"}, {"title": "Answer to rebuttal", "comment": "Many thanks for this detailed answer and appropriate additional explanations.\nI am looking forward to seeing the extended journal version of this work.\n"}], "comment_replyto": ["Byx-VbQp7V", "SkxQgkLuQV", "Bkegj4W-QE", "Bkegj4W-QE", "BJxxu-F0EV", "HJlZ9WFR44"], "comment_url": ["https://openreview.net/forum?id=S1gTA5VggE&noteId=BJxxu-F0EV", "https://openreview.net/forum?id=S1gTA5VggE&noteId=HJlZ9WFR44", "https://openreview.net/forum?id=S1gTA5VggE&noteId=S1eZGNYRVV", "https://openreview.net/forum?id=S1gTA5VggE&noteId=HJgSS4YRV4", "https://openreview.net/forum?id=S1gTA5VggE&noteId=SkeHWmWbBE", "https://openreview.net/forum?id=S1gTA5VggE&noteId=rJxIxQrbSN"], "meta_review_cdate": 1551356570553, "meta_review_tcdate": 1551356570553, "meta_review_tmdate": 1551881973099, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "This work proposes a new segmentation loss which measures boundary distances via level set functions. 
In particular, this loss is meant to address segmentation label imbalances. Experimental results on the ISLES and the WMH datasets show improved segmentation performance when combining the proposed loss with the generalized Dice loss. All reviewers agree that this is nice work and that the manuscript is well written. All three reviewers recommend accept. As mentioned by the reviewers, results or a discussion regarding the use of the boundary loss by itself (and not only in combination with the generalized Dice loss) are missing and should be added to a final version. Also, if possible, it would be good to add results on the actual ISLES test set to illustrate how the proposed segmentation approach compares to other approaches for this segmentation task.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=S1gTA5VggE&noteId=HJgmizUS8N"], "decision": "Accept"}
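
Editor's note: for readers of this record, below is a minimal sketch of the boundary loss as the abstract and rebuttals above describe it -- an element-wise product of a pre-computed ground-truth distance map with the network's softmax output, whose gradient with respect to the probabilities is the (fixed) distance map. This is an illustrative assumption, not the authors' released implementation; the function names, the signed-distance construction, and the alpha schedule are all hypothetical.

import numpy as np
import torch
from scipy.ndimage import distance_transform_edt


def signed_distance_map(gt_mask: np.ndarray) -> np.ndarray:
    # Level-set function phi_G of a binary ground-truth mask (pixel units):
    # positive outside the foreground G, negative inside, zero on the boundary.
    if not gt_mask.any():  # empty foreground: no boundary to measure against
        return np.zeros(gt_mask.shape, dtype=np.float64)
    outside = distance_transform_edt(1 - gt_mask)  # distance to G for background pixels
    inside = distance_transform_edt(gt_mask)       # distance to background for foreground pixels
    return outside - inside


def boundary_loss(probs: torch.Tensor, dist_map: torch.Tensor) -> torch.Tensor:
    # A sum (here, a mean) of linear functions of the softmax probabilities;
    # the same code works in 2D or 3D, since only the tensors' shapes change.
    return (probs * dist_map).mean()


# Usage sketch with the scheduling strategy the rebuttals describe: a regional
# loss (e.g., GDL) guides the first epochs, and the boundary term's weight
# alpha grows over training (this particular schedule is an assumption):
#   alpha = min(0.01 * epoch, 0.99)
#   loss = (1 - alpha) * gdl_loss + alpha * boundary_loss(probs, dist_map)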