{"forum": "If6dqlBcI", "submission_url": "https://openreview.net/forum?id=6PSAJwPCtP", "submission_content": {"authorids": ["y.mo16@imperial.ac.uk", "shuo.wang@imperial.ac.uk", "c.dai@imperial.ac.uk", "zt215@cam.ac.uk", "w.bai@imperial.ac.uk", "y.guo@imperial.ac.uk"], "abstract": "Supervised deep learning for medical image analysis requires a large number of training samples with annotations (e.g. a class label for a classification task, or a pixel- or voxel-wise label map for a segmentation task), which are expensive and time-consuming to obtain. During the training of a deep neural network, the annotated samples are fed into the network in mini-batches, where they are often regarded as equally important. However, some of the samples may become less informative during training, as the magnitude of the gradient starts to vanish for them. Meanwhile, other samples of higher utility or hardness may be in greater demand for the training process to proceed and may deserve more exploitation. To address the challenges of expensive annotation and the loss of sample informativeness, we propose a novel training framework that adaptively selects the informative samples fed to the training process. 
To evaluate the proposed idea, we perform an experiment on a medical image dataset, IVUS, for a biophysical simulation task.", "paper_type": "methodological development", "authors": ["Yuanhan Mo", "Shuo Wang", "Chengliang Dai", "Zhongzhao Teng", "Wenjia Bai", "Yike Guo"], "track": "short paper", "keywords": ["Deep Learning", "Data Efficient", "Medical Imaging"], "title": "Suggestive Labelling for Medical Image Analysis by Adaptive Latent Space Sampling", "paperhash": "mo|suggestive_labelling_for_medical_image_analysis_by_adaptive_latent_space_sampling", "pdf": "/pdf/48192b3d13e2cc67512edf7a4f4b91202b1e0203.pdf", "_bibtex": "@misc{\nmo2020suggestive,\ntitle={Suggestive Labelling for Medical Image Analysis by Adaptive Latent Space Sampling},\nauthor={Yuanhan Mo and Shuo Wang and Chengliang Dai and Zhongzhao Teng and Wenjia Bai and Yike Guo},\nyear={2020},\nurl={https://openreview.net/forum?id=6PSAJwPCtP}\n}"}, "submission_cdate": 1579955662853, "submission_tcdate": 1579955662853, "submission_tmdate": 1587172178201, "submission_ddate": null, "review_id": ["_3fNHdHsP", "QVZCP8ITEh", "AD-FmZZD7_", "bZRM1CA6dW"], "review_url": ["https://openreview.net/forum?id=6PSAJwPCtP&noteId=_3fNHdHsP", "https://openreview.net/forum?id=6PSAJwPCtP&noteId=QVZCP8ITEh", "https://openreview.net/forum?id=6PSAJwPCtP&noteId=AD-FmZZD7_", "https://openreview.net/forum?id=6PSAJwPCtP&noteId=bZRM1CA6dW"], "review_cdate": [1584132108402, 1584124739528, 1584119900096, 1584039133344], "review_tcdate": [1584132108402, 1584124739528, 1584119900096, 1584039133344], "review_tmdate": [1585229612504, 1585229611998, 1585229611495, 1585229610921], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper83/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper83/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper83/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper83/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, 
{"replyCount": 0}], "review_replyto": ["If6dqlBcI", "If6dqlBcI", "If6dqlBcI", "If6dqlBcI"], "review_content": [{"title": "Review of Suggestive Labelling for Medical Image Analysis by Adaptive Latent Space Sampling", "review": "\nSummary:\n\nThe authors propose an active learning method using a variational auto-encoder. They assess their method by predicting structural stress within the vessel wall in intravascular ultrasound. \n\nStrengths:\n\n-The idea of navigating the latent space using the difference between the model\u2019s prediction and the ground truth is original and makes sense.\n-The article is well-written.\n\nWeaknesses:\n\n*I am unsure about the practical use of the method. The proposed method reaches its optimum performance at the same time as the baseline. You wouldn\u2019t want to consciously use a suboptimal system for medical research, would you?\n*The authors do not cite or discuss relevant literature.\n*Some details are unclear when they could easily have been added without using additional space.\n\n\nDetailed comments:\n\n\u201cExperiment was repeated for five times for plotting the mean and variance (Figure 2(b))\u201d I do not see the mean and variance in Figure 2.\n\n\u201cThe neural network model F is first trained for a couple of epochs with the current training set using a given loss function.\u201d How many epochs?\n\n\u201cpredefined size of training samples\u201d What is the size?\n\nThe authors could cite more relevant literature instead of the computer vision datasets. For example:\n\nKingma, D.P. and Welling, M., 2013. Auto-encoding variational bayes. arXiv preprint arXiv:1312.6114.\n\nBiffi, C., Oktay, O., Tarroni, G., Bai, W., De Marvao, A., Doumou, G., Rajchl, M., Bedair, R., Prasad, S., Cook, S. and O\u2019Regan, D., 2018, September. Learning interpretable anatomical features through deep generative models: Application to cardiac remodeling. In International Conference on Medical Image Computing and Computer-Assisted Intervention (pp. 
464-471). Springer, Cham.\n", "rating": "3: Weak accept", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"title": "Review", "review": "This paper tackles an important question in machine learning on the informative and efficient sampling of training data. It is somewhat similar to a combination of active learning and hard negative mining; however, the paper does not justify how the proposed method differs from them. In fact, the two areas are not mentioned at all.\n\npros:\n+ the latent space sampling is interesting, which I believe is a promising direction, although this is not justified in the paper.\n\ncons:\n- the result shown in Figure 2(b) is not convincing to me, since both methods plateau quickly with only ~32 samples. The proposed approach has a very small 'effective window', which could diminish its impact.\n- no comparison with other hard negative mining methods is provided.", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Interesting idea; poor experimental results.", "review": "The paper presents an interesting idea, which holds greater value in medical imaging as motivated in the paper. I am not sure how much evaluation is expected in a short paper, but in my opinion, the paper lacks enough experimental evidence to support the key hypothesis. Moreover, as shown in Fig. 2b, the improvement of the proposed method over the baseline is not clear, or not presented clearly. \n\nAlso, can the authors present more examples of generated samples? \n\nIt is understood that the current paper is more about the idea, but the current model relies on the generative capacity of the model, and VAEs are well known for producing poor samples. The authors are encouraged to consider [1] to improve sample quality, or to discuss the effect of sample quality in the current work. 
\n\n\n[1] Diagnosing and Enhancing VAE Models [Dai and Wipf, 2019]", "rating": "2: Weak reject", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct"}, {"title": "Suggestive Labeling for Medical Image Analysis by Adaptive Latent Space Sampling", "review": "Brief summary: \nA VAE is used to encode the raw image into a feature representation in a latent space, and annotation suggestion is performed in this latent space. The supervised training loss provides gradients that reach the latent space, and these gradients are used to select the next batch of images for annotation. \n\nQuality: Below average;\nClarity: Average;\nOriginality: New to me.\nSignificance: In terms of the experimental results, the improvement is not significant. The proposed method is compared only to a simple random selection method; supposedly, one should get much better results when comparing to a random selection method.\n\nPros: interesting idea, interesting topic.\nCons: (1) Lack of comparisons with state-of-the-art annotation suggestion methods. (2) The proposed method relies on gradient feedback from the supervised training loss for new sample selection. Such feedback can only give local movements in the latent space, so the selected samples might not be the most effective ones for the active learning task. (3) A newly sampled point in the latent space may not correspond to a valid image sample after applying the decoder to it; namely, the decoder can give you noisy \"image samples\". 
(4) No strong justification that annotation suggestion should be done in the proposed way.\n", "rating": "1: Strong reject", "confidence": "5: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}], "comment_id": [], "comment_cdate": [], "comment_tcdate": [], "comment_tmdate": [], "comment_readers": [], "comment_writers": [], "comment_reply_content": [], "comment_content": [], "comment_replyto": [], "comment_url": [], "meta_review_cdate": 1586175385146, "meta_review_tcdate": 1586175385146, "meta_review_tmdate": 1586175385146, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper83 by AreaChair1", "meta_review_metareview": "This short paper proposes an algorithm to select maximally useful batches for neural network training, based on the magnitude of the gradient as evaluated in a VAE latent space.\n\nWhile the idea is interesting and appreciated by the reviewers, the method does not appear to outperform random sampling, and the authors also do not compare to other label-efficient methods such as active learning.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper83/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=6PSAJwPCtP&noteId=Xs2TvorH_Fp"], "decision": "reject"}