{"forum": "HyxX96_xeN", "submission_url": "https://openreview.net/forum?id=HyxX96_xeN", "submission_content": {"title": "Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning", "authors": ["Maxime W Lafarge", "Juan C Caicedo", "Anne E Carpenter", "Josien PW Pluim", "Shantanu Singh", "Mitko Veta"], "authorids": ["m.w.lafarge@tue.nl", "jcaicedo@broadinstitute.org", "anne@broadinstitute.org", "j.pluim@tue.nl", "shsingh@broadinstitute.org", "m.veta@tue.nl"], "keywords": [], "abstract": "We propose a novel variational autoencoder (VAE) framework for learning representations of cell images for the domain of image-based profiling, important for new therapeutic discovery. Previously, generative adversarial network-based (GAN) approaches were proposed to enable biologists to visualize structural variations in cells that drive differences in populations. However, while the images were realistic, they did not provide direct reconstructions from representations, and their performance in downstream analysis was poor.\n\nWe address these limitations in our approach by adding an adversarial-driven similarity constraint applied to the standard VAE framework, and a progressive training procedure that allows higher quality reconstructions than standard VAEs. The proposed models improve classification accuracy by 22% (to 90%) compared to the best reported GAN model, making it competitive with other models that have higher quality representations, but lack the ability to synthesize images. This provides researchers a new tool to match cellular fingerprints effectively, and also to gain better insight into cellular structure variations that are driving differences between populations of cells.", "pdf": "/pdf/c0c1dc328c4c33bbf8587e8194c7168290f813f2.pdf", "code of conduct": "I have read and accept the code of conduct.", "remove if rejected": "(optional) Remove submission if paper is rejected.", "paperhash": "lafarge|capturing_singlecell_phenotypic_variation_via_unsupervised_representation_learning", "_bibtex": "@inproceedings{lafarge:MIDLFull2019a,\ntitle={Capturing Single-Cell Phenotypic Variation via Unsupervised Representation Learning},\nauthor={Lafarge, Maxime W and Caicedo, Juan C and Carpenter, Anne E and Pluim, Josien PW and Singh, Shantanu and Veta, Mitko},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=HyxX96_xeN},\nabstract={We propose a novel variational autoencoder (VAE) framework for learning representations of cell images for the domain of image-based profiling, important for new therapeutic discovery. Previously, generative adversarial network-based (GAN) approaches were proposed to enable biologists to visualize structural variations in cells that drive differences in populations. However, while the images were realistic, they did not provide direct reconstructions from representations, and their performance in downstream analysis was poor.\n\nWe address these limitations in our approach by adding an adversarial-driven similarity constraint applied to the standard VAE framework, and a progressive training procedure that allows higher quality reconstructions than standard VAEs. 
The proposed models improve classification accuracy by 22{\\%} (to 90{\\%}) compared to the best reported GAN model, making it competitive with other models that have higher quality representations, but lack the ability to synthesize images. This provides researchers a new tool to match cellular fingerprints effectively, and also to gain better insight into cellular structure variations that are driving differences between populations of cells.},\n}"}, "submission_cdate": 1544748426812, "submission_tcdate": 1544748426812, "submission_tmdate": 1561398229014, "submission_ddate": null, "review_id": ["HJxRTet37E", "B1g0M37cm4", "BJgZAhC4mE"], "review_url": ["https://openreview.net/forum?id=HyxX96_xeN&noteId=HJxRTet37E", "https://openreview.net/forum?id=HyxX96_xeN&noteId=B1g0M37cm4", "https://openreview.net/forum?id=HyxX96_xeN&noteId=BJgZAhC4mE"], "review_cdate": [1548681414461, 1548528661544, 1548180680954], "review_tcdate": [1548681414461, 1548528661544, 1548180680954], "review_tmdate": [1548856754966, 1548856735746, 1548856718225], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper121/AnonReviewer3"], ["MIDL.io/2019/Conference/Paper121/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper121/AnonReviewer1"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HyxX96_xeN", "HyxX96_xeN", "HyxX96_xeN"], "review_content": [{"pros": "The authors present an unsupervised approach that combines variational auto-encoders (VAE) to get a representation of the input image, with a convolutional neural network discriminator that evaluates the obtained representations. Through this approach, the authors aim to address the following points:\n- Obtaining an accurate latent space of all the images (embeddings) that allows for detecting the mechanisms-of-action (MOA) of the chemical used to treat cells.\n- A model that provides an accurate reconstruction of each image.\n\nThe main difference between the work of Larsen et al. (2016) and this one is the definition of the loss functions: instead of integrating the loss function of a GAN into the VAE loss function, as done by Larsen et al. (2016), the loss of the VAE and the loss of the discriminator are combined in a way that they complement each other. \n\nWith the proposed approach, the authors have obtained a good balance between both tasks: accurate detection of MOA (even if it does not outperform the results of Ando et al., 2017) and realistic reconstruction of images which are more accurate than the ones obtained when using GANs.\n\nBesides, the manuscript provides a precise and up-to-date review of state-of-the-art methods, which are taken into account in the proposed methodology. Most of the text is written in a clear way: the problem to solve is well illustrated, each of the procedures followed in this work is either described in detail or cited properly, and the results are exposed concisely.\n", "cons": "While both LVAE and LDi are defined, I miss a final complete expression in which it can be seen how both functions are combined.\n\n\"We conjecture that the reconstruction term in LVAE should not be discarded and that the additional losses LDi can be all used to compensate the limited reconstruction ability induced by LVAE, as opposed to the formulation of Larsen et al.\" The method of Larsen et al. 
(2016) was evaluated in a different field (reconstruction of human faces), so, to support the statement made by the authors, I would recommend that they compare both methods in future work.\n\nSection 3.4. The number of convolutional layers used in each part of the network is not clear. Please specify it either in the text or in Figure 1 so the method can be reproducible. For instance, it is written \"All three CNNs have four convolution layers\"; did you mean \"all the CNNs\", or, on the contrary, are you referring to the encoder, the decoder and the discriminator? How many filters of size 5x5 do you have in each convolutional layer? Do you use zero padding? \n\n\"Images were randomly shuffled and presented to experts to assess whether each cell was real or synthetic.\" Are the biologists told whether the cells are treated or not, and which is the treatment in each case? Would this affect their classification?\n\nReferences. Please review all the references and make sure that all of them are correctly written:\n- Claire McQuin, Allen Goodman, Vasiliy Chernyshev, Lee Kamentsky, Beth A Cimini, Kyle W Karhohs, Minh Doan, Liya Ding, Susanne M Rafelski, Derek Thirstrup, and Others. --> ... et al. instead of and Others,\n- arXiv and bioRxiv references: specify the version you are referring to and always indicate the name of the journal or site (in most cases it is missing)\n\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The paper is tackling an interesting problem and I also share the belief that imaging cell variations holds potential to learn representations which are predictive of function.\n\nThe motivation of this work is to have a method which is able to visually represent cells with high fidelity while also having a latent representation which captures information regarding the impact of being treated with a compound.\n\nIn Section 4.3.2 the paper discusses studying the difference in reconstructions by the AE vs. the VAE. This is very interesting. These differences should be studied further, as they could provide insight into what is different about the models and what image features are captured, given what the models are designed to capture.\n", "cons": "The method does not perform better than the existing literature on learning unsupervised representations that are predictive of the compounds used. The main baseline in this paper is \"the best reported GAN model\" and not the non-GAN methods by Singh and Ando, which are SOTA for this task. This is potentially misleading.\n\nGiven the motivation, it is not clear why a new method, VAE+, is proposed without sufficient evaluation of existing work. The most needed baseline is ALI (https://arxiv.org/abs/1606.00704), which uses an adversarial loss to learn a latent space with a Gaussian prior. Also, InfoGAN (https://arxiv.org/abs/1606.03657) is another baseline to try and report results on.\n\nAlso, the results from the previous GAN method are not compared in Table 1. It is important to put these numbers side by side given the same evaluation.\n\nAlso, the evaluation does not report variance. The evaluation should include a randomized train/valid/test split selection together with random model initializations. Given the current evaluation, there is no guarantee that the VAE+ model improvements are significantly better. 
There is no way to compute a p-value.\n\nIf the VAE+ model offers a significant improvement, a better venue is ICLR/NeurIPS/ICML, with evaluations on multiple datasets to confirm that the method works. \n\n", "rating": "2: reject", "confidence": "3: The reviewer is absolutely certain that the evaluation is correct and very familiar with the relevant literature"}, {"pros": "1) Proposed a VAE-based method to learn representations of cell images for cell profiling with an adversarial similarity constraint and a progressive training procedure; the proposed method explains more biological phenotype variations and achieved better performance compared to current methods based on generative models in the downstream task of classification;\n2) modified the loss function of the original VAEGAN and applied an adversarial loss at multiple layers of the discriminator for more realistic reconstruction results; the idea of progressive training is novel.", "cons": "1) The authors compared the proposed method with AE and VAE, but VAEGAN, cited as Larsen et al. (2016) in the paper, is also a related method and should be compared with;\n2) VAE was also evaluated in the work of cytoGAN, where it achieved 49% NSC, while the VAE in this paper achieved 82.5% NSC; are the network architectures, experimental settings, etc. different in this paper?", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["rylu9yMi44", "r1lKb-ziNV", "BklPo-fiEN", "Bkg4d9fjNV"], "comment_cdate": [1549635472295, 1549635841029, 1549635999142, 1549638251902], "comment_tcdate": [1549635472295, 1549635841029, 1549635999142, 1549638251902], "comment_tmdate": [1555946010664, 1555946010450, 1555946010227, 1555946009083], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper121/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper121/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper121/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper121/AnonReviewer3", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Reply", "comment": "We would like to thank the reviewer for their comprehensive review and suggestions.\n\n> \u201cWhile both LVAE and LDi are defined, I miss a final complete expression \u2026\u201d\n Briefly, the full objective for the encoder and decoder optimization is L_full = L_VAE(theta, phi) + L_D(theta, phi, chi). We agree this will clarify the formulation; we will include it in the revised manuscript. \n\n> \u201cThe method of Larsen et al. (2016) was evaluated in a different field\u2026so ... I would recommend that they compare both methods in future work\u201d\n We agree that extending to methods that involve prior sampling (such as Larsen et al.) would be a potentially valuable extension (and that would include comparisons to the original VAEGAN model and models such as Adversarially Learned Inference). We will add such a statement in the conclusion.\n\n> \u201cSection 3.4. The number of convolutional layers used in each part of the network is not clear...\u201d\n Indeed, by \u201call three CNNs [...]\u201d we meant \u201cThe encoder, decoder and discriminator CNNs [...]\u201d. We use 32 filters of size 5x5 in each convolutional layer. We do not use zero padding. We will clarify these points in the main text. 
Additionally, to share all the implementation details and to ensure reproducibility, we are working on a clear and standalone version of the code; we will include a link in the revised manuscript.\n\n> \u201cAre the biologists told whether the cells are treated or not \u2026\u201d\n To clarify the real/synthetic assessment test with experts: the cells selected for this experiment were balanced across the available treatments, including controls, and the biologists were blinded w.r.t. treatment/no treatment and treatment type, although, indeed, being told that the cells could be treated or untreated is useful information for their assessment. We will clarify this in the revised manuscript.\n\n> \u201cPlease review all the references \u2026\u201d\n We thank the reviewer for noticing the mistakes in the references; we will apply these corrections."}, {"title": "Reply", "comment": "We thank the reviewer for their work and important remarks.\n\n> \u201cThe method does not perform better \u2026\u201d\n We will include results from Singh et al. and Ando et al. in Table 1 (we already refer to them in the text in Section 4.3.1). We will highlight the key fact that, being based on engineered features and transfer learning, respectively, they do not provide visualization capabilities, although they do have a greater ability to classify the mechanism of action of compounds. \n\n> \u201cGiven the motivation it is not clear why a new method VAE+ is proposed without \u2026\u201d\n Our method is an extension of the model proposed by Larsen et al. (2016), which already resembles the principles of the ALI model, using an adversarial loss and a latent space with a Gaussian prior. The main difference between our model and the ALI model is the restrictions used for the discriminator during training. Investigating these differences is an interesting research direction that we now include in our future work.\n\n The motivation for using the proposed model is that we want to encode single-cell features and simultaneously have the ability to generate realistic samples conditioned on variations of these features. Our proposed model and ALI are plausible solutions for this task. \n\n Please note that our contribution here was introducing this family of methods (we chose one member of the family) to the problem of image-based profiling \u2013 which has not been done before \u2013 rather than proposing a new method, or exhaustively comparing the performance of all relevant methods.\n\n\n> \"Also, the results from the previous GAN method are not compared in Table 1.\"\n We agree that it is useful to add this information to the table (we have already included it in the main text in Section 4.3.1); we will do so in the revised manuscript.\n\n> \u201cAlso, the evaluation does not report variance\u2026\u201d\n We agree that statistical confidence in the performances of the compared methods should be addressed. We are currently re-training the models several times to obtain the variance of the performance based on the training/initialization randomness. 
However, we cannot ensure that we will be able to obtain a sufficiently large sample before the deadline of the rebuttal period, but we will be happy to include the results in the revised manuscript.\n"}, {"title": "Reply", "comment": "We are grateful to the reviewer for their thorough assessment of our work and their precise remarks.\n\n> \u201cthe authors compared the proposed method with AE and VAE, but \u2026\u201d\n We agree that extending to methods that involve prior sampling (such as Larsen et al.) would be a potentially valuable extension (and that would include comparisons to the original VAEGAN model and models such as Adversarially Learned Inference). We will add such a statement in the conclusion.\n\n As mentioned above, please note that our contribution here was introducing this family of methods (we chose one member of the family) to the problem of image-based profiling \u2013 which has not been done before \u2013 rather than proposing a new method, or exhaustively comparing the performance of all relevant methods.\n\n\n> \u201cVAE was also evaluated in the work of cytoGAN,...\u201d\n Regarding the difference from the previously reported VAE performances: indeed, the model architecture and training procedure drive this. As mentioned above, to share all the implementation details and to ensure reproducibility, we are working on a clear and standalone version of the code; we will include a link in the revised manuscript."}, {"title": "Reply to authors", "comment": "I thank the authors for their answers to the comments.\n\nI agree with all their comments and I hope they implement all the changes they have mentioned. "}], "comment_replyto": ["HJxRTet37E", "B1g0M37cm4", "BJgZAhC4mE", "rylu9yMi44"], "comment_url": ["https://openreview.net/forum?id=HyxX96_xeN&noteId=rylu9yMi44", "https://openreview.net/forum?id=HyxX96_xeN&noteId=r1lKb-ziNV", "https://openreview.net/forum?id=HyxX96_xeN&noteId=BklPo-fiEN", "https://openreview.net/forum?id=HyxX96_xeN&noteId=Bkg4d9fjNV"], "meta_review_cdate": 1551356600025, "meta_review_tcdate": 1551356600025, "meta_review_tmdate": 1551881983683, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "The motivation of the manuscript is to develop a method able to visually represent cells with high fidelity while also having an accurate latent space of all the images (embeddings) that allows for detecting the mechanisms of action of the chemical used to treat cells. A variational autoencoder (VAE) method is proposed to learn representations of cell images for cell profiling with an adversarial similarity constraint and a progressive training procedure. \n\nThe problem tackled in the manuscript is interesting, as imaging cell variations holds potential to learn representations which are predictive of function.\n\nThe main difference between the work of Larsen et al. (2016) and this one is the definition of the loss functions: instead of integrating the loss function of a generative adversarial network (GAN) into the VAE loss function, the loss of the VAE and the loss of the discriminator are combined in a way that they complement each other. \n\n\nPros:\n- The work is well motivated: the motivation for using the proposed model is that they want to encode single-cell features and simultaneously have the ability to generate realistic samples conditioned on variations of these features. 
\n- They have modified the loss function of the original VAEGAN and they have applied an adversarial loss at multiple layers of the discriminator to obtain more realistic reconstruction results. The idea of progressive training is novel. \n- In the manuscript the differences in the reconstructions obtained by the AE vs. the VAE are discussed. This is interesting, as they could provide insight into what is different about the models and what image features are captured. \n- With the proposed approach, the authors have obtained a good balance between both tasks: accurate detection of the mechanisms of action of the chemical used to treat the cells, and realistic reconstruction of images which are more accurate than the ones obtained when using GANs. \n- The manuscript provides an updated review of state-of-the-art methods, which are taken into account in the proposed methodology. Most of the text is written in a clear way; the problem to solve is well illustrated, each of the procedures followed in this work is either described in detail or cited properly, and the results are exposed concisely. \n- They are working on a clear and standalone version of the code, and a link will be included in the revised manuscript. \n\n\nCons:\n- While both LVAE and LDi are defined, a final complete expression in which it can be seen how both functions are combined is missing. The authors will clarify this point in the camera-ready manuscript. \n- The authors compared the proposed method with AE and VAE, but VAEGAN, cited as Larsen et al. (2016) in the paper, is also a related method and should be compared with. The authors agree that extending to methods that involve prior sampling would be a potentially valuable extension. They plan to add a statement in the conclusion, although it seems that they do not plan to compare their approach with VAEGAN. \n- The proposed approach does not perform better than other methods reported in the literature (Singh et al. and Ando et al.). The authors propose to include the results from these methods in the evaluation. They will also highlight the fact that, being based on engineered features and transfer learning, respectively, these methods do not provide visualization capabilities, although they do have a greater ability to classify the compounds' mechanism of action. \n- The evaluation does not report variance. The authors are currently re-training the models several times to obtain the variance of the performance based on the training/initialization randomness.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HyxX96_xeN&noteId=SJlx6GUSUE"], "decision": "Accept"}
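
The combined objective stated in the first author reply can be rendered in LaTeX for readability. Theta, phi and chi denote the encoder, decoder and discriminator parameters, as in the reply; the explicit sum over per-layer discriminator losses L_{D_i} is an assumption inferred from the reviews' mention of "the additional losses LDi" and of an adversarial loss applied at multiple layers of the discriminator, not a formulation confirmed by the paper itself.

% Sketch of the full objective from the author reply; the sum over
% discriminator layers i is an editorial assumption, not from the paper.
\begin{equation}
  \mathcal{L}_{\mathrm{full}}(\theta, \phi)
    = \mathcal{L}_{\mathrm{VAE}}(\theta, \phi)
    + \sum_{i} \mathcal{L}_{D_i}(\theta, \phi, \chi)
\end{equation}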
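
The architecture details confirmed in the author replies (the encoder, decoder and discriminator are each CNNs with four convolution layers, 32 filters of size 5x5 per layer, and no zero padding) can be sketched as below. This is a minimal PyTorch sketch under stated assumptions: the pooling placement, activation, input size (68x68 RGB crops) and latent dimensionality (64) are illustrative guesses that do not appear in this record.

import torch
from torch import nn

# Hypothetical encoder following the author replies: four convolution
# layers, each with 32 filters of size 5x5 and no zero padding.
class Encoder(nn.Module):
    def __init__(self, in_channels: int = 3, latent_dim: int = 64):
        super().__init__()
        layers = []
        channels = in_channels
        for i in range(4):  # four convolution layers, as stated by the authors
            layers.append(nn.Conv2d(channels, 32, kernel_size=5, padding=0))
            layers.append(nn.LeakyReLU(0.1))
            if i < 3:  # downsampling between blocks is an assumption
                layers.append(nn.AvgPool2d(2))
            channels = 32
        self.features = nn.Sequential(*layers)
        # Heads for the VAE posterior parameters; LazyLinear infers the
        # flattened feature size on the first forward pass.
        self.fc_mu = nn.LazyLinear(latent_dim)
        self.fc_logvar = nn.LazyLinear(latent_dim)

    def forward(self, x: torch.Tensor):
        h = self.features(x).flatten(1)
        return self.fc_mu(h), self.fc_logvar(h)

# Shape check on a batch of two assumed 68x68 RGB cell crops.
mu, logvar = Encoder()(torch.randn(2, 3, 68, 68))
assert mu.shape == (2, 64) and logvar.shape == (2, 64)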
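
The reviews credit the paper with applying the adversarial similarity constraint at multiple layers of the discriminator. One plausible reading, sketched here as a hypothesis rather than the paper's actual formulation, is a feature-matching penalty between a real image and its reconstruction computed at several discriminator depths; the function name and the list-of-tensors interface are invented for illustration.

import torch

def multilayer_feature_loss(feats_real, feats_recon):
    # feats_real / feats_recon: lists of discriminator activations, one
    # tensor per chosen layer, for a real image and its reconstruction.
    # Sums the mean-squared distance over the selected layers -- a guess
    # at the "losses LDi" mentioned in the first review.
    return sum(torch.mean((r - f) ** 2) for r, f in zip(feats_real, feats_recon))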