{"forum": "HJxrNvv0JN", "submission_url": "https://openreview.net/forum?id=HJxrNvv0JN", "submission_content": {"title": "Deep Reinforcement Learning for Subpixel Neural Tracking", "authors": ["Tianhong Dai", "Magda Dubois", "Kai Arulkumaran", "Jonathan Campbell", "Cher Bass", "Benjamin Billot", "Fatmatulzehra Uslu", "Vincenzo de Paola", "Claudia Clopath", "Anil Anthony Bharath"], "authorids": ["tianhong.dai15@imperial.ac.uk", "magda.dubois.18@ucl.ac.uk", "kailash.arulkumaran13@imperial.ac.uk", "jonathan.campbell13@imperial.ac.uk", "c.bass14@imperial.ac.uk", "benjamin.billot.18@ucl.ac.uk", "f.uslu13@imperial.ac.uk", "vincenzo.depaola@csc.mrc.ac.uk", "c.clopath@imperial.ac.uk", "a.bharath@imperial.ac.uk"], "keywords": ["tracking", "tracing", "neuron", "axon", "reinforcement learning", "transfer learning"], "TL;DR": "Formulating tracing biological structures as an RL problem and using DRL to track synthetic axons, generalising to real data", "abstract": "Automatically tracing elongated structures, such as axons and blood vessels, is a challenging problem in the field of biomedical imaging, but one with many downstream applications. Real, labelled data is sparse, and existing algorithms either lack robustness to different datasets, or otherwise require significant manual tuning. Here, we instead learn a tracking algorithm in a synthetic environment, and apply it to tracing axons. To do so, we formulate tracking as a reinforcement learning problem, and apply deep reinforcement learning techniques with a continuous action space to learn how to track at the subpixel level. We train our model on simple synthetic data and test it on mouse cortical two-photon microscopy images. Despite the domain gap, our model approaches the performance of a heavily engineered tracker from a standard analysis suite for neuronal microscopy. We show that fine-tuning on real data improves performance, allowing better transfer when real labelled data is available. Finally, we demonstrate that our model's uncertainty measure\u2014a feature lacking in hand-engineered trackers\u2014corresponds with how well it tracks the structure.", "pdf": "/pdf/e9c306ab71106044c5a1b97ab6d425b924df64fa.pdf", "code of conduct": "I have read and accept the code of conduct.", "paperhash": "dai|deep_reinforcement_learning_for_subpixel_neural_tracking", "_bibtex": "@inproceedings{dai:MIDLFull2019a,\ntitle={Deep Reinforcement Learning for Subpixel Neural Tracking},\nauthor={Dai, Tianhong and Dubois, Magda and Arulkumaran, Kai and Campbell, Jonathan and Bass, Cher and Billot, Benjamin and Uslu, Fatmatulzehra and Paola, Vincenzo de and Clopath, Claudia and Bharath, Anil Anthony},\nbooktitle={International Conference on Medical Imaging with Deep Learning -- Full Paper Track},\naddress={London, United Kingdom},\nyear={2019},\nmonth={08--10 Jul},\nurl={https://openreview.net/forum?id=HJxrNvv0JN},\nabstract={Automatically tracing elongated structures, such as axons and blood vessels, is a challenging problem in the field of biomedical imaging, but one with many downstream applications. Real, labelled data is sparse, and existing algorithms either lack robustness to different datasets, or otherwise require significant manual tuning. Here, we instead learn a tracking algorithm in a synthetic environment, and apply it to tracing axons. To do so, we formulate tracking as a reinforcement learning problem, and apply deep reinforcement learning techniques with a continuous action space to learn how to track at the subpixel level. 
We train our model on simple synthetic data and test it on mouse cortical two-photon microscopy images. Despite the domain gap, our model approaches the performance of a heavily engineered tracker from a standard analysis suite for neuronal microscopy. We show that fine-tuning on real data improves performance, allowing better transfer when real labelled data is available. Finally, we demonstrate that our model's uncertainty measure{\\textemdash}a feature lacking in hand-engineered trackers{\\textemdash}corresponds with how well it tracks the structure.},\n}"}, "submission_cdate": 1544611629014, "submission_tcdate": 1544611629014, "submission_tmdate": 1561398183176, "submission_ddate": null, "review_id": ["H1eomCX57E", "H1lugWKnQE", "HkxFYsPtmN"], "review_url": ["https://openreview.net/forum?id=HJxrNvv0JN&noteId=H1eomCX57E", "https://openreview.net/forum?id=HJxrNvv0JN&noteId=H1lugWKnQE", "https://openreview.net/forum?id=HJxrNvv0JN&noteId=HkxFYsPtmN"], "review_cdate": [1548529187354, 1548681455971, 1548479361297], "review_tcdate": [1548529187354, 1548681455971, 1548479361297], "review_tmdate": [1550044200984, 1548856755177, 1548856731602], "review_readers": [["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2019/Conference/Paper13/AnonReviewer2"], ["MIDL.io/2019/Conference/Paper13/AnonReviewer1"], ["MIDL.io/2019/Conference/Paper13/AnonReviewer3"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["HJxrNvv0JN", "HJxrNvv0JN", "HJxrNvv0JN"], "review_content": [{"pros": "This work proposes a reinforcement learning (RL) strategy for tracking elongated structures, specifically neural axon structures from two-photon microscopy 2D images. The paper is well structured and easy to follow. To my knowledge, the specific idea of using reinforcement learning to track elongated structures given a seed point is original, and the authors propose an implementation using a state-of-the-art RL technique utilizing deep CNNs for the path predictions of the actor and the prediction of the expected return in the value function. As such it follows recent successful ideas proposed by Mnih et al., 2015 (please note that the reference Mnih et al. misses the publication year), and adapts them to the tracking task. \n\nA strength of this work is that by using RL, the authors are able to train their tracking algorithm from an entirely synthetic dataset, and show that this trained agent is in principle able to solve the tracking task on their real-world two-photon microscopy image segmentation/tracking task, which depicts the axons of a mouse somatosensory cortex. Another strength of this work is the clear description of their algorithm (pseudocode), which makes it very likely that others can reproduce their results, as well as the fact that the authors promise to release their training data and code to the public in case of manuscript acceptance. ", "cons": "In my opinion, there are a number of stronger issues, mainly regarding the experimental evaluation, that diminish the scientific value of this work:\n\n- While the overall idea of the manuscript looks interesting, the evaluation is very much limited regarding the choice of dataset and the general applicability of the proposed original concept. The introduction is written in a spirit that claims the proposed method is able to (quote) "alleviate the need for hand-engineered trackers for different biomedical image datasets". 
This is not shown in the experiments, since a single real-world dataset containing thin, elongated structures is used, after training from a synthetic dataset that looks very similar to the kind of information expected to be visible in the real-world dataset. To evaluate the claim that the proposed method is generic, a kind of meta-strategy for learning how to track thin, elongated structures, performance on different, additional datasets would need to be shown. As it is presented in the manuscript, unfortunately only a very limited experimental evaluation is given, from which I do not get the impression that a novel concept has been developed, but solely that a very specific problem has been solved with a method which may be overly complex for the task at hand.\n\n- Reinforcement learning has been used for medical image analysis tasks, e.g. the work of Ghesu et al., PAMI 2017 and Ghesu et al., MIA 2018, where the goal of localizing landmarks by letting an agent follow appearance information in volumetric CT data until a multitude of different landmark locations is reached reminds me a lot of the work proposed in this manuscript. While I see room for further exploration of these RL-based concepts in the medical image analysis literature, it is necessary to mention these approaches in the related work section and to discuss commonalities and differences. \n\n- In the introduction, the authors argue about tracking vs. segmentation to justify their tracking approach formulated in the RL framework. I do not fully agree with their arguments. I think that by solving the segmentation problem robustly, tracking starting from a seed point - thus deriving e.g. structural information on the geometry of relevant data - would be trivial. Therefore, I think the authors miss a comparison with pure segmentation strategies for solving the task that they show in their evaluation. In recent years, there has been a lot of work on the enhancement of vascular structures using deep learning-based methods, both in 2D (retinal imaging) and in 3D. For example, DRIU (MICCAI 2016) and other works following up on it have provided state-of-the-art benchmarks for segmentation, which are able to overcome missing structures. This body of work is totally ignored by the authors. I would not consider the Vaa3D algorithm a fair, state-of-the-art comparison to show the benefits of the proposed method. In addition, the proposed method is not able to outperform the Vaa3D method on this dataset, so the question has to be asked: what are the practical implications of the proposed method? (As stated above, there are no other use cases demonstrated to show more generic applicability.)\n\n- The authors argue about subpixel accuracy; however, this is misleading. In my opinion, using the continuous outputs of the predicted actor displacement locations, which are modelled by continuous distributions, they are merely able to operate in a subpixel environment. However, from their experimental evaluation, where coverage is defined within a three-pixel radius and mean errors are slightly below 2 pixels for their method, and most importantly, the segmentation ground truth is defined on the pixel grid, I would not consider the outcome of their algorithm as having subpixel accuracy. \n\n- Another criticism of the evaluation is the fact that the authors state that they can fine-tune their method on \"a very small amount of labelled data\" to improve their performance, compared with solely training from synthetic data. 
However, in their experiment they fine-tune on three quarters (15 out of 20) of the available labelled datasets. Therefore, I would consider this conclusion incorrect.\n\nMinor issues:\n\n- I think repeating the six numerical values in Table 1 is redundant, since those are already stated in the text.\n- From the results of the methods and their discussion in the paper, it is not clear what a relevant error would be for the downstream tasks regarding the two-photon microscopy image dataset. Is Vaa3D already there, or is a higher accuracy still needed?\n- In Fig. 1 and its explanatory text, it is not clear what the start and manually labelled end points of the trackers are, and what the different colors in the middle subfigure mean. \n- In 3.1, I do not fully understand the details of the (very important) reward function: giving the negative of the base reward if an action means there is a 90 degree or greater change in direction seems to be a heuristic; please explain that in more detail.", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}, {"pros": "- This paper solves a tracing problem of thin structures by foregoing segmentation.\n- It gives more insight into the applications of Deep Reinforcement Learning (DRL) in bio-medical imaging and its related applications.\n- Results show comparable performance with existing standard software (Vaa3D).\n- The method uses a stochastic policy to measure the tracker's uncertainty via its entropy, compared to traditional trackers that do not include this measure (see the sketch below). \n", "cons": "- The authors evaluated the tracker on synthetic and microscopy datasets (Bass et al., 2017); the synthetic data is generated by simulating single axons as polynomial splines fitted to random walks in 2D space, with Gaussian noise. How can one evaluate the quality of the synthetic data? We recommend the authors include a brief discussion on the quantitative evaluation of synthetic images against real images. How does one account for bias?\n- DRL trackers are trained on 32,000 synthetic samples and validated on 1,000. The hyper-parameter tuning set is quite small. What is the impact of using varying sizes for training and validation on the 20 held-out 2D test images? Unless there is some reasoning behind using a very small set for validation, performance with (50-50) and (70-30) splits would be of value. This may answer the question of what is considered a reasonable amount of data to use in such settings. \n- Lastly, in the second stage of experiments, the authors mention the use of microscopy data (Bass et al.) for testing. With reference to the work of Bass et al., there are 20 test and 80 training samples, i.e. 100 tiff files. Is there a reason the authors did not fine-tune the tracker on the train-set (80), but rather used a k-fold method on the same test set with (15 \u2013 4) splits? Kindly clarify this issue; it would be valuable to assess the performance of the tracker after fine-tuning on the train-set. \n", "rating": "4: strong accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct", "special_issue": ["Special Issue Recommendation"], "oral_presentation": ["Consider for oral presentation"]}, {"pros": "The authors describe a method to track biological structures using deep reinforcement learning (DRL) when there is little to no training data available. In this paper, the use case is microscopy images of cortical axons in mice. 
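As a concrete reading of the entropy-based uncertainty measure highlighted in the reviews, here is a minimal sketch (an assumption about the architecture, not the authors' released code): if the policy head outputs a diagonal Gaussian over the 2D subpixel displacement, the per-step uncertainty is simply that Gaussian's closed-form entropy.

```python
# Minimal sketch of reading tracking uncertainty off a stochastic policy.
# Assumes a diagonal-Gaussian policy head over the 2D displacement; the
# names `mean` and `log_std` are hypothetical, not from the paper's code.
import torch
from torch.distributions import Normal

def step_uncertainty(mean: torch.Tensor, log_std: torch.Tensor) -> float:
    """mean, log_std: policy-head outputs of shape (2,) for a 2D step."""
    dist = Normal(mean, log_std.exp())
    # Entropy of a diagonal Gaussian factorises, so sum over action dims.
    return dist.entropy().sum().item()
```

High entropy then flags steps where the tracker is unsure, e.g. at gaps or crossings, which is exactly the per-step signal hand-engineered trackers lack.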
They employ the PPO algorithm, a state-of-the-art reinforcement learning algorithm, to learn policy and value updates using CNNs. The model is learnt in two phases: in the first, they train the networks on a large (32,000-sample) synthetic dataset, which simulates microscopy imaging, to learn hyper-parameters. This model is tested on 1,000 synthetic images and 20 microscopy images. In the second phase, they use 4-fold cross-validation on the 20 microscopy images to fine-tune the networks. They compare the results to state-of-the-art tracking software, Vaa3D. The two measures used for comparison are coverage and mean absolute error. The following are the main pros of the paper:\n1. The background description is clear and informative. They clearly distinguish tracking from segmentation, and they clearly describe the reinforcement learning methods.\n2. The authors show that a DRL model that is trained to track structures on a synthetic dataset can be generalized to real-world biological data with reasonable accuracy. The DRL model does not perform as well as Vaa3D. However, the results do show the value of synthetic datasets for training DRL models.\n3. They introduce a new metric based on the entropy of their stochastic policy, which automatically indicates the uncertainty of the tracking.\n", "cons": "1. The main issue of the paper is the re-use of the testing dataset in phase 2 to fine-tune the parameters. From the description, it seems like the 4-fold cross-validation in phase two uses the same 20 images that were used to select the best hyper-parameters in phase 1. Given that n=20 is very low, the improvement seen in phase 2 might have been due to over-fitting, since the model has already been hand-picked for this data. An alternative could be to follow through the 4-fold cross-validation in both phases 1 and 2 while holding the subsets constant. \n2. It is unclear what the authors mean when they state that this model approaches the performance of Vaa3D. The normal range of error measures such as coverage is not immediately evident. Perhaps a comparison with a baseline method would be useful to show that this model is indeed close to state-of-the-art performance.\n3. The claims made in this paper are very broad. The authors claim that this method could be extended to any tracking task. Further experiments need to be performed to ascertain this claim.\n", "rating": "3: accept", "confidence": "2: The reviewer is fairly confident that the evaluation is correct"}], "comment_id": ["Hyxynd4oE4", "SyxRGlIjN4", "BJxn5bIsNV", "Bke772H-BE"], "comment_cdate": [1549645990680, 1549651990255, 1549652372306, 1550044187079], "comment_tcdate": [1549645990680, 1549651990255, 1549652372306, 1550044187079], "comment_tmdate": [1555946006566, 1555945993819, 1555945993605, 1555945959817], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2019/Conference/Paper13/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper13/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper13/Authors", "MIDL.io/2019/Conference"], ["MIDL.io/2019/Conference/Paper13/AnonReviewer2", "MIDL.io/2019/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Response to AnonReviewer1", "comment": "Thank you for your review. 
We address your points below:\n- The evaluation of synthetic images from generative models (learned or otherwise) is in general an unsolved problem (the best test is human judgement, e.g., mean opinion scores). Fine-tuning models is what should address the bias between synthetic and real data.\n- We have not tested different splits. The training and validation data come from exactly the same generative process, but the testing data is completely different, so a larger validation set would not be more representative of the testing data in any case.\n- Only those 20 images were additionally labelled (a time-consuming process requiring an expert) with the centrelines needed for quantitatively testing our method; otherwise we would have used the entire dataset with a proper train-test split."}, {"title": "Response to AnonReviewer2", "comment": "Thank you for your review. We address your points below:\n- This is a first step towards building general trackers for biomedical images. Given that tracking in general is a difficult problem, we hope that we provide sufficient evidence that this is a viable approach. We hope to apply our method to other datasets in future work, which also requires manual centreline labelling from an expert in order to properly quantify our results. We also hope that others can improve on the results of this paper using alternative training, reward functions, or network architectures; the data will be provided to support this (and our image-by-image results are provided to encourage detailed comparisons).\n- We will cite the related work. We note that landmark detection using RL allows the agent to take any path to the landmark, while tracking requires the path to be as close as possible to the ground-truth path of the underlying structure at every point.\n- We will include an extended discussion of segmentation methods. We note that the correspondence problem is not solved by segmentation - when there is a gap (which can occur with different imaging planes, noise, etc.), segmentation should not cover the missing or occluded area, whereas tracking should. An analogous situation is tracking a given pedestrian in a crowd of people.\n- The labels for the synthetic data are not derived from segmentation masks, but actually from the underlying continuous spline data. The goal is to then transfer this level of accuracy to real data. We note that unfortunately fine-tuning on real data would be suboptimal if the labels are indeed specified at the pixel level.\n- By \"very small amount\" we refer to the absolute number of samples, not the percentage of data. This is a tiny fraction of the amount of labelled samples available in many commonly used computer vision datasets.\n- Vaa3D has much poorer performance on anisotropic 3D data - which is where we hope to apply our method in future work.\n- Our method requires a start coordinate - an endpoint - (provided by a human) to begin tracking, and we terminate the environment if the tracker goes out of bounds or reaches any other endpoint. Vaa3D requires both a start and end coordinate, which makes its task more well-defined. For further comparison we have now also calculated results for another neuron tracking algorithm (APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree; Xiao, H. & Peng, H. 
(2013)), which has significantly worse coverage, but does not require any endpoints:\nMethod: Coverage (%) / Error (px)\nAPP2: 81.81 / 0.79\u00b10.642\nDRL: 86.25 / 1.81\u00b10.124\nDRL (fine-tuned): 89.08 / 1.87\u00b10.212\nVaa3D: 92.26 / 0.88\u00b10.048\n- The negative reward for >= 90 degree turns is indeed a heuristic that helps with our tracking task. The biological structures of interest would not have such sharp turns; any sharp turns must therefore be broken down into a series of smaller steps."}, {"title": "Response to AnonReviewer3", "comment": "Thank you for your review. We address your points below:\n- We never use real data for choosing hyperparameters, only the held-out validation set of synthetic data.\n- Our method requires a start coordinate - an endpoint - (provided by a human) to begin tracking, and we terminate the environment if the tracker goes out of bounds or reaches any other endpoint. Vaa3D requires both a start and end coordinate, which makes its task more well-defined. For further comparison we have now also calculated results for another neuron tracking algorithm (APP2: automatic tracing of 3D neuron morphology based on hierarchical pruning of a gray-weighted image distance-tree; Xiao, H. & Peng, H. (2013)), which has significantly worse coverage, but does not require any endpoints:\nMethod: Coverage (%) / Error (px)\nAPP2: 81.81 / 0.79\u00b10.642\nDRL: 86.25 / 1.81\u00b10.124\nDRL (fine-tuned): 89.08 / 1.87\u00b10.212\nVaa3D: 92.26 / 0.88\u00b10.048\n- This is a first step towards building general trackers for biomedical images. Given that tracking in general is a difficult problem, we hope that we provide sufficient evidence that this is a viable approach. We hope to apply our method to other datasets in future work, which also requires manual centreline labelling from an expert in order to properly quantify our results. We also hope that others can improve on the results of this paper using alternative training, reward functions, or network architectures; the data will be provided to support this (and our image-by-image results are provided to encourage detailed comparisons)."}, {"title": "Response to rebuttal", "comment": "I think the rebuttal has clarified and improved a number of issues. Given the arguments of the other reviewers and the outlook of an improved and more extensive experimental evaluation, I switch my vote to acceptance."}], "comment_replyto": ["H1lugWKnQE", "H1eomCX57E", "HkxFYsPtmN", "SyxRGlIjN4"], "comment_url": ["https://openreview.net/forum?id=HJxrNvv0JN&noteId=Hyxynd4oE4", "https://openreview.net/forum?id=HJxrNvv0JN&noteId=SyxRGlIjN4", "https://openreview.net/forum?id=HJxrNvv0JN&noteId=BJxn5bIsNV", "https://openreview.net/forum?id=HJxrNvv0JN&noteId=Bke772H-BE"], "meta_review_cdate": 1551356585091, "meta_review_tcdate": 1551356585091, "meta_review_tmdate": 1551881979841, "meta_review_ddate ": null, "meta_review_title": "Acceptance Decision", "meta_review_metareview": "This paper investigates an interesting medical image understanding task: tracing thin structures in biomedical images. To mitigate the scarcity of labeled data, the tracing problem is formulated as a reinforcement learning problem where the agent learns to make an optimal sequence of decisions (or movements in the image space) to define the trace of the structure(s) of interest given a starting point per trace. This formulation makes \"train on synthetic data then test on real data\" possible. 
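For readers who want to reproduce the coverage / error comparison quoted in the responses above, one plausible reading of the two metrics, following the three-pixel-radius definition mentioned in the reviews, is sketched below; the paper's exact matching rule may differ.

```python
# Hypothetical implementation of the two reported metrics; the matching
# rule is an assumption based on the reviews' description, not the
# paper's evaluation code.
import numpy as np

def coverage_and_error(pred: np.ndarray, gt: np.ndarray, radius: float = 3.0):
    """pred: (N, 2) predicted trace points; gt: (M, 2) centreline points.
    Coverage: % of centreline points with a prediction within `radius` px.
    Error: mean distance from each predicted point to its nearest
    centreline point."""
    d = np.linalg.norm(gt[:, None, :] - pred[None, :, :], axis=-1)  # (M, N)
    coverage = 100.0 * (d.min(axis=1) <= radius).mean()
    error = d.min(axis=0).mean()
    return coverage, error
```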
CNNs were used to avoid hand-engineered trackers, and the entropy of the agent's stochastic policy was used as a measure of uncertainty (which is lacking in traditional methods). \n\nBased on the reviewers' comments and the respective authors' responses, the following summarizes the identified key weaknesses that should be addressed in the camera-ready, along with recommendations for future work.\n\n- More details about the quality of the synthetic data, e.g., how close its statistics are to the real data, and the reasoning behind the very small validation set (the authors' response regarding the size of the validation set is not convincing; how large or small a validation set should be ought to depend on how complex the distribution of the synthetic data is, rather than on the training and validation sets being drawn from the same generative process).\n\n- To be consistent with the main claim of the paper \"... different biomedical image datasets\", the evaluation needs to include other datasets to demonstrate the generality of the proposed approach. Given the limited availability of labeled data, qualitative results could be considered.\n\n- The related work section should cover recent reinforcement learning-based methods for image analysis tasks (e.g. landmark detection) and pure segmentation strategies for thin structures.\n\n- The paper lacks some technical details, including how the subpixel accuracy is obtained and the intuition behind the heuristics considered in defining the reward function.\n\n- Comparisons with APP2 should be added.\n\n", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2019/Conference"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=HJxrNvv0JN&noteId=BygW2MLBIE"], "decision": "Accept"}
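Finally, on the turn-angle heuristic that the meta-review asks to be detailed in the camera-ready: per the authors' response, the base reward is negated whenever consecutive steps turn by 90 degrees or more. A minimal sketch of that rule (hypothetical shaping, not the paper's actual reward code):

```python
# Sketch of the >= 90-degree turn penalty described in the rebuttal; the
# function name and signature are illustrative assumptions.
import numpy as np

def turn_penalised_reward(prev_step: np.ndarray, step: np.ndarray,
                          base_reward: float) -> float:
    """prev_step, step: consecutive 2D displacement vectors."""
    # cos(angle) <= 0 iff the turn is 90 degrees or more.
    if np.dot(prev_step, step) <= 0.0:
        return -base_reward
    return base_reward
```

Since the axons of interest do not bend that sharply within a single step, the penalty encourages the agent to decompose sharp curvature into a series of smaller steps, as the authors note in their response.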