{"forum": "9exoP7PDD3", "submission_url": "https://openreview.net/forum?id=UFnWZTbM5t", "submission_content": {"title": "Automated Labelling using an Attention model for Radiology reports of MRI scans (ALARM)", "authors": ["David Wood", "Emily Guilhem", "Antanas Montvila", "Thomas Varsavsky", "Martin Kiik", "Juveria Siddiqui", "Sina Kafiabadi", "Naveen Gadapa", "Aisha Al Busaidi", "Matt Townend", "Keena Patel", "Gareth Barker", "Sebastian Ourselin", "Jeremy Lynch", "James Cole", "Tom Booth"], "authorids": ["david.wood@kcl.ac.uk", "emily.guilhem@doctors.org.uk", "montvila.antanas@gmail.com", "thomas.varsavsky@kcl.ac.uk", "martin.kiik@kcl.ac.uk", "juveria.siddiqui1@nhs.net", "skafiabadi@nhs.net", "naveen.gadapa@nhs.net", "ayisha.albusaidi@nhs.net", "matthew.townend@wwl.nhs.uk", "keena.patel@kcl.ac.uk", "jeremy.lynch@nhs.net", "gareth.barker@kcl.ac.uk", "sebastien.ourselin@kcl.ac.uk", "james.cole@kcl.ac.uk", "thomas.booth@kcl.ac.uk"], "keywords": ["NLP", "BERT", "BioBERT", "automatic labelling"], "abstract": "Labelling large datasets for training high-capacity neural networks is a major obstacle to\nthe development of deep learning-based medical imaging applications. Here we present a\ntransformer-based network for magnetic resonance imaging (MRI) radiology report classification which automates this task by assigning image labels on the basis of free-text expert\nradiology reports. Our model\u2019s performance is comparable to that of an expert radiologist,\nand better than that of an expert physician, demonstrating the feasibility of this approach.\nWe make code available online for researchers to label their own MRI datasets for medical\nimaging applications.", "track": "full conference paper", "paperhash": "wood|automated_labelling_using_an_attention_model_for_radiology_reports_of_mri_scans_alarm", "pdf": "/pdf/306b41ef1734a089b139acb79b70fe453ac2adce.pdf", "_bibtex": "@inproceedings{\nwood2020automated,\ntitle={Automated Labelling using an Attention model for Radiology reports of {\\{}MRI{\\}} scans ({\\{}ALARM{\\}})},\nauthor={David Wood and Emily Guilhem and Antanas Montvila and Thomas Varsavsky and Martin Kiik and Juveria Siddiqui and Sina Kafiabadi and Naveen Gadapa and Aisha Al Busaidi and Matt Townend and Keena Patel and Gareth Barker and Sebastian Ourselin and Jeremy Lynch and James Cole and Tom Booth},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=UFnWZTbM5t}\n}"}, "submission_cdate": 1579955666029, "submission_tcdate": 1579955666029, "submission_tmdate": 1587172131560, "submission_ddate": null, "review_id": ["QoloUYjrPVu", "2ChNmT5JgT", "fMyEZ7F1D", "BPZNJXwv8N"], "review_url": ["https://openreview.net/forum?id=UFnWZTbM5t&noteId=QoloUYjrPVu", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=2ChNmT5JgT", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=fMyEZ7F1D", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=BPZNJXwv8N"], "review_cdate": [1584547931253, 1583944211271, 1583923166382, 1583417228528], "review_tcdate": [1584547931253, 1583944211271, 1583923166382, 1583417228528], "review_tmdate": [1585229913935, 1585229913318, 1585229912745, 1585229912222], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper89/AnonReviewer4"], ["MIDL.io/2020/Conference/Paper89/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper89/AnonReviewer1"], ["MIDL.io/2020/Conference/Paper89/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, 
{"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["9exoP7PDD3", "9exoP7PDD3", "9exoP7PDD3", "9exoP7PDD3"], "review_content": [{"title": "Very interesting application of NLP to classification of neuroradiology reports with impressive experiments", "paper_type": "both", "summary": "key ideas: In this work the authors present a very interesting application of NLP to classification of neuroradiology reports. Their contribution is to modify and \fre-tune the state-of-the-art BioBERT language model for the task of classifying radiological descriptions of historical MRI head scans into normal and abnormal as well as several subcategories . The\nclassi\fcation performance of the proposed model, Automated Labelling using an Attention model for Radiology reports (ALARM), is only marginally inferior to an experienced neuroradiologist for normal/abnormal classi\fcation.\n\nexperiments: The experiments are impressive. The data set is large, comprised of 3000 (randomly selected out of 126,556) radiology reports produced by expert neuroradiologists consisting of all adult (> 18 years old) MRI head examinations performed between 2008 and 2019 with 5-10 sentences of image interpretation. The 3000 reports were labelled by a team of neuroradiologists to generate reference standard labels. 2000 reports were independently labelled by two\nneuroradiologists for the presence or absence of any abnormality. On this coarse dataset the performance is excellent.\nAnother sub/classification different disease groups is made and here the performance is also good. \n\nsignificance: The clinical problem that the paper addresses is very important and many research units would have a direct use of this tool in order to extract clinical data for training or research purposes that is either normal or abnormal or in given sub categories.", "strengths": "- the importance of the application\n\n- the way the authors modify and \fre-tune the state-of-the-art BioBERT language model\n\n- the size of the dataset that was compiled\n\n- the performance on the normal vs abnormal task as well as the subcategory task\n", "weaknesses": "- The use of a comparison of an experienced neurologist and stroke physician vs. a neuroradiologist is somewhat strange to me; clinically speaking, either I have access to radiologists/neuroradiologists that can describe my scan or I do not. In the hospital setting, even in research, one wouldn't give scans to neurologists to describe them.\n\n- One caveat in the experiments is though that as the results a single run on the test set is given. This is common in deep learning applications, btu makes it slightly hard to guess how this would perform if trained slightly differently.", "questions_to_address_in_the_rebuttal": "- the authors should highlight their reasoning behind the use of a comparison of an experienced neurologist and stroke physician vs. a neuroradiologist\n\n- for people unfamiliar with NLP applications the section regarding the visualization of the word-level attention attention weights is a bit unclear. So darker color is more important, and there seems to be a lot of weights in the section regarding the spine. 
But why did you not exclude all the head and spine cases from the dataset as soon as you became aware of the presence of these dual examinations?\n\n- in principle, the authors broke anonymity by including a link to the main author's GitHub repo in the article: https://github.com/tomvars/sifter/ \\\nWhile this is a matter of discussion now, I highly encourage the authors to release their code and trained network for usage and further training at other sites. ", "rating": "4: Strong accept", "justification_of_rating": "The paper addresses a very important clinical issue and employs experiments with a considerable size data set of radiological reports of brain MRI scans that show a performance that is on par with a neuroradiologist.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Best Paper Award", "Oral"], "special_issue": "yes"}, {"title": "Interesting application paper on automated labelling of free-text radiology reports", "paper_type": "validation/application paper", "summary": "The paper proposes a method to automatically classify free-text radiology reports. The algorithm is built on top of a pretrained BioBERT model that converts text terms (\"tokens\") to high-dimensional representations. As a novelty in this work, an attention module is used to compute a weighted average of the high-dimensional representations of all tokens in the report. This average representation is passed to a 3-layer fully connected network to predict a label. The entire network is trained end-to-end. Several experiments are done on a dataset of 3000 labelled reports. The model is compared to a simplified version (with fixed pre-trained weights of the BioBERT model), to an existing approach, word2vec, and to humans. The obtained prediction accuracies are impressive.", "strengths": "- Clearly written manuscript.\n- Relevant and interesting application.\n- Well-designed and carefully performed evaluation experiments.\n- Results are good.\n- Thanks to the attention module, the network is quite interpretable.\n", "weaknesses": "- The experiments did not evaluate the effect of adding the custom attention module on the performance. They only report in the Method section (3.2) that it led to improved performance, but no results are shown to confirm this.\n- The class sizes for the labelled data are not reported.", "questions_to_address_in_the_rebuttal": "- Please compare the performance with and without the custom attention module and show the results.", "detailed_comments": "- From section 3.1.2, it seems that each report is considered as a single sentence. (\"Each report was split into a list of integer identifiers....... sentence separation ('SEP') tokens,.., are appended to the beginning and end of this list\"). Do I interpret this correctly? If yes, please clarify why you didn't split the report into sentences. If no, please rephrase.\n- Please list the class sizes for the labelled data.", "rating": "4: Strong accept", "justification_of_rating": "This paper presents an interesting and original application, and shows very promising results on a large dataset. The method seems well-designed, and has some incremental novelty. 
This is a good application paper.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Oral"], "special_issue": "yes"}, {"title": "Highly relevant, although not directly on imaging data", "paper_type": "both", "summary": "This work presents a method for automated labeling of radiology reports. The method uses a standard pretrained classifier (BioBERT) which is extended by a transformer-based model and a custom attention function. The task is split into two subtasks: binary and granular classification. The method's results have significantly improved over reference methods and experts.\n\n", "strengths": "This work is very important, for training automated medical image classifiers in the future without the need for manually labeling large datasets. \nThe dataset for this work is very useful and authors put a lot of effort into labelling these reports. The validation of the work is well done, results are convincing.\nPaper is well-written and structured. \nExamples of results are a valuable addition. ", "weaknesses": "Although this work is not using imaging data, this is a future application of the method. \nIt is not completely clear how the granular classification tasks are defined. For example Fazekas is a score system, is the classifier Fazekas normal yes/no, or predicting the exact score?\nNo further weaknesses. ", "rating": "4: Strong accept", "justification_of_rating": "Although this work is not using imaging data, it has a very strong connection to it and therefore I find this work highly relevant for MIDL. The validation of the work is well done, results are convincing.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Oral"], "special_issue": "no"}, {"title": "Relevant contribution to an increasingly pressing area", "paper_type": "validation/application paper", "summary": "The authors propose to and show how to turn a state-of-the-art NLP model, BioBERT, into a tool to solve a basic, but relevant text classification task for free-text radiology reports written (dictated) for head MRI exams. To this end, from a large collection of reports, a total of 3,000 reports were expert-curated into 2 classes for 2/3 of the cases, and into a multi-hot vector for five classes for the remaining 1,000 cases. \n\nThe results show an improvement over previous research, competitive with or outperforming a trained human observer on a limited set of test cases. The authors use BioBERT, an NLP tool based on BERT, to turn a report into a richer representation that can be run through a word-level attention mechanism to inspect the words BioBERT attended to most. The result of this attention module is then run through a dense NN for classification, which was also trained by the authors.\n\nThe paper shows that limited efforts and modest hardware are sufficient to yield a valuable text classifier that even allows a certain level of interpretability, and that can be used to quickly crawl large report databases.", "strengths": "The authors justify why they deviate from the proposed way of fine-tuning BERT-based models for classification, by reporting improved performance when classification is based not on the [CLS] token alone, but on the full embedded report representation, run through another self-trained attention module. 
This is a thought that seems to be justified by the success, even though no theoretical explanation or numerical validation is given.\n\nThe paper is clearly structured, consistent in the writing, and sound in its methodology. It presents a useful development, and convincing results.\nThe evaluation against the closest existing tools (though they are not based on a comparable technology) is augmented by a comparison with a trained human observer, which is not often explicitly done in this research. \nThe promised release of a text data analysis tool based on tSNE adds to the practical usefulness of the work.\n", "weaknesses": "The most significant lack, for me, is a clear description of how \n* the ground truth was established;\n* the human observer performance seen in the comparisons was assessed against this GT. I would have assumed that there is no performance difference between the ground truth (established by trained human observers, after all) and the rating of another long-trained (as the authors point out) human observer. Because there _is_ a strong difference, there must be a reason why, which I would really like to know.\n\nThe further comments may serve as suggestions for further work. Perhaps it might even be possible to include an implementation of the first point before final submission, if the authors agree.\n\nThe methodological contribution (word-level attention visualization) is not very strong, as BERT by itself is built to facilitate this type of introspection, and it has been used in many subsequent works. Also, the attention module's output is input to the 3-layer classifier network that does the actual \"judgement\", but is not explained or utilized in the explanation (or e.g. used to derive decision uncertainty, which would be a very simple addition). \n\nAlso, in my opinion, the explanations shown in the two false negative/false positive cases in particular show the major difficulty with some types of explainability mechanisms like the one presented: they might help to elucidate WHY a DNN was wrong, once you know it was, but they do not help to assess IF it was wrong. To achieve this, one way might be to assign not only attention to words, but also a certainty metric, so that the network can be trained to be less certain when it is wrong (compare e.g. Mukhoti/Gal 2018). ", "questions_to_address_in_the_rebuttal": "See the points in the \"weaknesses\" section above, please!", "detailed_comments": "* In Sec 4. l. 4, change \"preformed\" --> \"performed\"\n* There is a textual reference to Tables 1 and 2, where the word \"Tables\" is missing (Sec. 
4.1)\n* The GitHub link does not exist yet, but it gives away the origin of the contribution (breaks anonymisation)", "rating": "3: Weak accept", "justification_of_rating": "A slight lack of justification for the particular setup with a new attention module and subsequent classifier network and the unexplained strong difference between human observer and ground truth make me hope that a \"weak accept\" encourages the authors to improve the submission.\nIn this case, and if the model and training setup as well as the data annotation tool will indeed be released, I can imagine that the interest of the community might be high enough to warrant an upgrade to an oral presentation.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Poster"], "special_issue": "no"}], "comment_id": ["4jw7HrfptEQ", "3wlGOKH5Ho3", "AEPMiewR2Fp", "0fAfiT7x6Cj", "vRzaTvhbpGk", "kTCz9zeoknP", "iMLiIzjxWCz"], "comment_cdate": [1585938667520, 1585341474880, 1585339842494, 1585339764992, 1585339678737, 1585339627361, 1585339550160], "comment_tcdate": [1585938667520, 1585341474880, 1585339842494, 1585339764992, 1585339678737, 1585339627361, 1585339550160], "comment_tmdate": [1585938667520, 1585341474880, 1585339842494, 1585339764992, 1585339678737, 1585339627361, 1585339550160], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2020/Conference/Paper89/AnonReviewer3", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper89/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper89/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper89/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper89/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper89/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper89/Authors", "MIDL.io/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Thank you for clarification", "comment": "I thank the authors for the clarification. No change to initial rating."}, {"title": "Thank you for the comments, please see our reply below", "comment": "RESPONSE 1 We thank the reviewer for their comments. Regarding how the ground truth labels were established, we thank the reviewer for asking for clarification. We have made this clearer in the manuscript, adding a detailed description of how the categories are defined in the appendix (the neuroradiology granular label classification rules).\n\n- Regarding why there exists a performance difference between the ground truth (established by human observers) and that of a trained human observer, in 3.1.1 we indicate that the ground truth was generated by three neuroradiologists. Because there was initial unanimous agreement of 95.3%, 4.7% of reports had to be labelled following a consensus with all three neuroradiologists to give the ground truth (we refer to this as the \u2018reference standard\u2019 in the manuscript). Thus, the ground truth combines the expertise of several neuroradiologists. Conversely, the human observer (a fourth neuroradiologist), although trained in the same way, did not confer with these three neuroradiologists when classifying reports. As such, the performance was marginally inferior. 
(please also see the response to reviewer 1 for discussion about the use of a stroke doctor/neurologist)\n\n- Whilst this hopefully answers the question of why the performance is different, another question might be \u201cwhy did we choose to compare the algorithm to a single blinded observer in addition, rather than against the ground truth alone?\u201d It is well known in medical data classification tasks that the rate limiting step is typically labelling by clinicians. We wanted to determine whether a single experienced neuroradiologist would be suitable for such a task, rather than using a labour-intensive consensus of three experienced neuroradiologists.\n\n- Furthermore, our approach to a specific classification is similar to that used recently where the classification from a group of radiologists was the \u2018reference standard\u2019 and comparison was made to an individual radiologist, and therefore appears to be a sensible strategy (Ardila, D., Kiraly, A.P., Bharadwaj, S. et al. End-to-end lung cancer screening with three-dimensional deep learning on low-dose chest computed tomography. Nat Med 25, 954\u2013961 (2019).). However, we even improved the previous approach of determining the \u2018reference standard\u2019 for our task by using the consensus results of three radiologists rather than the \u2018average\u2019 result.\n\n- RESPONSE 2 We also thank the reviewer for highlighting some typographical errors \u2013 we have changed the manuscript to correct these. Thank you.\n\n- RESPONSE 3 Regarding the reviewer's point that 'the attention module's output is input to the 3-layer classifier network that does the actual \"judgement\", but is not explained or utilized in the explanation', we thank the reviewer for this comment. We take the reviewer's point that attention visualizations are an imperfect form of model explainability; our inclusion of a custom attention module is ultimately a result of experiments which demonstrate improved performance. On that note we also thank the reviewer for highlighting that we don't explicitly state the performance of baselines using the 'CLS' token, or a simple average word embedding. We have now updated the manuscript to include the results using these baselines - the attention module improves accuracy by around 3% on the coarse classification task. Following an influential work (Yang et al. 2016, Hierarchical attention networks for document classification), we simply visualize the attention weights to confirm that qualitatively informative words are being used by the model for classification, to help shed some light on model behavior. Ultimately, this paper is about accurate labelling of large-scale imaging datasets, which is a crucial bottleneck, rather than examining DNN decisions.\n\nWe also thank the reviewer for suggesting an avenue for further work, namely that our model might benefit from the inclusion of a certainty metric like that in Mukhoti/Gal 2018. We were not aware of this interesting work and will try to pursue this in future. \n\nWe hope that our description of why there exists a difference between the human observer performance and the ground truth, as well as our intention to make all our code and the tSNE annotation tool available to other researchers upon acceptance so that they can label their own large hospital datasets, will mean that the paper will warrant an upgrade to oral as the reviewer kindly suggested. 
We thank the reviewer again for their comments, which have helped improve our submission."}, {"title": "thank you for the comments, please see our reply below", "comment": "RESPONSE 1 We thank the reviewer for their comments. Regarding the fact that our work isn\u2019t directly on imaging data, we believe that it directly pertains to a critical bottleneck in the \u2018deep learning for image analysis\u2019 pipeline, namely the difficulty of obtaining large labelled datasets of medical images, and we thank the reviewer for highlighting the connection we are emphasising.\n\n- Regarding the definition of the granular classification categories, we thank the reviewer for asking for clarification here. We have added the neuroradiology granular label classification rules as an appendix to show how the granular classification tasks are defined. This neuroradiology label classification was refined after 6 neuroradiology meetings, each following a practice classification task of 100 reports. We have attempted to emulate the decision making of a neuroradiologist. Note that a finding that might generate a referral to a multidisciplinary meeting for clarification would be included within that category e.g. an arachnoid cyst may be ignored in clinical practice, but we included it in the \u201cmass\u201d granular category. Thus our classification is sensitive to ensure patient safety.\n\n- For the example of Fazekas that you mention, these white matter changes are classified according to Fazekas (as referenced in section 3.1.1: F. Fazekas, John Chawluk, Abass Alavi, H.I. Hurtig, and R.A. Zimmerman. MR signal abnormalities at 1.5 T in Alzheimer\u2019s dementia and normal aging. AJR American Journal of Roentgenology, 149:351\u20136, 08 1987. doi: 10.2214/ajr.149.2.351.):\n\n1. Mild\u202f- punctate WMLs: Fazekas I\n\n2. Moderate\u202f- confluent WMLs: Fazekas II\n\n3. Severe\u202f- extensive confluent WMLs: Fazekas III\n\n- To create a binary categorical variable from this system, if the report was unsure/normal or mild this would be categorized as \u201c0\u201d as this never requires treatment for cardiovascular risk factors. However, if it described moderate or severe WMLs this would be categorized as \u201c1\u201d as these cases sometimes require treatment for cardiovascular risk factors."}, {"title": "Thank you for the comments, please see our reply", "comment": "RESPONSE 1 We thank the reviewer for their comments. Regarding the performance of our model without the custom attention module, we thank the reviewer for asking for clarification about this. We have updated the manuscript with results using the CLS token \u2013 the suggested technique from the original BERT paper - as well as with a simple average of contextualized word embeddings, i.e. the attention weights for each word are equal and given by 1/sentence_length. Our attention network outperforms both by ~3%.\n\n- RESPONSE 2 Regarding the use of the \u2018SEP\u2019 token, thank you for asking for clarification about this. The reviewer is correct \u2013 the SEP token was used to separate sentences - we have rephrased the manuscript to make this clearer.\n\n- RESPONSE 3 Regarding the class sizes for the labelled data, we thank the reviewer for highlighting this. We have now included these in the manuscript."}, {"title": "response 3", "comment": "RESPONSE 3 Regarding making our code available online, we thank the reviewer for suggesting this and apologise for including a GitHub link. 
This was an oversight on our part as it didn\u2019t occur to us that this breached anonymity, but it hopefully demonstrates that we are committed to open source science. Our code, as well as the automatic labelling tool, will be made available for other researchers to label their imaging datasets upon publication."}, {"title": "Response 2", "comment": "RESPONSE 2 We thank the reviewer for their comment. Regarding the point about retaining an \u2018outlier\u2019 examination, in this case a head and a spine erroneously reported together, we thank the reviewer for asking for a clearer explanation for the rationale and we have made this clearer in the manuscript.\n\n- In section 4.1 we say that \u201csuch combined reports were very rare as the number of head and spine examinations occurring at the same time was small (less than 0.5% of our data)\u201d.\n\n- We also say \u201cbecause we extracted all head examinations from a real-world hospital CRIS, occasionally the neuroradiologist who reported the original scan decided to include a spine report in the section dedicated for head reports (as opposed to a section for the spine).\u201d\n\n- In other words, the neuroradiologist would typically report heads in the section dedicated for head reports. What happened in this case is that the neuroradiologist deviated from hospital protocol, i.e. in a tiny fraction of this 0.5% (exact percentage from 120,000 unknown).\n\n- We agree, our classification results could be improved even further by removing ALL the 0.5% of examinations where the head and spine examinations were performed concurrently, which would remove the outlier. However, our concept had always been to use ALL written head reports on an \u201cintention-to-report\u201d basis. Many models in deep learning suffer from domain shift when an in-sample hold-out test set or, more commonly, an out-of-sample hold-out test set is tested. We therefore wanted our algorithm to be as generalisable as possible by using all real-world data and had elected to use ALL head reports from a real-world hospital CRIS."}, {"title": "Thank you for the comments, please see our responses", "comment": "RESPONSE 1 We thank the reviewer for their comments. Regarding the use of a neurologist, we thank the reviewer for asking for clarification here and we have made this clearer in the manuscript. For the task of coarse labelling i.e. seeing whether a report contains an abnormality or not, we could have compared our algorithm performance to the consensus of three experienced neuroradiologists alone - which is our reference standard. It would have been sufficient to confine the performance to this comparison alone. However, the large team of clinicians working on this project wanted to additionally compare to an experienced neurologist (and stroke physician) for the following reasons.\n\n1. It is well known in medical data classification tasks that the rate limiting step is typically labelling by clinicians. We wanted to determine whether an experienced neurologist (and stroke physician) would be suitable for such a task as there are fewer neuroradiologists than neurologists or stroke physicians by a ratio of 1:4. To be clear, this doesn\u2019t mean that the neurologist would assess the scan, only the report describing the scan.\n\n2. 
In most countries including anonymous [country], the experienced neurologist (and stroke physician) orders the scan from his/her neuroradiology colleagues, and once the scan is completed, hours to weeks later interprets the report produced by the neuroradiology colleague. In most countries including anonymous [country], neurologists (and stroke physicians) have outpatient clinics (or inpatient ward rounds) and will frequently interpret the report held on the electronic patient records during the face-to-face patient consultation. This must be the interpretation of the neurologist (and stroke physician) alone because their neuroradiology colleagues are not accessible at the time of the outpatient clinic (or inpatient ward rounds).\n\n3. The performance of the algorithm can be put into clinical perspective. It is implicit that such an algorithm might also be a useful tool for an experienced neurologist (and stroke physician) to simply understand whether the report shows that the scan is normal or abnormal \u2013 after all, we have shown that it is challenging for such an expert neurologist (and stroke physician) to even determine whether the report shows that the scan is normal or abnormal.\n\n- Notes: To reduce the chance that the allocation of normal or abnormal by our experienced neuroradiologists somehow differed from what is considered normal or abnormal in the clinic by an experienced neurologist (and stroke physician), the neuroradiology team taught the neurologist (and stroke physician) a set of easy-to-follow rules to reduce any ambiguity, over a six month period in the run-up to their classification task (as mentioned in section 4).\n\n- Furthermore, our approach is similar to that used recently where experienced neuroradiologists were the \u2018reference standard\u2019 and comparison was made to someone who is not a neuroradiologist (e.g. bleed vs no bleed on CT head deep learning classification task \u2013 reference Kuo W, H\u00e4ne C, Mukherjee P, Malik J, Yuh EL. PNAS November 5, 2019 116(45) 22737-22745.). Therefore our approach appears to be a sensible strategy.\n\n- In summary, for the task of coarse labelling i.e. seeing whether a report contains an abnormality or not, we could have compared our algorithm performance to the consensus of experienced neuroradiologists alone - which is our reference standard. However, we believe it is of vital clinical importance to also compare the performance to an experienced neurologist (and stroke physician).\n\n"}], "comment_replyto": ["0fAfiT7x6Cj", "BPZNJXwv8N", "fMyEZ7F1D", "2ChNmT5JgT", "QoloUYjrPVu", "QoloUYjrPVu", "QoloUYjrPVu"], "comment_url": ["https://openreview.net/forum?id=UFnWZTbM5t&noteId=4jw7HrfptEQ", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=3wlGOKH5Ho3", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=AEPMiewR2Fp", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=0fAfiT7x6Cj", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=vRzaTvhbpGk", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=kTCz9zeoknP", "https://openreview.net/forum?id=UFnWZTbM5t&noteId=iMLiIzjxWCz"], "meta_review_cdate": 1586253833302, "meta_review_tcdate": 1586253833302, "meta_review_tmdate": 1586253833302, "meta_review_ddate ": null, "meta_review_title": "MetaReview of Paper89 by AreaChair1", "meta_review_metareview": "All reviewers recommend acceptance of the paper and the authors tried to address any remaining comments. I also think this is a topic with a lot of interest from the MIDL community. 
", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper89/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=UFnWZTbM5t&noteId=0xZNUozedz6"], "decision": "accept"}