{"forum": "0IeI8QS8N6", "submission_url": "https://openreview.net/forum?id=NsBz-JrKr", "submission_content": {"track": "full conference paper", "TL;DR": "Proposed and designed a new Laplacian pyramid-based multi-scale complex neural network learning framework for fast MR imaging.", "keywords": ["Deep learning", "complex convolution", "Laplacian pyramid decomposition"], "abstract": "A Laplacian pyramid-based complex neural network, CLP-Net, is proposed to reconstruct high-quality magnetic resonance images from undersampled k-space data. Specifically, three major contributions have been made: 1) A new framework has been proposed to explore the encouraging multi-scale properties of Laplacian pyramid decomposition; 2) A cascaded multi-scale network architecture with complex convolutions has been designed under the proposed framework; 3) Experimental validations on an open source dataset fastMRI demonstrate the encouraging properties of the proposed method in preserving image edges and fine textures.", "authors": ["Haoyun Liang", "Yu Gong", "Hoel Kervadec", "Jing Yuan", "Hairong Zheng", "Shanshan Wang"], "authorids": ["hy.liang1@siat.ac.cn", "yu.gong@siat.ac.cn", "hoel.kervadec.1@etsmtl.net", "jyuan@xidian.edu.cn", "hr.zheng@siat.ac.cn", "ss.wang@siat.ac.cn"], "paper_type": "both", "title": "Laplacian pyramid-based complex neural network learning for fast MR imaging", "paperhash": "liang|laplacian_pyramidbased_complex_neural_network_learning_for_fast_mr_imaging", "pdf": "/pdf/d54f4e3ad309df47d710a5baa22ac15bd3923549.pdf", "_bibtex": "@inproceedings{\nliang2020laplacian,\ntitle={Laplacian pyramid based complex neural network learning for fast {\\{}MR{\\}} imaging},\nauthor={Haoyun Liang and Yu Gong and Hoel Kervadec and Jing Yuan and Hairong Zheng and Shanshan Wang},\nbooktitle={Medical Imaging with Deep Learning},\nyear={2020},\nurl={https://openreview.net/forum?id=NsBz-JrKr}\n}"}, "submission_cdate": 1579955788024, "submission_tcdate": 1579955788024, "submission_tmdate": 1588177538167, "submission_ddate": null, "review_id": ["zhJVQxBp59Q", "sHGPY8r449I", "Izk7FSOeO", "M_j6ixHnuw"], "review_url": ["https://openreview.net/forum?id=NsBz-JrKr&noteId=zhJVQxBp59Q", "https://openreview.net/forum?id=NsBz-JrKr&noteId=sHGPY8r449I", "https://openreview.net/forum?id=NsBz-JrKr&noteId=Izk7FSOeO", "https://openreview.net/forum?id=NsBz-JrKr&noteId=M_j6ixHnuw"], "review_cdate": [1584653512455, 1584635059324, 1584226721694, 1583645074041], "review_tcdate": [1584653512455, 1584635059324, 1584226721694, 1583645074041], "review_tmdate": [1585229773969, 1585229773466, 1585229772966, 1585229772443], "review_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"]], "review_writers": [["MIDL.io/2020/Conference/Paper315/AnonReviewer6"], ["MIDL.io/2020/Conference/Paper315/AnonReviewer5"], ["MIDL.io/2020/Conference/Paper315/AnonReviewer3"], ["MIDL.io/2020/Conference/Paper315/AnonReviewer2"]], "review_reply_count": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "review_replyto": ["0IeI8QS8N6", "0IeI8QS8N6", "0IeI8QS8N6", "0IeI8QS8N6"], "review_content": [{"title": "Lacks critical detail about methodology", "paper_type": "validation/application paper", "summary": "The paper proposes a laplacian pyramid-based CNN for reconstruction of MR images from undersampled k-space data to accelerate MRI acquisition. The authors have demonstrated using a Laplacian pyramid-based scheme to recover undersampled k-space data and reconstruct MR images. 
Results with other state-of-the-art methods show an improvement in PSNR and SSIM on a brain MRI dataset. ", "strengths": "An approach to reconstructing undersampled MR images and accelerating MR imaging, which results in higher PSNR and SSIM on a brain MRI dataset, compared to other state-of-the-art approaches like U-Net, Cascade-Net, and PD-Net. ", "weaknesses": "The paper lacks critical details on the network architecture---the loss function used, the architecture of convolutional layers, a well-structured and well-formed figure representing the network, the cascaded structure of the proposed architecture, the datasets used---to name a few, as well as an ablation study, both on the width of the CNN and on the cascaded architecture. While the results do indeed beat state-of-the-art, I believe it is not straightforward to reproduce the results from the manuscript in its current form. \nThe pipeline also includes an inverse Fourier transform, and it is not clear whether the entire network is trained with backpropagation, and if so, how. ", "questions_to_address_in_the_rebuttal": "A more rigorous description of the overall method, with proper descriptions of the model and loss functions, is imperative. A more detailed description of the dataset is also necessary, as the methodology figure seems to use knee MRI, but results are demonstrated on brain MRI. ", "rating": "2: Weak reject", "justification_of_rating": "While the results do beat state-of-the-art, I believe the manuscript can be accepted after some critical revision in terms of the description of methodology and datasets. In my opinion, the reproducibility of a paper strengthens its results. ", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": [], "special_issue": "no"}, {"title": "Interesting approach but experiments lack conviction", "paper_type": "both", "summary": "The study presents a new framework for zero-filled MRI reconstruction. The framework is built on a two-way backbone, simultaneously processing the Laplacian pyramid decomposition of the input signal and a downsampled version of it. It is built as an end-to-end deep learning model involving complex convolutions. Experiments are performed on an in-house dataset with 2D images from 22 patients, on which the proposed approach is reported to achieve the best performance compared to state-of-the-art community approaches.", "strengths": "Substantial effort is made to formulate the studied problem. There are great visual supports, including the deep learning architecture pipeline and sample results. The authors also make an effort to explain the theoretical bases of the components used in their approach, such as complex convolutions or the complex Laplacian pyramid decomposition. Experiments report conventional measures for this task.", "weaknesses": "From an evaluation perspective, all results are derived from an in-house dataset, which makes the paper unreproducible. Most importantly, there does not seem to be any validation set, while the paper proposes a new architecture (i.e. an optimization hyper-parameter). The paper suggests that experiments are on 2D images, while competing approaches such as KIKI-Net report experiments on 3D images. If experiments are on 2D images, the authors should specify whether the training/testing split was done patient-wise or if images from a patient can be in both sets.", "questions_to_address_in_the_rebuttal": "- How have the training and testing sets been used? 
The split between training/testing should be made explicit, along with the total number of images.\n- There are public datasets suited for the task (cf. other community works), which would produce insightful, reproducible measures comparable to other reported performances. Would it be possible to perform experiments on such a public dataset, or perhaps report the generalization performance of the proposed approach?", "detailed_comments": "- figures with visual examples could be enhanced with zoomed-in regions with results for all benchmarked methods; also, both figures with samples could be merged\n- the acronym \"dc layer\" is never introduced\n- 2.3, when introducing the notation with cos and sin, \"where Theta and Omega\" should be \"|Omega|\"\n- it is not clear how the U-Net is implemented with complex inputs", "rating": "2: Weak reject", "justification_of_rating": "The major drawback of the paper is the experimental evidence from the designed setup: experiments are in 2D and on in-house data from few patients, with no validation set. Although state-of-the-art methods are reimplemented, there is no direct comparison on publicly available data.", "confidence": "3: The reviewer is fairly confident that the evaluation is correct", "recommendation": [], "special_issue": "no"}, {"title": "Office review", "paper_type": "methodological development", "summary": "This paper proposes a Laplacian pyramid-based complex neural network for fast MR imaging. The proposed deep network contains two important components: the Laplacian pyramid and the complex convolution, both of which are existing work. The authors compare the proposed network with several existing methods and show its better performance.", "strengths": "The paper tries to improve MRI reconstruction with the Laplacian pyramid and complex convolution. The paper compares the proposed method with several existing methods. The paper is well organized and presented.", "weaknesses": "1. The paper fails to cover a series of recent works on MRI reconstruction with Generative Adversarial Networks, including:\n*Shende, Priyanka, Mahesh Pawar, and Sandeep Kakde. \"A Brief Review on: MRI Images Reconstruction using GAN.\" 2019 International Conference on Communication and Signal Processing (ICCSP). IEEE, 2019.\n*Quan, Tran Minh, Thanh Nguyen-Duc, and Won-Ki Jeong. \"Compressed sensing MRI reconstruction using a generative adversarial network with a cyclic loss.\" IEEE Transactions on Medical Imaging 37.6 (2018): 1488-1497.\n\nI believe GAN-based MRI reconstruction could alleviate the blurry issue in reconstruction, but the authors have not included any reference, discussion or comparison with such methods.\n\n2. The authors do not give any ablation study of the proposed model. Why does combining the Laplacian pyramid and the complex convolution improve the performance? Which of these two components plays the more important role, and do both components improve the performance?\n", "rating": "2: Weak reject", "justification_of_rating": "The paper fails to mention a series of related GAN-based research on MR imaging. No ablation study is given to analyze the proposed model. 
Therefore, I could not give any rating beyond weak reject unless the authors improve the paper.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": [], "special_issue": "no"}, {"title": "Review for \"Laplacian pyramid based complex neural network learning for fast MR imaging\"", "paper_type": "both", "summary": "This paper proposes to learn a Laplacian pyramid-based complex neural network (CLP-Net) for high-quality image reconstruction from undersampled k-space data. The goal is to accelerate MR imaging. The experimental results on in vivo datasets show that the proposed method obtains better reconstruction performance than three state-of-the-art methods.", "strengths": "1) a new framework for MR reconstruction from undersampled k-space data has been proposed by exploring the encouraging multi-scale properties of Laplacian pyramid decomposition; \n\n2) a cascaded multiscale network architecture with complex convolution has been designed under the proposed framework; \n\n3) the experimental validations on in vivo datasets have shown the higher potential of this method in preserving edges and fine textures when compared to other state-of-the-art methods.", "weaknesses": "No notable weakness identified. \nI support this paper due to the novel cascaded multiscale network architecture using complex convolutions, and its strong performance on in vivo datasets in preserving the edges and fine textures.", "rating": "3: Weak accept", "justification_of_rating": "See above. I support this paper due to the novel cascaded multiscale network architecture using complex convolutions, and its strong performance on in vivo datasets in preserving the edges and fine textures.", "confidence": "4: The reviewer is confident but not absolutely certain that the evaluation is correct", "recommendation": ["Poster"], "special_issue": "yes"}], "comment_id": ["iB7pZesVfDJ", "AVR2VWl4O6h", "YldsSXgDMLo", "ufC2ZL2s4AG", "DLBAU_nkBh4", "ebgJ4I-CqRR", "AW0oCZ7ps2g", "KgoCfJFRjgS"], "comment_cdate": [1586201900026, 1586024802270, 1585939400200, 1585639922382, 1585534876167, 1585314092435, 1585314814271, 1585313565557], "comment_tcdate": [1586201900026, 1586024802270, 1585939400200, 1585639922382, 1585534876167, 1585314092435, 1585314814271, 1585313565557], "comment_tmdate": [1586201900026, 1586024802270, 1585939400200, 1585639922382, 1585535015844, 1585496696404, 1585496684562, 1585496247636], "comment_readers": [["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"], ["everyone"]], "comment_writers": [["MIDL.io/2020/Conference/Paper315/AnonReviewer6", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper315/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper315/AnonReviewer6", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper315/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper315/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper315/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper315/Authors", "MIDL.io/2020/Conference"], ["MIDL.io/2020/Conference/Paper315/Authors", "MIDL.io/2020/Conference"]], "comment_reply_content": [{"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}, {"replyCount": 0}], "comment_content": [{"title": "Rating change", "comment": "Thank you for your detailed response. I think it answers several of my initial concerns. 
I would suggest the authors add these explanations in some form to the manuscript to make it much clearer. \nGiven the responses of the authors, I would also like to change my rating to weak accept, leaning more towards borderline (as there does not seem to be a borderline rating). Results on more datasets, more time spent motivating the problem and the approach, and a revision of the paper's language would have helped increase this rating. \n\nPS: Please also clarify the \"no Laplacian pyramid\" version in the revised paper if accepted. "}, {"title": "We have supplemented more details.", "comment": "Many thanks for your comments. \n\n1. Shuffle-down and Shuffle-up were proposed in the paper \"Real-Time Single Image and Video Super-Resolution Using an Efficient Sub-Pixel Convolutional Neural Network\" by Shi et al. (2016). Shuffle-up rearranges elements in a tensor of shape (Batch, Channel*r*r, Height, Width) to a tensor of shape (Batch, Channel, Height*r, Width*r), where r is the upscale factor. Shuffle-down rearranges elements in a tensor of shape (Batch, Channel, Height*r, Width*r) to a tensor of shape (Batch, Channel*r*r, Height, Width), where r is the downscale factor. Our pseudocode is as follows: \n\ndef ComplexShuffleDown(inputs, factor):\n        # (B, C, H*r, W*r, 2) -> (B, C*r*r, H, W, 2)\n        batch, channel_in, height_in, width_in, complex_channel = inputs.size()\n        channel_out = channel_in * factor ** 2\n        height_out = height_in // factor\n        width_out = width_in // factor\n        output = inputs.view(batch, channel_in, height_out, factor, width_out, factor, complex_channel)\n        output = output.permute(0, 1, 5, 3, 2, 4, 6)\n        output = output.contiguous().view(batch, channel_out, height_out, width_out, complex_channel)\n        return output\n\ndef ComplexShuffleUp(inputs, factor):\n        # (B, C*r*r, H, W, 2) -> (B, C, H*r, W*r, 2)\n        batch, channel_in, height_in, width_in, complex_channel = inputs.size()\n        channel_out = channel_in // (factor ** 2)\n        height_out = height_in * factor\n        width_out = width_in * factor\n        output = inputs.view(batch, channel_out, factor, factor, height_in, width_in, complex_channel)\n        output = output.permute(0, 1, 4, 3, 5, 2, 6)\n        output = output.contiguous().view(batch, channel_out, height_out, width_out, complex_channel)\n        return output\n\n2. We didn't use the L2 loss. The L1 loss we used is a built-in function in PyTorch, so we don't need to compute it ourselves. The L1 loss is applied at the outputs of each block in the cascade; that is, the predictions and the targets are the reconstructed MR images in each cascade and the fully sampled MR images, respectively. \n\n3. The reconstruction performance of CLP-Net is the best in this paper. As we can see in Figure 4, more texture details and less noise show that CLP-Net has excellent reconstruction ability. Compared with the reconstruction results of the other networks, the tissue in the CLP-Net result is clearer, as can be seen in the zoomed ROI in Figure 4. This shows that there is no excessive noise reduction in the reconstruction process of CLP-Net. CLP-Net improves on the disadvantage of over-smoothing and provides more realistic results.\n\n4. In the CLP-Net architecture, there are two Laplacian error maps and one Gaussian map, which have different sizes. The per-pixel convolution is performed on each of them. So, in equation (6), the pixel (m,n) refers to a pixel in each of the above maps.\n\n5. 
The inverse Fourier transform is actually part of the dc layer (data consistency layer), which was proposed in the paper \"A Deep Cascade of Convolutional Neural Networks for Dynamic MR Image Reconstruction\". The whole pipeline consists of several (we set it to 5) cascaded structures, and each cascaded structure consists of a CLP-Net and a dc layer. We feed the undersampled MR images into the CLP-Net, which outputs the reconstructed MR images. The reconstructed MR images are then put into the dc layer together with the mask and the undersampled k-space data. More specifically, a Fourier transform is first applied to the input reconstructed MR images, and the result is multiplied by an inverse mask, obtained as (1 - mask). After that, we obtain the reconstructed k-space data by adding the undersampled k-space data to the result of the previous step, and we perform an inverse Fourier transform on the reconstructed k-space data to produce the final reconstructed MR images. The dc layer can preserve more real details.\n\n6. \n                      SSIM    PSNR\nLaplacian pyramid     0.633   28.19\nno Laplacian pyramid  0.613   27.33\n\ncascade   SSIM    PSNR    GFLOPS\n3         0.624   28.02   117\n4         0.628   28.13   156\n5         0.633   28.19   196\n6         0.635   28.22   235\n\nFrom the results of the ablation study, we can see that the cascaded structure can take advantage of the Laplacian pyramid several times; the reconstructed MR images become more precise each time."}, {"title": "Needs more details", "comment": "I thank the authors for their response. While the concerns regarding datasets and the ablation study have been addressed, I regret that concrete information on the pipeline, loss function, and convolutional models still hasn't been provided in the rebuttal. \n\n* Details on the shuffle down- and up-sampling could be provided here instead of in the revised manuscript. \n* The loss function is L1, but how is it computed? What are the predictions and the targets? If it is the reconstruction loss, was anything done to account for the smooth and unrealistic results generated by L1 and L2 reconstruction losses? Is the loss applied at only the final output, or at the outputs of each block in the cascade?\n\nThere are several details that are still unclear: \n* In equation (6), I_neighbour(m,n) is the neighbourhood around pixel (m,n), but pixel (m,n) in which image? \n* In the last paragraph on Page 6, the result of the convolutional operations is passed through an inverse Fourier transform layer, followed by a data consistency layer. What the data consistency layer is remains unclear. \n* Continuing with the previous concern, the conversion between image and frequency domains seems to be arbitrary. There is an inverse Fourier transform operation after each block in the cascade, but it is not clear where the conversion to the frequency domain happens.\n\nFurthermore, several modules in the network haven't been motivated enough. For example, why do we expect a cascaded architecture to improve the result in this Laplacian pyramid decomposition scheme? \n\nFinally, the language needs to be modified substantially. \n\nBased on these concerns, I am not inclined to change my final rating."}, {"title": "Thanks for the recognition and our code will be released ", "comment": "Thanks for the recognition. We will release our code after it gets accepted. "}, {"title": "Top level reply", "comment": "We thank all the reviewers for recognizing our contributions in the novel design of the network structures and the better reconstruction abilities of our proposed method. The main concerns all lie in the lack of details for reproducing the results. \nTo address this major concern, we plan to release our code publicly on GitHub. \nFurthermore, new citations, figures and experimental results have been provided to help readers better understand our methodology. \n\nThese details will be provided in the final manuscript. Thanks. "}, {"title": "Details and our code will be supplemented and new experimental results on the public dataset have been given ", "comment": "Thank you for your comments.\n1. We will supplement the data splitting details and release our code on GitHub. All our data were split patient-wise. We use the training set of the open source dataset fastMRI to train our network, the test set for validation, and the validation set for evaluation. The details of the fastMRI dataset have been added.\n2. New results on the public dataset fastMRI have been supplemented to enhance the reproducibility.\n\nindex   Zero-filled   U-net   KIKI-net   Cascade-net   k-space learning   CLP-net\nPSNR    23.23         25.28   25.95      27.74         27.78              28.19\nSSIM    0.518         0.555   0.583      0.618         0.622              0.633\n\n3. We have enhanced the results with zoomed-in regions for the visual comparison.\n4. We have provided explanations for the DC layer, namely the data consistency layer. We have also supplemented a citation for others to refer to for more details. \n5. The typo on the $|Omega|$ has been revised. \n6. The inputs and outputs are both magnitude images for the U-Net. We have added this in the revised manuscript. "}, {"title": "The ablation study and the citations on GAN-based MR imaging have been supplemented. ", "comment": "Thank you for your comments.\n1. Following your advice, we have added the citations on GAN-based MR imaging. \n2. Regarding your question of why the Laplacian pyramid and the complex convolution could improve the performance: we have supplemented the ablation study to investigate the two components' contributions to the final reconstruction. \n\n                      SSIM    PSNR\nLaplacian pyramid     0.631   28.16\nno Laplacian pyramid  0.613   27.33\n\n                      SSIM    PSNR\ncomplex convolution   0.633   28.19\nnormal convolution    0.624   27.96"}, {"title": "Details will be supplemented and the code will be released on GitHub", "comment": "Many thanks for your comments. To address your concerns, we have made the following changes. \n1. Following the advice, we will supplement the details regarding the dataset and network architecture, and release our code. We will also display the results on the knee dataset. Thanks. 
\n(1) We have added a more detailed figure of the cascade structure, the shuffle downsample and shuffle upsample, and the convolution blocks;\n(2) We redrew the figure of the network architecture;\n(3) We use the L1 loss as our loss function; \n(4) We present the results on the open source dataset fastMRI (knee) to enhance the reproducibility of our results;\n\nindex   Zero-filled   U-net   KIKI-net   Cascade-net   k-space learning   CLP-net\nPSNR    23.23         25.28   25.95      27.74         27.78              28.19\nSSIM    0.518         0.555   0.583      0.618         0.622              0.633\n\n(5) The inverse Fourier transform is embedded in the dc layer (data consistency layer), and we have added the details of the dc layer;\n\n2. Regarding the Fourier transform, it is a built-in function in PyTorch. We indeed used backpropagation with the autograd package in PyTorch for the network training. The code for both training and testing will be released.\n\n3. We have added the results of the ablation study on the kernel size and the cascade structure.\n\nkernel size   SSIM    PSNR    GFLOPS\nk=5           0.633   28.19   196\nk=9           0.634   28.21   283\nk=11          0.635   28.26   345\nk=13          0.637   28.29   419\n\ncascade   SSIM    PSNR    GFLOPS\n3         0.624   28.02   117\n4         0.628   28.13   156\n5         0.633   28.19   196\n6         0.635   28.22   235\n"}], "comment_replyto": ["AVR2VWl4O6h", "YldsSXgDMLo", "KgoCfJFRjgS", "M_j6ixHnuw", "0IeI8QS8N6", "sHGPY8r449I", "Izk7FSOeO", "zhJVQxBp59Q"], "comment_url": ["https://openreview.net/forum?id=NsBz-JrKr&noteId=iB7pZesVfDJ", "https://openreview.net/forum?id=NsBz-JrKr&noteId=AVR2VWl4O6h", "https://openreview.net/forum?id=NsBz-JrKr&noteId=YldsSXgDMLo", "https://openreview.net/forum?id=NsBz-JrKr&noteId=ufC2ZL2s4AG", "https://openreview.net/forum?id=NsBz-JrKr&noteId=DLBAU_nkBh4", "https://openreview.net/forum?id=NsBz-JrKr&noteId=ebgJ4I-CqRR", "https://openreview.net/forum?id=NsBz-JrKr&noteId=AW0oCZ7ps2g", "https://openreview.net/forum?id=NsBz-JrKr&noteId=KgoCfJFRjgS"], "meta_review_cdate": 1586203083878, "meta_review_tcdate": 1586203083878, "meta_review_tmdate": 1586203083878, "meta_review_ddate": null, "meta_review_title": "MetaReview of Paper315 by AreaChair1", "meta_review_metareview": "The authors presented a robust rebuttal addressing the main concerns of the reviewers, providing more details and explanations about the method, together with experiments on a publicly available dataset. Even if I agree with reviewer 3 that GAN-based reconstruction needs to be discussed more in the paper, as it is used a lot by the community for reconstruction problems, I think that the methodology of the paper has merit and it can be interesting for the community. However, I also encourage the authors to incorporate all the answers to the reviewers in their final version.", "meta_review_readers": ["everyone"], "meta_review_writers": ["MIDL.io/2020/Conference/Program_Chairs", "MIDL.io/2020/Conference/Paper315/Area_Chairs"], "meta_review_reply_count": {"replyCount": 0}, "meta_review_url": ["https://openreview.net/forum?id=NsBz-JrKr&noteId=hvEWbDRLcWk"], "decision": "accept"}
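Companion note: the data consistency (dc) layer that the rebuttal describes in prose (Fourier transform, multiplication by the inverse mask (1 - mask), addition of the undersampled k-space data, then inverse Fourier transform) can be sketched as below. This is a minimal illustration of that description, assuming PyTorch's torch.fft API and complex-valued tensors; the function name and signature are hypothetical, not the authors' released code.

import torch

def data_consistency(recon_image, undersampled_kspace, mask):
    # Transform the current reconstruction back to k-space.
    k_pred = torch.fft.fft2(recon_image)
    # Keep the network's prediction only at unsampled positions (1 - mask),
    # then restore the measured data at the sampled positions.
    k_dc = k_pred * (1 - mask) + undersampled_kspace
    # Return to the image domain for the next cascade.
    return torch.fft.ifft2(k_dc)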
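Similarly, the "complex convolution" compared against "normal convolution" in the ablation is commonly realised with two real-valued convolutions, following (a + ib)(W_r + iW_i) = (aW_r - bW_i) + i(aW_i + bW_r). The sketch below assumes the (Batch, Channel, Height, Width, 2) real/imaginary layout used in the ComplexShuffle pseudocode earlier in the thread; it is an illustration of the general technique, not the authors' implementation.

import torch
import torch.nn as nn

class ComplexConv2d(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size, padding=0):
        super().__init__()
        # One real convolution for the real part of the kernel,
        # one for the imaginary part.
        self.conv_r = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)
        self.conv_i = nn.Conv2d(in_ch, out_ch, kernel_size, padding=padding)

    def forward(self, x):  # x: (B, C, H, W, 2)
        a, b = x[..., 0], x[..., 1]
        real = self.conv_r(a) - self.conv_i(b)
        imag = self.conv_i(a) + self.conv_r(b)
        return torch.stack([real, imag], dim=-1)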