Some questions regarding the paper

#4
by maximek3 - opened

Thanks so much for this work and for releasing the dataset publicly!

I have a few questions that I wasn't able to find out from the paper:

  • What exactly happens in the denoising and resizing steps during pre-processing of the data?
  • What are the final dimensions H', W', D' that you use for your model and to obtain the results, e.g., in Table 2?
  • In Table 1, the median value is outside the range of values. How is this possible?
  • I assume the results you show in the paper are based on the test set? Could you provide a list of the exact image IDs belonging to each of the train/dev/test splits?

Thank you!

Thank you for your interest in our work and for your detailed questions. We're happy to clarify the points you've raised.

  1. Denoising and Preprocessing:
    For image denoising, we use the standard SimpleITK library (https://github.com/SimpleITK/SimpleITK). Specifically, we apply the CurvatureFlow method for denoising, as shown in the following code snippet:

    import SimpleITK as sitk

    def denoise_image(image):
        # Edge-preserving curvature-flow denoising; expects a real-valued (float) image.
        return sitk.CurvatureFlow(image1=image, timeStep=0.125, numberOfIterations=5)
    

    The .nii.gz files we provide are already pre-processed, including the denoising step, and can be used directly. (A minimal usage sketch follows this list.)

  2. Final Image Dimensions:
    The final dimensions (H', W', D') used for our model and for the results in Table 2 are 256, 256, and 128, respectively. (A resampling sketch to these dimensions follows this list.)

  3. Median Value in Table 1:
    Apologies for the confusion: we inadvertently swapped the minimum and median values in Table 1. We will correct this in the updated version of the paper on arXiv.

  4. Train/Dev/Test Split:
    We used a 7:1:2 train/validation/test split. You can determine the split from the file naming in the provided zip archive: ordered by name, the first 70% of files form the training set, the next 10% the validation set, and the final 20% the test set (a reconstruction sketch follows this list).
    We are also preparing a CSV file that will list the exact filenames for the train, validation, and test sets. Please stay tuned for this update.
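
A minimal usage sketch for point 1, in case you want to denoise raw volumes yourself (the filenames here are hypothetical; the released .nii.gz files are already denoised):

    import SimpleITK as sitk

    # Hypothetical filenames for illustration only.
    image = sitk.ReadImage("raw_volume.nii.gz", sitk.sitkFloat32)  # CurvatureFlow expects a float image
    denoised = denoise_image(image)
    sitk.WriteImage(denoised, "denoised_volume.nii.gz")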
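
For point 2, the sketch below shows one common SimpleITK way to resample a volume to 256 x 256 x 128 while preserving its physical extent. Take it as an illustration of the idea rather than our exact resizing implementation:

    import SimpleITK as sitk

    def resize_image(image, new_size=(256, 256, 128)):
        # Adjust voxel spacing so the resampled volume spans the same physical extent.
        old_size = image.GetSize()
        old_spacing = image.GetSpacing()
        new_spacing = [old_spacing[i] * old_size[i] / new_size[i] for i in range(3)]
        return sitk.Resample(
            image,
            list(new_size),
            sitk.Transform(),      # identity transform
            sitk.sitkLinear,       # linear interpolation
            image.GetOrigin(),
            new_spacing,
            image.GetDirection(),
            0,                     # default value for voxels outside the input
            image.GetPixelID(),
        )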
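
And for point 4, until the CSV is published, a deterministic 7:1:2 split by sorted filename could be reconstructed as follows (the directory path is hypothetical, and the official CSV will be authoritative):

    import glob

    # Hypothetical data directory; point this at the unzipped dataset.
    files = sorted(glob.glob("dataset/*.nii.gz"))
    n_train = int(0.7 * len(files))
    n_val = int(0.1 * len(files))
    train_files = files[:n_train]
    val_files = files[n_train:n_train + n_val]
    test_files = files[n_train + n_val:]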

Once again, thank you for your support, and we appreciate your understanding. If you have any further questions or need additional clarification, feel free to reach out.

Best regards,
Yinda Chen

Thanks for the clarifications!

Another question: is the batch size (96) reported in Table 1 of the supplementary material per device or across all 8 GPUs (i.e., 12 per GPU)?

We use A40 GPUs, each with 48 GB of memory. The reported batch size (96) is the total across all 8 GPUs, i.e., 12 per GPU. Additionally, we enabled mixed-precision training in our setup.
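
For illustration, a global batch size of 96 across 8 GPUs means 96 / 8 = 12 samples per device per step. Below is a sketch of a mixed-precision training step, assuming PyTorch with torch.cuda.amp; it illustrates the setup rather than reproducing our exact training code:

    import torch

    scaler = torch.cuda.amp.GradScaler()  # scales the loss to avoid fp16 gradient underflow

    def train_step(model, inputs, targets, optimizer, criterion):
        optimizer.zero_grad()
        with torch.cuda.amp.autocast():   # run the forward pass in mixed precision
            loss = criterion(model(inputs), targets)
        scaler.scale(loss).backward()     # backpropagate the scaled loss
        scaler.step(optimizer)            # unscale gradients, then step
        scaler.update()
        return loss.item()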
