Question about validation dataset
Hi there,
We noticed that the internal validation dataset mentioned in the figure has 1564 scans, while the proposed validation set on HF and in the paper has 1304 cases / 3039 scans. Could you tell us which subset you used to validate your model?
Thanks
Hi @DwanZhang,
The validation split is the one provided in the repository. There are 1304 patients, each of whom has at least one scan (but possibly multiple); there are 1564 scans in total. Each scan has different DCM series for different reconstructions, due to the different kernels used in the preprocessing step, so the number of reconstructions is 3039. I hope this makes it clearer!
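For anyone double-checking these counts, here is a minimal sketch of how the patient → scan → reconstruction hierarchy adds up. The `valid_<patient>_<scan>_<reconstruction>.nii.gz` naming below is an illustrative assumption, not a confirmed layout of the repository:

```python
filenames = [
    # Hypothetical names, assuming a valid_<patient>_<scan>_<recon>.nii.gz pattern.
    "valid_1_a_1.nii.gz",  # patient 1, scan a, reconstruction 1
    "valid_1_a_2.nii.gz",  # same scan, a second kernel/reconstruction
    "valid_2_a_1.nii.gz",  # patient 2, scan a, reconstruction 1
]

patients, scans = set(), set()
for name in filenames:
    _, patient, scan, _ = name.removesuffix(".nii.gz").split("_")
    patients.add(patient)
    scans.add((patient, scan))

# 2 patients, 2 scans, 3 reconstructions -- the same counting that yields
# 1304 patients, 1564 scans, and 3039 reconstructions for the full split.
print(len(patients), len(scans), len(filenames))
```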
Hello @DwanZhang,
I would like to ask how you downloaded such a huge dataset.
Thanks.
A large amount of disk space is required.
Thank you for your reply. Could you tell me approximately how much disk space is needed? I would also like to know what method you used to download it, because the download always tends to break for me.
I am not sure about the whole dataset; I download and preprocess the dataset at the same time, and the preprocessed dataset is about 12 TB. I have uploaded it to HF. The breakdown is probably caused by an unstable network connection; I use servers in Hong Kong and the United States.
Hi @DwanZhang and @QuiNiang,
I recommend downloading the data files one by one instead of directly cloning the entire repository, as cloning sometimes results in connection errors. You can use the scripts I’ve provided here: https://github.com/sezginerr/example_download_script.
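If you prefer a self-contained starting point, here is a minimal sketch of the one-file-at-a-time approach using `huggingface_hub`; the repo id and filename are placeholders to substitute with the actual dataset's values (or adapt the linked example_download_script):

```python
from huggingface_hub import hf_hub_download

# Placeholder repo id and file list -- substitute the actual dataset repo
# and the files you need.
repo_id = "some-org/some-ct-dataset"
files = ["valid_1_a_1.nii.gz"]

for f in files:
    # Downloads one file at a time into the local HF cache; re-running the
    # loop skips files that already finished, so a dropped connection only
    # costs you the file in flight rather than the whole clone.
    path = hf_hub_download(repo_id=repo_id, filename=f, repo_type="dataset")
    print("saved to", path)
```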
Additionally, the nii.gz images are currently in float64 format, which is unnecessary. We've been discussing this, and I will convert them to int16 shortly, which will significantly reduce the dataset size. I'll update the dataset soon; I wanted to let you know about this.
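For reference, a minimal sketch of what such a float64-to-int16 conversion looks like with nibabel; the filename is hypothetical, and it assumes the voxel values are integer-valued HU, as is typical for CT:

```python
import nibabel as nib
import numpy as np

img = nib.load("valid_1_a_1.nii.gz")   # hypothetical filename
data = np.asanyarray(img.dataobj)      # stored as float64

# Cast to int16: 8 bytes/voxel -> 2 bytes/voxel, roughly a 4x reduction
# before gzip, with no loss if the values are already whole HU numbers.
out = nib.Nifti1Image(data.astype(np.int16), img.affine, img.header)
out.header.set_data_dtype(np.int16)
nib.save(out, "valid_1_a_1_int16.nii.gz")
```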
Hope this helps!
Best,
Sezgin
Hello @DwanZhang and @sezginer,
Thank you both very much for your replies, and have a nice day.
Thanks.