add RAM usage with CacheDataset and GPU consumption warning
Changed files:
- README.md (+19, -1)
- configs/metadata.json (+2, -1)
- docs/README.md (+19, -1)
README.md
CHANGED
@@ -39,13 +39,25 @@ The segmentation of 104 tissues is formulated as voxel-wise multi-label segmentation
 
 The training was performed with the following:
 
-- GPU:
+- GPU: 48 GB of GPU memory
 - Actual Model Input: 96 x 96 x 96
 - AMP: True
 - Optimizer: AdamW
 - Learning Rate: 1e-4
 - Loss: DiceCELoss
 
+### Memory Consumption
+
+- Dataset Manager: CacheDataset
+- Data Size: 1,000 3D volumes
+- Cache Rate: 0.4
+- Single GPU - System RAM Usage: 83 GB
+- Multi GPU (8 GPUs) - System RAM Usage: 666 GB
+
+### Memory Consumption Warning
+
+If you face memory issues with CacheDataset, you can either switch to a regular Dataset class or lower the cache rate `cache_rate` in the configurations (within the range $(0, 1)$) to reduce the system RAM requirements.
+
 ### Input
 
 One channel
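The Memory Consumption figures and warning added above map onto the `cache_rate` argument of MONAI's `CacheDataset`. Below is a minimal sketch of the trade-off, assuming a plain list of image/label dictionaries and generic preprocessing; the file paths, spacing, and worker counts are illustrative placeholders, not the bundle's actual configuration.

```python
# Sketch of the CacheDataset vs. Dataset trade-off from the Memory Consumption
# Warning above. Paths, spacing, and worker counts are illustrative placeholders.
from monai.data import CacheDataset, DataLoader, Dataset
from monai.transforms import Compose, EnsureChannelFirstd, LoadImaged, Spacingd

data_dicts = [
    {"image": "ct_0001.nii.gz", "label": "seg_0001.nii.gz"},  # placeholder paths
    # ... one dict per training volume (1,000 volumes in the numbers above)
]

preprocess = Compose([
    LoadImaged(keys=["image", "label"]),
    EnsureChannelFirstd(keys=["image", "label"]),
    Spacingd(keys=["image", "label"], pixdim=(1.5, 1.5, 1.5), mode=("bilinear", "nearest")),
])

# cache_rate=0.4 keeps 40% of the preprocessed volumes in system RAM
# (roughly the 83 GB / 666 GB figures above); lower it toward 0.0, or use
# the plain Dataset below, if you run out of RAM.
train_ds = CacheDataset(data=data_dicts, transform=preprocess, cache_rate=0.4, num_workers=4)
# train_ds = Dataset(data=data_dicts, transform=preprocess)  # no caching, minimal RAM

train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=2)
```

Setting `cache_rate` to 0.0, or swapping in the plain `Dataset`, removes the RAM cost entirely at the price of re-running the deterministic transforms every epoch.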
@@ -59,6 +71,12 @@ One channel
 
 ## Resource Requirements and Latency Benchmarks
 
+### GPU Consumption Warning
+
+The model is trained on all 104 classes in a single instance, so predicting the 104 structures can consume a large amount of GPU memory.
+
+For the inference pipeline, please refer to the following section for benchmarking results. Typically, a CT scan with 300 slices takes about 27 GB of GPU memory; if your CT is larger, please prepare more GPU memory or use the CPU for inference.
+
 ### High-Resolution and Low-Resolution Models
 
 We retrained two versions of the totalSegmentator models, following the original paper and implementation.
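As an illustration of the GPU Consumption Warning added above, one way to cap GPU memory is to run the 96 x 96 x 96 sliding windows on the GPU while accumulating the stitched full-volume output on the CPU. This is a hedged sketch only: the network, channel count, and input volume are placeholders, not the bundle's inference configuration.

```python
# Sketch of capping GPU memory for 104-class inference: run each 96 x 96 x 96
# window on the GPU but accumulate the stitched output on the CPU.
import torch
from monai.inferers import SlidingWindowInferer
from monai.networks.nets import UNet

model = UNet(
    spatial_dims=3, in_channels=1, out_channels=105,  # 104 structures + background (assumed)
    channels=(32, 64, 128, 256), strides=(2, 2, 2),
).to("cuda").eval()

inferer = SlidingWindowInferer(
    roi_size=(96, 96, 96),  # matches the model input size listed above
    sw_batch_size=1,        # fewer windows per forward pass -> less GPU memory
    overlap=0.25,
    sw_device="cuda",       # run each window on the GPU
    device="cpu",           # stitch the full multi-channel output in system RAM
)

with torch.no_grad():
    ct = torch.rand(1, 1, 256, 256, 300)  # placeholder CT, ~300 axial slices
    logits = inferer(ct, model)           # full-volume output lands on the CPU
```

Keeping `device="cpu"` trades speed for a much smaller GPU footprint, since only the network and one batch of windows live on the GPU at a time.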
configs/metadata.json
CHANGED
@@ -1,7 +1,8 @@
 {
     "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20220324.json",
-    "version": "0.1.5",
+    "version": "0.1.6",
     "changelog": {
+        "0.1.6": "add RAM usage with CacheDataset and GPU consumption warning",
         "0.1.5": "fix mgpu finalize issue",
         "0.1.4": "Update README Formatting",
         "0.1.3": "add non-deterministic note",
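For a quick programmatic check that a local copy of the bundle includes this change, the version and its changelog entry can be read straight from the file. A minimal standard-library sketch, assuming it is run from the bundle root:

```python
# Read the bundle version and its changelog entry from configs/metadata.json.
import json

with open("configs/metadata.json") as f:
    meta = json.load(f)

print(meta["version"])                     # "0.1.6" after this change
print(meta["changelog"][meta["version"]])  # matching changelog entry
```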
docs/README.md
CHANGED
@@ -32,13 +32,25 @@ The segmentation of 104 tissues is formulated as voxel-wise multi-label segmentation
@@ -52,6 +64,12 @@ One channel

docs/README.md mirrors the top-level README.md, so these two hunks carry exactly the same changes as the README.md hunks above (the 48 GB GPU note, the new Memory Consumption section and warning, and the new GPU Consumption Warning), applied at the corresponding line offsets.
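The training settings listed in the two README files above (96 x 96 x 96 input, AMP, AdamW at 1e-4, DiceCELoss) translate roughly into the following PyTorch/MONAI training step. This is a minimal sketch only: the network, channel counts, and batch handling are assumptions, not the bundle's actual training configuration.

```python
# Sketch of a training step wired up from the settings listed in the README
# hunks above (96 x 96 x 96 patches, AMP, AdamW at 1e-4, DiceCELoss).
import torch
from monai.losses import DiceCELoss
from monai.networks.nets import UNet

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = UNet(
    spatial_dims=3, in_channels=1, out_channels=105,  # 104 structures + background (assumed)
    channels=(32, 64, 128, 256), strides=(2, 2, 2),   # placeholder architecture
).to(device)

loss_fn = DiceCELoss(to_onehot_y=True, softmax=True)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler(enabled=device.type == "cuda")  # AMP: True

def train_step(image: torch.Tensor, label: torch.Tensor) -> float:
    # image: (B, 1, 96, 96, 96) CT patch, label: (B, 1, 96, 96, 96) integer mask
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=device.type == "cuda"):
        loss = loss_fn(model(image.to(device)), label.to(device))
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
    return loss.item()
```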