st179768 | Hello @Milas, I’m running into the same kind of issues. Could you elaborate as to why the order of the data loaded has to be deterministic ? |
st179769 | torch.utils.data.distributed.DistributedSampler uses the epoch number as a seed to deterministically randomize the data to create coherent bigger batches (i.e. a different subsample of the dataset is seen each time through training and the samples do not overlap between the different training instances.)
Also do you scale the learning rate according to the number of instances you spawn ? |
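For reference, a minimal sketch of how the epoch number is usually fed to DistributedSampler so each epoch gets a different but deterministic shuffle (train_dataset, model, and num_epochs are placeholders; the process group is assumed to be initialized already):
import torch
from torch.utils.data import DataLoader
from torch.utils.data.distributed import DistributedSampler

sampler = DistributedSampler(train_dataset)   # partitions the dataset per rank
loader = DataLoader(train_dataset, batch_size=32, sampler=sampler)

for epoch in range(num_epochs):
    # The epoch seeds the shuffle: every rank sees a non-overlapping subset
    # that still changes from epoch to epoch.
    sampler.set_epoch(epoch)
    for batch in loader:
        ...  # forward / backward / step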
st179770 | Thank you very much for your help. Turns out for me the problem was in my custom loss function. |
st179771 | Hi, I’m facing a problem. When using DistributedDataParallel with NCCL, my training runs into a deadlock. Following the PyTorch docs, I tried setting the start method to spawn and forkserver, but then an “address already in use” error occurs. |
st179772 | Faced a similar error - solved it by initializing the process group first, and then setting the model cuda device (as opposed to the other way around, which led to the same kind of deadlock you describe) |
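A sketch of the ordering described here (the backend, address, and model are placeholders; the only point is that init_process_group runs before the model is moved to its CUDA device):
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

def init_worker(rank, world_size, model):
    # 1) Join the process group first ...
    dist.init_process_group("nccl", init_method="tcp://127.0.0.1:23456",
                            rank=rank, world_size=world_size)
    # 2) ... then pin this process to its GPU and move the model.
    torch.cuda.set_device(rank)
    model = model.cuda(rank)
    return DDP(model, device_ids=[rank])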
st179773 | In the documentation of torch.distributed it is not made clear that all tensors in gather() need to be of the same size. After reading the code I noticed that in MPI c++ code the function used is MPI_Gather which imposes that restriction. I was thinking that maybe it would make more sense to use MPI_Gatherv instead so that tensors of variable size can be accepted. I would try to implement it myself but my C++ skills are not that good. If anyone is interested and wants to create a pull request, it would be great!. This is the file I am referring to: https://github.com/pytorch/pytorch/blob/eb76b7a564121c5fede749ad7d0a36f2b61a0a95/torch/lib/c10d/ProcessGroupMPI.cpp#L430 4
At least if someone can provide a guide on how to make the change for MPI_Gatherv I can make the change for MPI_Scatterv myself.
For MPI_Gatherv we need to provide a list of integers instead of a single integer. This should be easy by calling numel() on each tensor in the gather_list. We also need to provide displacements(see here: https://www.mpich.org/static/docs/v3.1/www3/MPI_Gatherv.html 4). We could set this to always be the previous displacement plus the last tensor’s numel() and by setting the first displacement to 0. This is if we want to modify the built-in gather. This way it accepts tensors of any size so it covers the case of equal sized tensors. Alternatively, we could create a new torch.distributed.gatherv()
Any help would be appreciated |
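Until a gatherv-style collective exists, a common workaround (a sketch, not part of the linked proposal; 1-D tensors for simplicity) is to pad every tensor to the largest size before the collective and trim the padding afterwards:
import torch
import torch.distributed as dist

def all_gather_variable(tensor):
    """Gather 1-D tensors of different lengths from every rank (sketch)."""
    world_size = dist.get_world_size()
    # Share each rank's length so everyone can pad and trim consistently.
    local_len = torch.tensor([tensor.numel()], device=tensor.device)
    lens = [torch.zeros_like(local_len) for _ in range(world_size)]
    dist.all_gather(lens, local_len)
    max_len = int(max(l.item() for l in lens))
    # Pad to the common maximum, gather, then strip the padding per rank.
    padded = torch.zeros(max_len, dtype=tensor.dtype, device=tensor.device)
    padded[:tensor.numel()] = tensor
    out = [torch.zeros_like(padded) for _ in range(world_size)]
    dist.all_gather(out, padded)
    return [out[i][:int(lens[i].item())] for i in range(world_size)]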
st179774 | This is a great idea. I agree having both gather and allgather for variable size is very useful. Also, seeing as the displacement is an implementation detail, the underlying C++ implementation can take care of sharing each process’ contribution, allocating the required memory, and returning the list of tensors.
I created https://github.com/pytorch/pytorch/issues/23299 8 to track the feature. |
st179775 | Hello, I followed the online DataParallel tutorial and I can’t get the model to split compute evenly among different GPUs at score-time (forward pass of trained model). On 3 GPUs, I get something like this:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 410.79       Driver Version: 410.79       CUDA Version: 10.0     |
|-------------------------------+----------------------+----------------------+
| GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
|===============================+======================+======================|
|   0  Tesla V100-SXM2...  On   | 00001961:00:00.0 Off |                    0 |
| N/A   53C    P0   224W / 300W |  15248MiB / 16130MiB |     93%      Default |
+-------------------------------+----------------------+----------------------+
|   1  Tesla V100-SXM2...  On   | 00003673:00:00.0 Off |                    0 |
| N/A   49C    P0    86W / 300W |   7004MiB / 16130MiB |      6%      Default |
+-------------------------------+----------------------+----------------------+
|   2  Tesla V100-SXM2...  On   | 00005A1F:00:00.0 Off |                    0 |
| N/A   54C    P0    76W / 300W |   6996MiB / 16130MiB |     85%      Default |
+-------------------------------+----------------------+----------------------+
So usually GPU 0 and 2 are loaded and 1 is underutilized. Also I get a very large lag in-between batches, almost 1-2 seconds of idle time when all three GPUs are at 0%, then they do some compute, then go to 0% again.
My guess is that syncing on GPU 0 is the culprit - is there a way I can run distributed operation on multiple GPUs for scoring in pyTorch to obtain even memory usage and compute across multiple GPUs? Notice how this is different from training as I’m not computing the loss and aggregating gradients.
The code is here: https://github.com/waldeland/CNN-for-ASI/blob/master/test_parallel.py 7 and I already tried calling .to(device) before DataParallel and specifying “device_ids” - nothing seems to work. Another option would be to use DistributedDataParallel I suppose, but I want to understand why this isn’t working first. |
st179776 | What is the batch size of the input you pass to the nn.DataParallel wrapped model?
The input is split as evenly as possible over the devices you want to use, but if the split is uneven (say you’re splitting over 3 GPUs), a subset of the GPUs can end up with suboptimal performance. For example, a batch size of 11 is going to behave much worse than a batch size of 8. If you don’t have enough data for an even split, where every GPU gets a power-of-two sized batch, you can always fill it back up with garbage tensors, since you’re only doing inference. |
st179777 | It’s in the code: 2^12=4096. The model we’re using has a fairly small memory footprint and we want to use large batches to maximize GPU memory utilization for bulk scoring.
I get this behavior on 2-8 GPUs, not just 3, so the odd number of GPUs shouldn’t be a factor. Do you think I should make batch size a multiple of the number of GPUs? |
st179778 | Have you tried running a profiler (like nvprof) to see if there is anything preventing the GPUs from making forward progress? This would show you if there is any imbalance between the work the GPUs perform. |
st179779 | The problem is that although one can distribute forward-pass and not have it collect on one GPU, there is no way to distribute data across GPUs evenly in DataParallel: the batch goes on GPU0 (or one GPU of your choice), and then that batch get split into further minibatches on other GPUs; as a result GPU0 becomes the memory bottleneck - this article explains it well https://medium.com/huggingface/training-larger-batches-practical-tips-on-1-gpu-multi-gpu-distributed-setups-ec88c3e51255 29
This behavior of DataParallel isn’t an issue for large models because size(model)>size(batch), but in our case size(model)<<size(batch). |
st179780 | I see – perhaps the better approach then is to create your own version of nn.DataParallel that scatters straight from CPU to the right destination device. Then you don’t pay the cost of first going to GPU 0 and then scatter from there to the other GPUs.
edit: It looks like nn.DataParallel already supports this if you just keep your input on CPU. |
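If that is the case, a sketch of the pattern (the model, device ids, and batch shape are placeholders; this assumes DataParallel accepts a CPU input and scatters each chunk directly to its destination GPU, as described above):
import torch
import torch.nn as nn

model = model.cuda(0)                         # parameters live on the first device
model = nn.DataParallel(model, device_ids=[0, 1, 2])
model.eval()

with torch.no_grad():
    # Keep the batch on the CPU; each chunk should then be copied straight to
    # its destination GPU instead of being staged on GPU 0 first.
    cpu_batch = torch.randn(4096, 3, 224, 224)
    output = model(cpu_batch)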
st179781 | If I send multiple tensors to another process, does the receiving process receive tensors by the order irecv is called? |
st179782 | I noticed that for distributed data parallel, you only need to specify the ip address and port of rank0 node, and then during initialization all nodes discover each other through rank0 node. But due to certain firewall restrictions, I want to manually specify the ip address and port of each node via which they should communicate for all reduce operations. Is there a way to do that? I am open to make changes in the pytorch source code. |
st179783 | There is no way to do this today. It also depends on which distributed backend you’re using whether this would be possible in the first place. If you’re using Gloo, it might be possible, but it’s quite a bit of work. If you’re using NCCL, it’s up to NVIDIA. If you’re using MPI, I don’t know. |
st179784 | link to doc
According to the doc page, this is how you initialize a shared file-system.
dist.init_process_group(backend, init_method='file:///mnt/nfs/sharedfile',
world_size=4, rank=args.rank)
And in the warning box above the code in the doc page, it says this code creates a file. Does this mean that I can’t use it as a directory? |
st179785 | No, the path has to be a file. If all your processes gracefully terminate the file will be removed. If one of your processes crashes, it may not be deleted and you’ll have to delete it yourself. |
st179786 | I’m training a pretty standard WGAN-GP on MNIST, and I’m trying out much larger batch sizes since that seems to be the standard wisdom now. When I parallelize across multiple GPUs I get enormous losses compared to using just one GPU. If I initialize my networks with nn.DataParallel(model, [0]) then I get pretty normal functionality:
ITER 0: D cost 1.200, G cost 2.729
ITER 200: D cost -3.931, G cost 2.298
But if I use nn.DataParallel(model, [0, 1, 2]) to run across more GPUs, I get absurd numbers:
ITER 200: D cost 112856899584.0, G cost 456269.437
I’ve used dataparallel successfully before with classifiers and the tutorial DCGAN, so I have no idea what the issue is here. I’m not parallelizing the loss or doing anything different than initializing the networks in this way. Are there some caveats here (like with FP16) that I’m not aware of with DataParallel?
I can post a full code example, but its a hundred lines or so and I’d rather not start off the post like that. |
st179787 | The DataParallel module is pretty straightforward: it splits the input up into N chunks (typically across the first dimension), runs the same forward pass on N replicas of your model, and gathers the output back into a single tensor (across the same dimension as the input was split). Gradients are always accumulated in the source model (not the replicas).
It looks like updates to buffers in these replicas don’t propagate back to the source model, as the model replicas are tossed after every forward pass. Perhaps this is a starting point for your investigation? |
st179788 | @pietern even if the gradients didn’t accumulate properly, it should at worst look as if I only have one GPU. I’ve verified that everything works well with up to 4 GPUs with pytorch 0.4.1. But I can’t seem to nail down the actual problem in 1.0
I’m not even sure how to diagnose this, but I’ve been able to replicate this behavior with dataparallel on 2 other popular WGAN repos from github. |
st179789 | I had the same problem with WGAN-GP. For some reason, the gradient penalty increases quickly until it reaches inf.
Also, this happens for other GANs too, e.g. Self-Supervised GAN. The behavior of the loss function is also completely different when training with multiple GPUs. |
st179790 | @Hung_Nguyen Right.
Remove the gradient penalty and the loss should still endlessly increase.
The official(?) Wasserstein GAN code doesn’t suffer from this weird behavior with parallelization, so that could be a starting point. It uses nn.parallel.data_parallel - the functional version of nn.DataParallel, but I don’t know if there’s an interesting difference there.
Converting any WGAN-GP repo into a regular WGAN mitigates the behavior somewhat, but weight clipping has its drawbacks. |
st179791 | Currently also experiencing this issue. The gradient penalty eventually goes to the millions before blowing up completely. @neale, I have tried to reproduce this issue with various popular WGAN-GP repos as well, and they also suffer from this.
Pytorch 1.0.1 on CUDA 10.1. |
st179792 | @neale With 0.4 working and 1.0 not working this is clearly a regression. But we haven’t significantly modified (if at all) the data parallel wrapper between these versions. Can you check if the regression happened in 1.0.0 or 1.0.1? I have created https://github.com/pytorch/pytorch/issues/19024 60 to track this. |
st179793 | @neale Could you also try reproducing with the nightly build? There have been some changes recently related to CUDA stream synchronization that may have fixed this, per @mrshenli. |
st179794 | Running into this problem as well, using a WGAN-GP and it works perfectly on 1 GPU but the loss explodes when running on multiple GPUs.
Using CUDA 9.2, PyTorch 1.0.1. Working on installing the nightly build to see if there is any difference. |
st179795 | @pietern Sorry this took quite some time.
I can confirm that the issue persists into version 1.1 |
st179796 | It seems like the issue is #16433 44.
A workaround would be to calculate the gradient-penalty directly (without calling a function to do so) and calling backward in the same scope.
For example the following code will explode on CUDA with multi-gpu:
gp = calc_grad_penalty(network, real_target, fake_target)
gp.backward(retain_graph=True)
While the following does not:
### gp += torch.autograd.grad()
### etc. etc. code to calculate GP
gp.backward(retain_graph=True) |
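For reference, a sketch of the inline version described above for a typical WGAN-GP setup (netD, real, fake, and the penalty weight 10 are placeholders, not taken from this thread):
import torch

# Inside the training loop, in the same scope as the backward call:
alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
d_interpolates = netD(interpolates)

grads = torch.autograd.grad(
    outputs=d_interpolates,
    inputs=interpolates,
    grad_outputs=torch.ones_like(d_interpolates),
    create_graph=True,
    retain_graph=True,
)[0]
gp = ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean() * 10
gp.backward(retain_graph=True)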
st179797 | Thanks, @aluo-x.
It is also the same as #16532 9 and there has been an attempt at a fix. The problem lies somewhere deep in the guts of autograd. This has surfaced a couple of times and there should be a fix soon (and it should be included in the next stable release). |
st179798 | @pietern you mentioned that gradient accumulation only occurs in the source model, but according to https://github.com/pytorch/pytorch/blob/master/torch/nn/parallel/data_parallel.py 11 “In the forward
pass, the module is replicated on each device, and each replica handles a
portion of the input. During the backwards pass, gradients from each replica
are summed into the original module.” this implies that gradients are accumulated in the leaf nodes of each of the replicas. |
st179799 | FYI, excellent debugging from @mrshenli and @ezyang deep in the guts of autograd led to https://github.com/pytorch/pytorch/pull/22983 80 and this was merged yesterday. Please give the latest nightly builds a try to see if fixed the issue. |
st179800 | I have a machine with 10 GPUs and my utilization is quite bad (<50% spent training) because of the computation of the metrics I have to track each epoch (not everything can be easily parallelized across 10 GPUs and they are quite involved). I have already reduced the resolution for many metrics. They are very important to me and I can’t reduce them further. For some metrics, I generate matplotlib-plots, which is also quite costly.
I am thinking that switching from a purely sequential (train -> eval -> train -> eval …) to a parallel setup would greatly speed up my model (So for example 8-9 GPUs are constantly occupied with training and 1-2 with evaluating my metrics). Is this possible with pytorch? The only examples I’ve found are about parallelizing the training, but this is already working.
I would have to clone my model and push it to my evaluation-workers. |
st179801 | If you push a copy of the current model to GPU9 and execute the evaluation method, it should run asynchronously while your other GPUs are training.
Note that your data loading might become a bottleneck, if now multiple DataLoaders are trying to read from your drive. |
st179802 | Ok, I’ll test it.
If you push a copy of the current model to GPU9 and execute the evaluation method, it should run asynchronously while your other GPUs are training.
Could this be implemented using torch.multiprocessing or what can you recommend? |
st179803 | CUDA calls should run asynchronously by default.
Let me know, if you encounter an issue. |
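A minimal sketch of that idea (evaluate-on-a-spare-GPU; the metric computation, loader, and device id are placeholders); since CUDA kernels are queued asynchronously, the copy and the evaluation forward passes on the spare device can overlap with training on the others:
import copy
import torch

def launch_eval(model, eval_loader, eval_device="cuda:9"):
    # Snapshot the current weights and push the copy to the evaluation GPU.
    eval_model = copy.deepcopy(model).to(eval_device)
    eval_model.eval()
    with torch.no_grad():
        for batch in eval_loader:
            batch = batch.to(eval_device, non_blocking=True)
            _ = eval_model(batch)   # compute metrics here
    # Matplotlib plotting stays on the CPU and can run in a separate process.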
st179804 | I am building a recommendation system inspired by YouTube’s “Deep Neural Networks for YouTube Recommendations” paper. I will need to execute recommendations in real time so I structured it with low latency predictions in mind. The structure is the following
|User Features|                |Item Features|
       |                              |
|Fully Connected NN_user|    |Fully Connected NN_item|
         \                          /
        |Concatenated output of both NNs|
                       |
             |Fully Connected NN|
                       |
                   |output|
This is all one network built using two sub-networks.
The reason I did it this way is to create rich embeddings for the user and item based on their features which I could then store. At prediction time, I can retrieve the stored embeddings, then only the top NN needs to be executed and is therefore very fast. In testing, the model gives good results.
My question is about decreasing the time it takes to train this model. Is there a way for Pytorch to execute the sub-networks in parallel? Using DataParallel splits that data and trains it in parallel, but I think that the two sub-NN are trained one after the other, even though they don’t need to be. The forward section of the model has the following structure:
def sub_network(features, **params):
    ....

def forward(user_features, item_features):
    user_embedding = sub_network(user_features)
    item_embedding = sub_network(item_features)
    x = torch.cat([user_embedding, item_embedding], 1)
    ...
What is a good strategy for parallelizing the execution of the sub-network functions? |
st179805 | You do have several GPUs to make this worthwile, right?
Given the asynchronous nature of GPU computation, you can just move one network and its inputs to the second GPU. Then the work will be queued serially, but executed in parallel. Just be sure not to introduce sync points.
Or you could look at the multiprocessing best practices 3 for advice.
Best regards
Thomas |
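A sketch of that layout (the layer sizes and device ids are placeholders); each branch is queued on its own device, so the two towers can execute concurrently, and the embeddings are brought back to one device for the top network:
import torch
import torch.nn as nn

class TwoTowerModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.user_net = nn.Linear(128, 64).to("cuda:0")
        self.item_net = nn.Linear(128, 64).to("cuda:1")
        self.top_net = nn.Linear(128, 1).to("cuda:0")

    def forward(self, user_features, item_features):
        # Kernels are queued asynchronously, so the two branches overlap.
        u = self.user_net(user_features.to("cuda:0", non_blocking=True))
        i = self.item_net(item_features.to("cuda:1", non_blocking=True))
        # Bring both embeddings onto one device before concatenating.
        x = torch.cat([u, i.to("cuda:0", non_blocking=True)], dim=1)
        return self.top_net(x)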
st179806 | I’m relatively new to PyTorch, but have good experience with Keras & TensorFlow. I’ve followed this article: DistributedDataParallel 6 to use DDP on my own training script. But for some reason, I always end up getting process 0 terminated with exit status 1.
Here’s how my functions related to DDP look like:
def setup(rank, world_size):
    os.environ['MASTER_ADDR'] = 'localhost'
    os.environ['MASTER_PORT'] = '12355'
    # initialize the process group
    dist.init_process_group("gloo", rank=rank, world_size=world_size)
    # Explicitly setting seed to make sure that models created in two processes
    # start from same random weights and biases.
    torch.manual_seed(42)

def cleanup():
    dist.destroy_process_group()

def run_demo(fn, *args):
    mp.spawn(fn,
             args=(args[0], args[1], args[2], args[3], args[4]),
             nprocs=1,  # Also tried 2, but no difference
             join=True)
And here’s how my train function looks like:
def train(model, X, batch_size=32, epochs=75, gradient_acc=0):
    setup(1, 2)
    device = model.get_default_device()
    model = model.to(device, non_blocking=True)
    ddp_model = DDP(model, device_ids=[0])  # Only one GPU
    ..
    ..
    ..
    ..
    ddp_model.hidden_enc = ddp_model.init_hidden_enc()
    ddp_model.hidden_dec = ddp_model.init_hidden_dec()
    ddp_model.train()
    for ep in range(epochs):
        loss_br = 0; nb_batch_steps = 0
        for step, batch in enumerate(data_loader):
            batch = batch.to(device, non_blocking=True)
            nb_batch_steps += 1
            loss = ddp_model(batch)
            ..
            ..
            ..
    cleanup()
I’m calling the run_demo function in this way:
if __name__ == "__main__":
    run_demo(train, model,
             holder[:], 32,
             75, 3)
I can make out that some process in the system is failing and that’s the reason why spawn.py is raising that error. But I’m not sure how to rectify the issue. If I call my train function directly, without going through run_demo, the code never gets going and the program seems to go into an infinite loop.
I’m on Google Colab, with single GPU.
P.S: My lscpu command results in:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Byte Order: Little Endian
CPU(s): 2
On-line CPU(s) list: 0,1
Thread(s) per core: 2
Core(s) per socket: 1
Socket(s): 1
NUMA node(s): 1
Vendor ID: GenuineIntel
CPU family: 6
Model: 63
Model name: Intel(R) Xeon(R) CPU @ 2.30GHz
Stepping: 0
CPU MHz: 2300.000
BogoMIPS: 4600.00
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 32K
L1i cache: 32K
L2 cache: 256K
L3 cache: 46080K
NUMA node0 CPU(s): 0,1
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology nonstop_tsc cpuid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single pti ssbd ibrs ibpb stibp fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat arch_capabilities
Any help is highly appreciated. Thanks ! |
st179807 | Hello!
There are at least 2 things off in this example:
mp.spawn calls the specified function with the local rank as first argument. This means that the arguments you pass are off by 1 (this is likely what causes the first error).
You’re calling setup(1, 2) even if you run with a single process. This will cause a hang followed by a timeout after 30 minutes.
Good luck! |
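Putting both points together, a minimal corrected sketch (the earlier setup/cleanup helpers, model, and holder are assumed from the original post; this is just one way to wire it up):
import torch.multiprocessing as mp

def train(rank, model, X, batch_size, epochs, gradient_acc):
    world_size = 1           # one process on this single-GPU machine
    setup(rank, world_size)  # rank comes from mp.spawn, not hard-coded
    ...                      # wrap in DDP, run the training loop
    cleanup()

if __name__ == "__main__":
    # mp.spawn prepends the local rank, so pass only the remaining arguments.
    mp.spawn(train,
             args=(model, holder[:], 32, 75, 3),
             nprocs=1,
             join=True)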
st179808 | I’m rather confused after reading both official tutorials on multi-GPU and data parallelism. Can I wrap the whole model in nn.DataParallel, instead of one layer at a time?
e.g. is the following code block legitimate?
import torch
import torch.nn as nn

class DataParallelModel(nn.Module):
    def __init__(self):
        super().__init__()
        self.block1 = nn.Linear(10, 20)
        self.block2 = nn.Linear(20, 20)
        self.block3 = nn.Linear(20, 20)

    def forward(self, x):
        x = self.block1(x)
        x = self.block2(x)
        x = self.block3(x)
        return x

model = DataParallelModel()
model = nn.DataParallel(model)
Thanks! |
st179809 | Hi @yuqli! Yep, running the code as you have it works perfectly. If you print out your model, you can see that the whole thing is wrapped in a DataParallel wrapper.
>>> model
DataParallel(
  (module): DataParallelModel(
    (block1): Linear(in_features=10, out_features=20, bias=True)
    (block2): Linear(in_features=20, out_features=20, bias=True)
    (block3): Linear(in_features=20, out_features=20, bias=True)
  )
)
There’s some more info on DataParallel and how it works in this forum post 18, where @rasbt gives a good diagram.
Hope that answers your question!
(If you’re curious as to how the inner workings of DataParallel function, @fantasticfears has a great post on their blog 3). |
st179810 | Hello everyone, I created this pip package that includes differentiable versions of scatter/gather/send/recv so that pytorch’s autograd can backpropagate through those. I thought I should share. I haven’t thoroughly tested it so apologies if something breaks. Contributions are welcome!
GitHub: ag14774/diffdist 3
There is some example code here: https://github.com/ag14774/diffdist/blob/master/diffdist/testing.py 21 |
st179811 | The following code from the tutorial on PyTorch data parallelism 24 reads strange to me:
device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
model = Model(input_size, output_size)
if torch.cuda.device_count() > 1:
    print("Let's use", torch.cuda.device_count(), "GPUs!")
    # dim = 0 [30, xxx] -> [10, ...], [10, ...], [10, ...] on 3 GPUs
    model = nn.DataParallel(model)
model.to(device)
According to my best knowledge, model.to(device) copies the data to the GPU.
DataParallel splits your data automatically and sends job orders to multiple models on several GPUs. After each model finishes their job, DataParallel collects and merges the results before returning it to you.
If the DataParallel does the job of copying, what does the to(device) do here? |
st179812 | Solved by pietern in post #4
On calling forward it splits the input into multiple chunks (one chunk per GPU), replicates the underlying model to multiple GPUs, runs forward on each of them, and gathers the outputs. |
st179813 | On calling forward it splits the input into multiple chunks (one chunk per GPU), replicates the underlying model to multiple GPUs, runs forward on each of them, and gathers the outputs. |
st179814 | Thank you. I think I need to read more core code of pytorch to fully understand. |
st179815 | I encountered a problem that it seems the data movement operation takes long time in parallel forward, but I can’t find the reason. The main codes are as follow,
def main():
    model = nn.DataParallel(MyModel())
    model = model.cuda()
    before_time = time.time()
    logits = model(input)

class MyModel(nn.Module):
    def forward(self, input):
        start_time = time.time()
        xxx
        end_time = time.time()
        return out
I use 4 V100 GPU in parallel, and num_workers set to 0.
The start_time - before_time takes 0.6s, and the actual forward time end_time-start_time only takes 0.15s. Data batch is (512 * 3 * 224 * 224).
I am curious that the actual forward time is only 0.15 seconds, but what operations takes 0.6 seconds during before_time and start_time? Is it the data movement operation? I don’t think that such few data would cost 0.6 seconds because 0.6s is too long for moving the 77M data. |
st179816 | I suspect that calling model.cuda() on nn.DataParallel is causing trouble. Can you try creating the model first, then calling model.cuda(), and then wrapping it in nn.DataParallel? I think that in this example the entire model is copied to GPU every time you call forward. |
st179817 | Thank you! I will try it and then reply. The original style is inspired from DARTS 1. |
st179818 | I have a DataParallel model that has been sent to the GPUs via .to('cuda'). I also have some processes calling this model in parallel at various points. It seems like because these are forward passes of batch size 1, they are automatically allocated to CUDA:0, which results in disproportionately high GPU utilization on that device.
How do I specify which GPU is used in a forward pass? I don’t want to have to do any sending of parameters / state dicts. Thanks. |
st179819 | I do not think there is any option/parameter to tell DataParallel which GPUs to use for inference/forward. If you are only doing inference, wouldn’t it be easier to maintain a model on each GPU manually (model.to(device)) instead of using DataParallel? |
st179820 | Yes, I’m only doing inference. I thought that if we had model_new=model.to('cuda:1'), then after an update to model, the parameters wouldn’t be synced. Is that not right? Thanks. |
st179821 | If you are only doing inference in .eval() (not .train() mode), there is no need for parameter sync. Isn’t it? |
st179822 | I thought that if you made a model model_new = model.to('cuda:1'), and then updated the parameters of model with model_optimizer.step(), then the parameters of model_new would be out of sync / differ from model? |
st179823 | According to the tutorial 18, DataParallel splits “data” automatically across available GPUs. I’m pretty sure it only works on batches, so you need batches of more than 1 sample, otherwise it might (a) make no sense to split data, (b) be very inefficient due to synchronization…
Do you even have any usage on other GPUs than the first one? If you have batch sizes of 1, nothing would be split across GPUs and only CUDA:0 would be used. |
st179824 | Indeed, batch size 1 with DataParallel goes to first specified device (or defaults to cuda:0).
If you want to do inference with batch size 1 there is no need to use nn.DataParallel. This would be useful only if you have a much larger batch that you want to automatically split and automatically run on multiple GPUs. If you want to manually balance batches of size 1 you’re going to have to copy the model yourself and round robin over it. You’re right that the weights are not automatically updated if the source model is updated, because they are different tensors at that point. You’ll have to re-initialize the per-device models every time after running an optimizer on a single source model. In fact, this is exactly how nn.DataParallel works under the covers. On every call to forward it replicates a single module over N devices, scatters the input, runs forward on every one of them, and gathers the outputs. This repeats for every iteration. |
st179825 | Since I want to adopt multi-scale training for object detection, the input image size is changed at a fixed frequency. When using DataParallel, the image sizes trained on each GPU stay synchronized.
To speed up the training phase, I want to use DistributedDataParallel, but I don’t know how to synchronize the input image size. Any suggestions, please? |
st179826 | Hi!
One way to do it is to resize your batches in a custom collate function that you send to your dataloader. This collate function would make sure that all the images within one batch have the same dimension.
I would try to get this working before you start with the DistributedDataParallel. Actually, depending on your multi-scale schedule (fixed-frequency), the DistributedDataParallel might not offer any difficulties once you got the collate function up and running.
Google + search at these forums on how to implement a custom collate function for the dataloader and give me a poke if you want to talk something over.
Good luck
Edit: Since it’s object detection you also need to transform your bounding boxes. I recommend using imgaug 2 for this, so much easier |
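A sketch of such a collate function (the size list and the box handling are placeholders; bounding boxes must be rescaled by the same factors, e.g. via imgaug as suggested above):
import random
import torch
import torch.nn.functional as F

SIZES = [320, 416, 512]  # candidate square resolutions

def multiscale_collate(batch):
    # batch is a list of (image_tensor [C, H, W], target) pairs.
    # For DDP, derive the choice deterministically (e.g. from the epoch or
    # iteration counter) so every rank picks the same size for a given step.
    size = random.choice(SIZES)
    images, targets = zip(*batch)
    resized = [F.interpolate(img.unsqueeze(0), size=(size, size),
                             mode="bilinear", align_corners=False).squeeze(0)
               for img in images]
    # Rescale the bounding boxes in `targets` by the same factors here.
    return torch.stack(resized), list(targets)

# loader = DataLoader(dataset, batch_size=16, collate_fn=multiscale_collate)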
st179827 | As is given here 17:
torch.nn.DataParallel is a model wrapper that enables parallel GPU utilization. To save a DataParallel model generically, save the model.module.state_dict() . This way, you have the flexibility to load the model any way you want to any device you want.
Considering the discussion just above this, of saving GPU models and loading on CPU etc. , I’m guessing this line refers to the data distributed models being on any one of the available GPUs and the model.module being the underlying module part that somehow will be device agnostic. (Please correct me here)
That being said, what happens to the optimizer internal state variables (mentioned here 18). They too would be on whichever GPU rank they were saved from. Is this issue just solved by using the map_location argument to torch.load()? If so, why the special treatment for the DistributedDataModel (model.module.state_dict() used while saving)? |
st179828 | Solved by pietern in post #2
That’s correct. If you try to serialize the nn.DataParallel module itself then it contains the list of devices you parallelize for, the dimension to split the input batch on, etc. When you serialize the inner module then none of that is included and is up to you to do again (or not) after you load … |
st179829 | shubhvachher:
Considering the discussion just above this, of saving GPU models and loading on CPU etc. , I’m guessing this line refers to the data distributed models being on any one of the available GPUs and the model.module being the underlying module part that somehow will be device agnostic. (Please correct me here)
That’s correct. If you try to serialize the nn.DataParallel module itself then it contains the list of devices you parallelize for, the dimension to split the input batch on, etc. When you serialize the inner module then none of that is included and is up to you to do again (or not) after you load it.
shubhvachher:
That being said, what happens to the optimizer internal state variables (mentioned here ). They too would be on whichever GPU rank they were saved from. Is this issue just solved by using the map_location argument to torch.load() ? If so, why the special treatment for the DistributedDataModel ( model.module.state_dict() used while saving)?
Any optimizer state will likely include the devices that those state variables live on. You’re correct to say you can use map_location to remap at load time. Alternatively, you can copy the optimizer state to CPU first, then serialize, and then not worry about it at load time. What special treatment exactly are you talking about? |
st179830 | pietern:
the nn.DataParallel module itself then it contains the list of devices you parallelize for, the dimension to split the input batch on, etc. When you serialize the inner module then none of that is included and is up to you to do again (or not) after you load it.
Nothing more! My model is training but the error doesn’t seem to be coming down… I was just exploring the possibilities… This clears it up! Thanks |
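For reference, a sketch of the save/load pattern discussed in this thread (the file path, plain_model, and optimizer names are placeholders):
import torch

# Saving: strip the DataParallel wrapper so the checkpoint is device-agnostic.
torch.save({
    "model": model.module.state_dict(),
    "optimizer": optimizer.state_dict(),
}, "checkpoint.pth")

# Loading: remap whatever devices the tensors were saved from onto the CPU
# (or any target device), then re-wrap if desired.
checkpoint = torch.load("checkpoint.pth", map_location="cpu")
plain_model.load_state_dict(checkpoint["model"])
optimizer.load_state_dict(checkpoint["optimizer"])
# model = torch.nn.DataParallel(plain_model).cuda()  # re-wrap afterwards if needed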
st179831 | My dataset is small, and I want to load all my dataset into GPU memory when a dataset is created. Meanwhile, I still want to use torch.utils.data.DataLoader because of compatibility with other situations where I load my data on the fly.
My short working example is as follows.
import numpy as np
from torch.utils.data import TensorDataset, DataLoader
import torch
data = np.array([[1,2,3], [4,5,6]])
# I move dataset to GPU first
ds = TensorDataset(torch.Tensor(data).cuda())
dl = DataLoader(ds, batch_size=1, num_workers=1, shuffle=True)
for x in dl:
    print(x)
However, it crashes.
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-10-11b3bb8f6574> in <module>
6 ds = TensorDataset(torch.Tensor(data).cuda())
7 dl = DataLoader(ds, batch_size=1, num_workers=1, shuffle=True)
----> 8 for x in dl:
9 print(x)
~/.conda/envs/ml/lib/python3.6/site-packages/torch/utils/data/dataloader.py in __next__(self)
635 self.reorder_dict[idx] = batch
636 continue
--> 637 return self._process_next_batch(batch)
638
639 next = __next__ # Python 2 compatibility
~/.conda/envs/ml/lib/python3.6/site-packages/torch/utils/data/dataloader.py in _process_next_batch(self, batch)
656 self._put_indices()
657 if isinstance(batch, ExceptionWrapper):
--> 658 raise batch.exc_type(batch.exc_msg)
659 return batch
660
RuntimeError: Traceback (most recent call last):
File "/home/swyoon/.conda/envs/ml/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in _worker_loop
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/swyoon/.conda/envs/ml/lib/python3.6/site-packages/torch/utils/data/dataloader.py", line 138, in <listcomp>
samples = collate_fn([dataset[i] for i in batch_indices])
File "/home/swyoon/.conda/envs/ml/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 40, in __getitem__
return tuple(tensor[index] for tensor in self.tensors)
File "/home/swyoon/.conda/envs/ml/lib/python3.6/site-packages/torch/utils/data/dataset.py", line 40, in <genexpr>
return tuple(tensor[index] for tensor in self.tensors)
RuntimeError: CUDA error: initialization error
Is using a DataLoader supported when all my data is already loaded on the GPU?
Is the error an intended feature of Pytorch? |
st179832 | When I write a custom data loader which simply batches through the TensorDatasest, everything is fine.
Then I guess the problem is the multiprocessing, so I tried num_workers=0 which disables multiprocessing in the original DataLoader.
Now it works. |
st179833 | If you would like to use multiple workers in your DataLoader, pass the data tensor as a CPU tensor to TensorDataset and push each batch to the GPU using:
ds = TensorDataset(torch.from_numpy(data))
dl = DataLoader(ds, batch_size=1, num_workers=1, shuffle=True)
for x in dl:
    x = x.to('cuda', non_blocking=True)
Otherwise multiple CUDA contexts will be initialized yielding your error. |
st179834 | @ptrblck Thanks for the reply.
Which way would you recommend in terms of performance? The reason why I’m putting all my data to GPU first is to increase the speed.
When using non_blocking=True, is it okay not to use pin_memory=True in DataLoader? The torch.Tensor.cuda() doc 40 says non_blocking is effective when the data is in pin memory. |
st179835 | If your data is already on the GPU, you won’t really need multiple workers, since most likely you are just slicing the data and passing it to the model (or passing all the data at once). However, this approach uses some of your limited GPU memory, which might otherwise be used for e.g. a bigger model.
The op will be non_blocking, if pin_memory was set to True, so you should do it. I’ve missed that part in my code snippet so thanks for pointing it out. |
st179836 | The above is exactly the same issue I’m running into now. I’ve got a working version that loads the data into gpu memory without cuda calls (or at least without causing this issue) via pyarrow.read_parquet and it’s 2.3x faster with 4 workers than it is with 0, despite all of the data being in pinned gpu memory.
Following the instructions for multiprocessing best practices 70 I’ve tried to set torch’s multiprocessing start method to spawn (I’ve also tried forkserver) but when I do this I run into invalid device pointer: 0x7f8a8c000000 at /pytorch/aten/src/THC/THCCachingAllocator.cpp:301 exceptions.
If the data is already on GPU I’d expect 0 workers would be fast, but there’s a big performance hit, so much so that it’s not really worth doing from what I can see. I’m relatively new to multiprocessing so there’s a good chance I’m doing something wrong, and it may just make sense to stick to CPU dataloading to keep the GPU memory available for the model, but I’d love to figure out a more robust solution, particularly in light of in GPU preprocessing options like RAPIDS. |
st179837 | Even_Oldridge:
…it’s not really worth doing from what I can see. I’m relatively new to multiprocessing so there’s a good chance I’m doing something wrong, and it may just make sense to stick to CPU dataloading to keep the GPU memory available for the model, but I’d love to figure out a more robust solution, particularly in light of in-GPU preprocessing options like RAPIDS.
It doesn’t seem to me that the data will be pre-loaded to the GPU. Also if the next operation depends on the data this doesn’t really gain any performance. Also if we’re moving data back to the CPU the data-loading isn’t overlapped. Is there any way to pre-load the data in the main CUDA context? |
st179838 | The easiest way to preload all data on GPU is by simply copying it there (Tensor.cuda()) and maintaining a Python list with all samples that you want to process. Then, instead of iterating over a dataset, you iterate over a Python list with pre-existing CUDA tensors. The reasons these multiprocessing data loaders exist are 1) datasets are typically much larger than a single GPU can hold resident in memory, 2) a single CPU cannot preprocess enough examples per second to saturate GPU throughput. |
st179839 | Hi, I am new to DistributedDataParallel, but I find that almost all the example code in PyTorch saves only the rank 0 model. I want to know: do we need to save the model from every rank if we do not sync the BN parameters in our model? In that case each rank seems to have a different model, since the BN statistics are not synced, yet we often use all the ranks for inference. So, if we want to get the same inference result as when using all the GPUs for inference, should we save the model of each rank and load all of them for inference? Hope you can help, thanks very much! |
st179840 | Yes, that’s correct. Either you only run inference on the model on rank 0, or you explicitly replicate the BN stats from rank 0 to the other ranks prior to inference. The model weights should be identical across processes and only the BN stats should be different. |
st179841 | I found that using only the rank 0 model trained with DistributedDataParallel for inference on the validation set does not perform as well as using each rank’s own model; using only the rank 0 model usually gives 0.5% to 1% lower accuracy in a classification task. So do we need to allreduce the model during training, for example at the beginning or end of every epoch? Hoping for your detailed reply. Thanks very much. |
st179842 | How do you do validation with a model wrapped with DistributedDataParallel? I imagine you call .eval() everywhere and have every worker validate a subset of your validation set, followed by allreduce of the accuracies. Is this correct?
Another piece of information that might be useful here: if you construct DistributedDataParallel with broadcast_buffers=True then all ranks will receive a copy of the registered buffers prior to executing a forward pass. With respect to batch normalization this means they all use the running stats from rank 0. |
st179843 | Yes, I use every rank to validate a subset of my validation set; that is the default behavior with DistributedDataParallel when we define the data loader. Anyhow, that is not the important point.
Since the default setting of broadcast_buffers is True, the batch normalization statistics are effectively those of rank 0, and every other rank shares rank 0’s statistics.
But in fact the accuracy is different: (1) when I only save the model on rank 0 and all the other ranks load rank 0’s model for inference, the accuracy is 75%; (2) when each rank saves its own model and loads its own model for inference, the accuracy can be 75.8%.
Note that model.eval() freezes the BN statistics.
I think the main differences might be: (1) each rank’s model is not the same; they end up with different models, at least in the BN parameters.
(2) Furthermore, in the backward pass, different BN parameters might cause the gradients computed in every layer to differ; I am not sure whether the gradient is computed on every rank or whether only the gradient computed on rank 0 is broadcast.
So, what is the truth? Thank you very much. |
st179844 | The only difference I can imagine is that the BN parameters don’t get replicated from rank 0 to the other ranks after the final call to backward, since they are frozen for evaluation after that. I highly doubt that can be the cause of a 0.8% accuracy difference though. What is your approach for aggregating the accuracy numbers?
Regarding (2): the gradient is computed in every process and then averaged across them. This means that if you start with identically initialized model weights and use the same optimizer in all processes, the optimizer will take the same step everywhere. |
st179845 | Thanks for your kind reply.
(1)Yes, i am sure that get 0.8% accuracy difference, if you run the pytorch / example code for imagenet, you will get the same truth, that difference definitely exists, some time the difference is small 0.1%, some time you may get 0.3% or larger difference between accuracy. Difference seed and num_worker might cause different gap between the two method i am not sure for the truth.
(2)By the way, i am not able to solve the reproducible problem for pytorch with distributeddataparallel. i have try to follow the setting with cudnn.benchmark=False cudnn.deterministic=True, and set the torch seed and cuda seed for each rank and dataloader worker_init_fn ( numpy seed and random.seed ) for dataloader and fix the num_workers to a fixed number every time runing my code, but result always different. how can we reproduce our own experiments with the same setting.
(3)As for the accuracy, each rank save the index for error file and write to a json file. at last i summary all the json file to get the accuracy. this is an accurate method. i have also follow the dist.all_reduce method to get a similar result. in both case, difference quite exists and some time quite be unexpected.
so, i do really think sync bn parameters is quite important for some tasks, when we can not shared a big batch size.
(4) As for the difference between our model in each rank , should we all_reduce all model for each rank right after each epoch, so bn parameter get all_reduce for average. this may not cause too much time when we run in multi-node. |
st179846 | Regarding the random seed, do you also call torch.manual_seed()? There are a few that you have to initialize unfortunately.
Regarding the BN stats sync. If you use DDP with broadcast_buffers=True then it will replicate the BN statistics before every forward pass and make sure all ranks have a copy of the stats used by rank 0. The models must be identical after this step happens. You could try to confirm this by passing an identical input to the model and verify they produce an identical output. |
st179847 | So, how can I make sure each rank shares the same model when I use DDP?
To be specific, do I need to set the torch seed to be the same for each rank, so the initialized model is the same on every rank at the beginning?
It seems that each rank displays different output.
Another problem is how to make the experiments reproducible? Thanks very much. |
st179848 | The initialized model is broadcast from rank 0 to the other processes from the DDP constructor, so the initialized parameters will be identical when you first call forward.
Numerical reproducibility depends on determinism on the inputs/transforms, deterministic partitioning of the dataset, as well as deterministic execution of your model (e.g. there are modes where cuDNN does not execute deterministically). If at any of these you have some non-determinism, numerical reproducibility is impossible. |
st179849 | Hey all,
Why don’t you take a look at SyncBatchNorm 20.
From the link: For a network, already initialized, that has any BatchNormalization layers you can do :
sync_bn_network = torch.nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group) |
st179850 | shubhvachher:
sync_bn_network = torch.nn.SyncBatchNorm.convert_sync_batchnorm(network, process_group)
Thanks a lot. Since this new layer in PyTorch 1.1 is quite slow when applied to multi-node distributed training, I do not plan to use it; training can slow down by 2x when I train on 4 nodes with 4 GPUs each. Multi-node GPU communication is a bottleneck in my case. |
st179851 | Oh! That’s interesting. I’m currently using the SyncBatchNorm layer in single-node 8-GPU training.
Did you try to implement your own BN synchronization code? I see that you had an idea here
weiwei:
so do we need to allreduce the model in the training
Did you implement synchronization? Also, were you successful in getting your accuracy up?
weiwei:
(2)Furthermore, in the backward process, different bn parameters might cause the gradient calculated different in every layer, i am not sure whether the gradient is calculated in every each rank or they only broadcast the gradient calculated in rank 0.
Also, did you test the above? If you did, what were the results? |
st179852 | Kakao Brain announces torchgpipe 33, an implementation of GPipe 9 in PyTorch as a handy library.
from torchgpipe import GPipe
model = nn.Sequential(a, b, c, d)
model = GPipe(model, balance=[1, 1, 1, 1], chunks=8)
output = model(input)
GPipe is a scalable pipeline parallelism library published by Google Brain. It leverages the training of a giant model which requires much memory. For instance, Google trained AmoebaNet-B with 557M parameters over GPipe.
GitHub: kakaobrain/torchgpipe 33 - A GPipe implementation in PyTorch. |
st179853 | Hi, I’m trying to use torchgpipe on some other models. But the training time increased with GPipe. And I can’t reproduce the paper’s result with torchgpipe’s example resnet101. I think I might measure the training time in a wrong way. How did you measure the training time of resnet101 on GPipe? Thanks ahead! |
st179854 | I would expect the training time to take a hit, because you’re moving much more data around compared to a direct forward/backward. All of that overhead will come at a performance penalty. If I understand correctly, pipelining with this approach is best suited for allowing extremely large models to train if you have limited memory available. |
st179855 | Hi, thank you for your question.
There are some conditions to optimize a model with GPipe:
The model requires a large amount of memory.
The original batch size is not so small. Because we need a micro-batch which is not too small. If a micro-batch is too small, GPU wouldn’t be utilized.
Well balanced. The imbalance between partitions makes GPipe underutilized.
I just published my ResNet-101 performance benchmark 19. If you have the same environment with me, I expect you get the same result. Even you don’t have the same environment, the code will be helpful. Especially, you can check the balance of ResNet-101 what I’ve used, and how I measured the training time. |
st179856 | Yeah, I totally agreed. I guess this approach only suits some models. It’s just in the paper they achieved really great result on AmoebaNet, in terms of both memory usage and training time, which made me doubt my result. |
st179857 | Thanks a lot for your explanation! My GPU memory is too small for a large batch size, which limits my tests. I guess the main purpose of GPipe is to enable us to train extremely large models, not to accelerate the training process.
I noticed that in the your experiments, pipeline methods use different batch size. Isn’t this going to affect the model performance? |
st179858 | Good question. Yes, it affects. My experiment reproduces a performance benchmark in the original paper. The benchmark also uses adjusted batch size to maximize throughput regardless of the model accuracy.
In our experiments, we measured the effects of pipeline parallelism and recomputation on the model throughput of ResNet-101 and AmoebaNet-D (4, 512). We fixed the image size at 224 × 224. We adjusted the mini-batch size to maximize the throughput. |
st179859 | I just released v0.0.2 of torchgpipe with the detailed documentation 29. This version includes the automatic balancing. |
st179860 | I want to fine-tune a model by replacing only the last linear layer.
I could do that when I used DataParallel module as below
model = nn.DataParallel(model)
...
model.load_state_dict(checkpoint['state_dict'])
...
model.module.fc = torch.nn.Linear(model.module.fc.in_features,
                                  opt.n_finetune_classes)
model.module.fc = model.module.fc.cuda()
In case of DDP, how can I do that?
according to the warning in the document 4, it seems that I cannot change parameters after I load the checkpoint.
This module assumes all parameters are registered in the model by the time it is created. No parameters should be added nor removed later. Same applies to buffers.
Should I change the linear layer first and load the parameters without the linear layer?
Is that a right way? |
st179861 | Yes, try modifying the module first, and once you’re done, wrapping it in nn.DataParallel. |
st179862 | Thanks @pietern. How can I load some part of parameters after wrapping with nn.parallel.DistributedDataParallel? |
st179863 | It is easier to load the parameters prior to wrapping with DDP. You can save/load the wrapped model as well, but then you can no longer use it without DDP. |
st179864 | Hi, is there any interactive way to debug in distributed launch?
In PyTorch 0.4.1, I used pdb with DataParallel; however, it seems that distributed launch splits the process into multiple copies. When I type something in pdb, my input is also split across the different processes, which is not what I expected.
So I’m wondering if there is any method that can help me do interactive debugging in distributed launch? |
st179865 | This happens because all processes share the same input file descriptor. When you type a character, the first process who reads it will get it. This makes interactive debugging almost impossible. What you can try, in lieu of a a proper solution, is close the input descriptor by running sys.stdin.close() on the ranks where you don’t want to run pdb. |
st179866 | I want to load a snapshot from a file on one of the machines running in a distributed setting. From what I see, optimizers aren’t broadcast among machines in such a case. Is there any easy way to do it? |
st179867 | There is no common way to expose optimizer state AFAIK. If you know how you can access the state of your optimizer then you’ll be able to synchronize it by using torch.distributed collectives directly, e.g. broadcast. |