st179868 | Hi all,
I’m a newbie to PyTorch and struggling with distributed training. Currently, I’m trying to implement a GAN-like training strategy. The training consists of two stages:
Fix the task network and train the discriminator; my workflow is as follows:
src_data -> T() ->detach()-> D() -> loss(src_pred, src_label)
tgt_data -> T()->detach()->D()->loss(tgt_pred, tgt_label)
Fix the discriminator and train the task network; my workflow is as follows:
src_data->T()->supervised_loss
tgt_data->T()->D()->-1*loss(tgt_pred, tgt_label)
The task network T() and the discriminator network D() are both wrapped in DDP, and they are placed in different process groups. The task network is trained with a supervised loss on labeled data and fine-tuned with the adversarial loss on unlabeled data.
For this setting I have 2 questions:
Is this the correct way to combine two DDP models? Or do I have to wrap them into one single module first and then place that under DDP?
During training of the task network, I have to fix the discriminator’s parameters. Right now I set requires_grad of all parameters in the discriminator to False and turn them back to True after loss.backward() is called. Is there anything else that needs to change? I found that DDP doesn’t allow unused parameters now, but it seems okay to use a module that doesn’t require gradients at all. Am I doing this correctly?
I’d appreciate it if somebody could tell me the best practice for implementing multiple models for adversarial training. Thanks in advance! |
st179869 | I have opened a discussion here about a similar question regarding two DDP modules in a GAN setting Calling DistributedDataParallel on multiple Modules? 107. I’m still trying to determine if one process group can suffice, but it seems like the safest course of action is to use separate groups for G and D.
Regarding setting requires_grad to False on D while back propagating G’s loss, I have been meaning to implement that same thing but never got around to it. It seems like the logical approach, as it is just wasting compute time calculating gradients for D when they are going to be discarded. |
st179870 | mdlockyer:
Regarding setting requires_grad to False on D while back propagating G’s loss, I have been meaning to implement that same thing but never got around to it. It seems like the logical approach, as it is just wasting compute time calculating gradients for D when they are going to be discarded.
I think that doing exactly that would make it work with a single process group, because you no longer race the allreduce calls from the two models. Also, I think you could put the discriminator in eval mode when doing this, which sidesteps some of the synchronization code paths in DistributedDataParallel. |
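For illustration, a minimal sketch of the freezing logic discussed above. D, G, the optimizers, the adversarial loss, and the data are placeholders rather than anyone's actual code, and the helper set_requires_grad is made up for this example:
def set_requires_grad(module, flag):
    for p in module.parameters():
        p.requires_grad_(flag)

# Train G: freeze D so no gradients are computed (or reduced) for it.
set_requires_grad(D, False)
D.eval()                          # also skips some DDP sync paths, as noted above
g_loss = adv_loss(D(G(z)), real_labels)
g_loss.backward()                 # only G's gradients are computed and allreduced
g_opt.step()
g_opt.zero_grad()

# Restore D for its own update step.
D.train()
set_requires_grad(D, True)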
st179871 | Hi!
I’m implementing DistributedDataParallel in my code. However, when I start it with PyTorch’s launch module, one task starts training before the others have begun. This is different from not using the launch module, where I see the processes wait for each other before starting the next epoch, etc.
I’m using an implementation that mirrors this Medium article 5. I’ve been struggling with this issue for two days now, so any help would be extremely appreciated!
Thanks! |
st179872 | When torch.nn.parallel.DistributedDataParallel is initialized with the right distributed context then every iteration should happen in lock step between all processes. If a single process starts going by itself, I think there is something missing in initialization.
Can you share a code snippet how you initialize all of them? |
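For reference, here is a minimal sketch of the per-process initialization that the launch utility expects (it passes --local_rank and sets RANK, WORLD_SIZE, MASTER_ADDR and MASTER_PORT in the environment); MyModel is just a placeholder:
import argparse
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)
args = parser.parse_args()

torch.cuda.set_device(args.local_rank)
dist.init_process_group(backend="nccl", init_method="env://")

model = MyModel().cuda(args.local_rank)   # MyModel is a placeholder
model = DistributedDataParallel(model, device_ids=[args.local_rank])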
st179873 | Hello,
I’m using an RTX 2080 Ti and a GTX 1050 Ti in a two-node cluster with PyTorch. The problem comes when I execute it distributed: both of them take the same time solving MNIST. There are no sync points. Can anyone help me? |
st179874 | My cuda version is 10.0 in RTX and 9.2 in GTX. Im using pytorch 1.2 with mpi 3.1 |
st179875 | Are you using DDP?
If so, the slower card might sync the faster one.
Or are you profiling the cards separately? If so, your code might have other bottlenecks (e.g. data loading). Have you profiled it? |
st179876 | We don’t guarantee compatibility between different versions of PyTorch. You say you have one version compiled against CUDA 10 and another against CUDA 9.2. This might work, but YMMV. |
st179877 | Yes, im using DPP, but im using asynchronous all_reduce to average gradients, so theres no synchronization if I dont make explicit a.wait() (which im not doing just to test), right? In that case, training times still the same which makes no sense for me. Am I losing anything? |
st179878 | Hi,
I am using a loss function that contains the gradient of the output w.r.t. to the input of the network, which I obtained with autograd.grad.
I am interested in training my model in parallel using the DistributedDataParallel container. However, as one WARNING on the doc page mentions, DistributedDataParallel does not support autograd.grad. If I understand correctly, this is because the local model parameters (not the ones averaged across devices) will be used if I call autograd.grad after the forward call. Of course, this is incorrect.
Looking into the implementation of DistributedDataParallel, I found the method _sync_params 11 is called at the beginning of the forward method to sync params across devices. My question is:
Is it OK for me to call _sync_params 11 once more before I use autograd.grad to compute the gradient of the output w.r.t. the input and then use it in my loss function? That way, the gradient computation would use the averaged parameters. Are there any caveats? |
st179879 | The problem here is that DistributedDataParallel performs gradient averaging across processes by hooking into the AccumulateGrad 15 function. This allows for performing averaging for the last most gradients while autograd is still running.
Would it be possible for you to first compute the initial loss, call autograd.backward instead of autograd.grad, and have it accumulate the first order gradients in the model parameters? Then you could detach those and compute something else before letting the optimizer do its thing. If not, then you’ll have to perform your own averaging, I think. |
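For illustration, a minimal sketch of the "perform your own averaging" fallback mentioned above, using a plain (non-DDP) model; criterion, y, optimizer and world_size are placeholders, and the input-gradient term in the loss is just an example:
import torch
import torch.distributed as dist

x.requires_grad_(True)
out = model(x)
# Gradient of the output w.r.t. the input, kept in the graph for the loss.
dout_dx, = torch.autograd.grad(out.sum(), x, create_graph=True)
loss = criterion(out, y) + dout_dx.pow(2).mean()
loss.backward()

# Manual gradient averaging across processes.
for p in model.parameters():
    if p.grad is not None:
        dist.all_reduce(p.grad, op=dist.ReduceOp.SUM)
        p.grad /= world_size
optimizer.step()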
st179880 | Hello,
As I see in the code, there is a Queue used by background threads in order to communicate the parameters with Allreduce. My question is how these parameters are updated. Allreduce is a blocking collective, so the background threads will wait until all of them have enqueued their parameters. So maybe there is a possibility that the parameters aren’t updated, and a process sometimes isn’t taking into account the parameters of the other processes, which have different data. Am I right? Does this affect precision? How does the parameter update work then? What’s the point of using a Queue? |
st179881 | Which code were you looking at?
In version 1.0 the distributed backend has been updated such that all collectives run asynchronously w.r.t. the main thread, even if they are blocking. For MPI collectives this means they are run on a single background thread. The queue approach was taken in PyTorch before version 1.0. In 1.1 we introduced a new C++ based gradient reduction mechanism (see reducer.cpp 2) that concatenates multiple gradients into a large bucket and performs allreduce against those buckets instead of individual parameters. |
st179882 | Hello, I want to know how the MPI_Allreduce works in asynchronous mode when the gradients are calculated. Suppose we have 3 processes. If the first epoch is finished and only one process have update the gradients, when it takes the gradients from a shared buffer it takes NaN in the SUM of the process that havent finished ?? Im pretty lost here because Allreduce is a blocking primitive but the training doesnt stop for it. |
st179883 | What do you mean by “training doesn’t stop”?
Also, how do you run allreduce in asynchronous mode? The synchronization done by torch.nn.parallel.DistributedDataParallel is done implicitly, when you make autograd compute gradients for your model parameters. It doesn’t return until all the allreduce calls have finished (or in the case of CUDA tensors, until all the NCCL allreduce kernels have been queued). |
st179884 | I am trying to setup distributed training and encountered some problems with initialization of process group.
Since I have a shared file-system between nodes, I chose initialization with file://. But I got this error:
ValueError: Error initializing torch.distributed using file:// rendezvous: rank parameter missing
Then I found in the documentation that “automatic rank assignment is not supported anymore”, although the documentation for init_process_group 2 implies otherwise.
Is there a way to avoid passing the rank explicitly to init_process_group? And what is the point of the file:// rendezvous if I have to pass the rank explicitly? |
st179885 | Good point.
This used to be possible and was not reinstated when we moved to c10d for PyTorch 1.0. I created an issue on GitHub to bring this functionality back: https://github.com/pytorch/pytorch/issues/22128 9. |
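In the meantime, the file-based rendezvous still works if the rank and world size are passed explicitly; a minimal sketch (the shared path is a placeholder):
import torch.distributed as dist

dist.init_process_group(
    backend="nccl",
    init_method="file:///shared_fs/ddp_init_file",
    rank=rank,              # e.g. derived from SLURM_PROCID or an MPI rank
    world_size=world_size,
)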
st179886 | I have a task that every sample have different sizes (or different modules of network to forward), so I can’t put them in batches. But train the samples one by one is very inefficient. How can I paralleling the process?
I think torch.multiprocessing might be one solution. But I’m still not sure how to use it after reading the docs. |
st179887 | A common approach is to pad the inputs to the biggest shape. You still have to make sure that your model works well with padded inputs of course. |
st179888 | I test execution time for these two lines in engine/trainer.py#L64 1. I called it as todevice time
images = images.to(device)
targets = [target.to(device) for target in targets]
I used 2 nodes; each node has 8 GPUs, and each GPU processes 2 images. I ran this command on the first host (on the second host I just replace it with --node_rank=1):
export NGPUS=8
python -m torch.distributed.launch --nproc_per_node=$NGPUS \
--nnodes=2 --node_rank=0 --master_addr="172.17.61.2" --master_port=22876 \
tools/train_net.py --config-file "configs/e2e_faster_rcnn_R_50_FPN_1x.yaml" MODEL.RPN.FPN_POST_NMS_TOP_N_TRAIN 2000 SOLVER.IMS_PER_BATCH 32 SOLVER.BASE_LR 0.04 SOLVER.STEPS "(30000, 40000)" SOLVER.MAX_ITER 50000 TEST.IMS_PER_BATCH 16 OUTPUT_DIR models/tmp-2n8g
The todevice time with 2 nodes (8 GPUs each) is double that of 1 node with 8 GPUs. I also measured other times such as data_time, backbone time, RPN time, backward time, and step() parameter-update time. All of these are very close to the 1-node, 8-GPU case.
I also tested 2 nodes with 16 GPUs each. The result is the same: the todevice time is twice that of one node with 16 GPUs, with each GPU processing 2 images.
I am very confused. Each GPU processes the same number of images in both situations, but the time increases in distributed mode. |
st179889 | That is weird indeed. Can you isolate the problem to data loading (i.e. don’t train a model, just iterate over the data set)? Due to asynchronous nature of CUDA, wall clock time that attribute to data transfer is in fact caused by asynchronous execution of for example autograd, the optimizer, etc. |
st179890 | I use torch.nn.parallel.DistributedDataParallel API in PyTorch1.1 to spawn my multi-card model (2 GPUs) to 8 GPUs. According to official tutorial GETTING STARTED WITH DISTRIBUTED DATA PARALLEL 42, DistributedDataParallel is the recommanded way to parallel one’s model. I am not confident about my implementation and I can’t find other valuable tutorials, so come here for help.
My ideas are simply as follow:
split my 3D CNN model across 2 GPUs (called dev_in and dev_out),
use DistributedDataParallel() to spawn my 2-GPU model across 4 processes; each model replica uses the same random seed to initialize its weights, and no process shares GPUs with another process.
wrap my dataset with the Dataset() and DataLoader() APIs, and manually split each batch equally along the batch dimension, so each process (with 2 GPUs) processes different data with the SAME weights.
after the forward pass in each process, collect the loss values from all processes and average them, then use this averaged loss to compute gradients and update all 4 models in the 4 processes,
after each epoch of training and validation, calculate ACC and AUC scores for the training and validation datasets respectively.
after one epoch over the training dataset, use the validation dataset to validate the model. Currently I use the single-process model (the one from before it was wrapped by the DistributedDataParallel() API) for validation, because something goes wrong that I can’t fix when I use the model returned by DistributedDataParallel().
Currently, here is my code related:
# sample/train.py
import tempfile
import torch.distributed as dist
import torch.nn as nn
from torch import optim
import torch.multiprocessing as mp
from torch.nn.parallel import DistributedDataParallel
from torch.distributed import Backend
from time import time
from sample.networks.XXXXNet import XXXXNet
from sample.data import XXXXDataSet
from torch.utils.data import DataLoader
import yaml
import torch
import os
from yaml import CLoader as Loader
from sklearn.metrics import accuracy_score, auc, roc_curve
import numpy as np
from torch.utils.tensorboard import SummaryWriter
cfg = yaml.load(
open(os.path.join(os.path.abspath(
os.path.join(os.path.dirname(__file__), "../config/config.yml")))),
Loader=Loader
)["DATASET"][0]
np.random.seed(cfg["SEED"])
torch.random.manual_seed(cfg["SEED"])
tempfile.tempdir = os.path.abspath("~/tmp")
NAME = "%dGPUs_1e-6" % cfg["WORLD_SIZE"]
writer = SummaryWriter(log_dir=os.path.join(os.path.dirname(__file__), "logs/tb_logs/%s" % NAME))
def setup_env(rank, world_size):
"""
Initialize the distributed environment.
:param rank: Rank of the current process.
:param world_size: Number of processes participating in the job.
:return:
"""
assert isinstance(world_size, int) and world_size > 0
assert isinstance(rank, int) and 0 <= rank < world_size
os.environ['MASTER_ADDR'] = cfg["MASTER_ADDR"]
os.environ['MASTER_PORT'] = cfg["MASTER_PORT"]
# Initialize the process group
dist.init_process_group(Backend.NCCL, rank=rank, world_size=world_size)
# Explicitly setting seed to make sure that models created in two processes
# start from same random weights and biases.
torch.manual_seed(cfg["SEED"])
def cleanup_env():
"""
Destroy the default process group.
:return:
"""
dist.destroy_process_group()
def train_model(rank, world_size, offset=1, ):
"""
Training model.
:param rank: Rank of the current process.
:param world_size: The number of processes in the current process group.
:param offset: The index of first GPU to use.
:return:
"""
assert isinstance(world_size, int) and world_size > 0
assert isinstance(rank, int) and 0 <= rank < world_size
assert isinstance(offset, int) and offset >= 0
setup_env(rank, world_size)
# Setup mp_model and devices for this process
dev_in = rank * 2 + offset
dev_out = rank * 2 + 1 + offset
mp_model = XXXXNet(dev_in=dev_in, dev_out=dev_out)
ddp_mp_model = DistributedDataParallel(mp_model)
loss_fn = nn.CrossEntropyLoss()
optimizer = optim.Adam(
ddp_mp_model.parameters(),
lr=float(cfg["LEARNING_RATE"]) * world_size,
weight_decay=float(cfg["L2"]),
)
old_lr = float(cfg["LEARNING_RATE"]) * world_size
batch_size = world_size * cfg["BATCH_SIZE_PER_CARD"]
# Training dataset
dataset_train = XXXXDataSet(val=False, shape=cfg["CUBE_SIZE"][1:])
data_loader_train = DataLoader(dataset_train, batch_size=batch_size, num_workers=cfg["NUM_WORKERS"])
# Validation dataset
dataset_val = XXXXDataSet(val=True, shape=cfg["CUBE_SIZE"][1:])
data_loader_val = DataLoader(dataset_val, batch_size=cfg["BATCH_SIZE_PER_CARD"], num_workers=cfg["NUM_WORKERS"])
with open(os.path.join(os.path.dirname(__file__), "logs/" + NAME + ".log"), "w") as log_file:
def _print(string, file=log_file, target_rank=0):
"""
Print to cmd and log file simultaneously.
:param string: Content need to print.
:param file: Log file object.
:return:
"""
if target_rank == -1:
print(string, file=file)
print(string)
file.flush()
elif target_rank == rank:
print(string, file=file)
print(string)
file.flush()
_print(str(cfg))
no_optim = 0
total_epoch = cfg["EPOCHS"]
epoch_best_loss_train = 100.
epoch_best_loss_val = 100.
for epoch in range(1, total_epoch + 1):
# ==TRAINING====TRAINING====TRAINING====TRAINING====TRAINING====TRAINING==
tic_train = time()
# =====ACC&AUC start=====
prods_train, gts_train = [], []
# ======ACC&AUC end======
data_loader_iter_train = iter(data_loader_train)
train_epoch_loss = 0
for img, label in data_loader_iter_train:
inp = img[rank * cfg["BATCH_SIZE_PER_CARD"]: (rank + 1) * cfg["BATCH_SIZE_PER_CARD"]]
label = label[rank * cfg["BATCH_SIZE_PER_CARD"]: (rank + 1) * cfg["BATCH_SIZE_PER_CARD"]].to(dev_out)
# Calculate loss
if inp.size()[0] < 2:
_print("inp is None!!!!!!!!!!!!", target_rank=-1)
train_loss = torch.tensor(0.)
else:
optimizer.zero_grad()
pred = ddp_mp_model(inp)
train_loss = loss_fn(pred, label)
train_loss_lst = [torch.zeros_like(train_loss)] * world_size
prods_train_lst = [torch.zeros_like(pred)] * world_size
label_train_lst = [torch.zeros_like(label)] * world_size
dist.all_gather(prods_train_lst, pred) # Sync between all processes
dist.all_gather(label_train_lst, label) # Sync between all processes
dist.all_gather(train_loss_lst, train_loss) # Sync between all processes
dist.all_reduce(train_loss, op=dist.ReduceOp.SUM) # Sync between all processes
train_loss /= torch.tensor(train_loss_lst).nonzero().size(0)
# Backward propagate and update weights
train_loss.backward()
optimizer.step()
train_epoch_loss += train_loss.item()
# =====ACC&AUC start=====
prods_train.append(torch.cat(prods_train_lst, dim=0).cpu().detach().numpy())
gts_train.append(torch.cat(label_train_lst, dim=0).cpu().numpy())
prods_train = np.concatenate(tuple(prods_train))
gts_train = np.concatenate(tuple(gts_train))
prods_train = prods_train[:, 1]
prods_01 = np.where(prods_train > 0.5, 1, 0) # Turn probability to 0-1 binary output
acc_NN = accuracy_score(gts_train, prods_01)
false_positive_rate, recall, thresholds = roc_curve(gts_train, prods_train, pos_label=1)
roc_auc = auc(false_positive_rate, recall)
# ======ACC&AUC end======
train_epoch_loss /= len(data_loader_iter_train)
_print("******************************")
_print("epoch[%03d/%03d], time: %02dm:%02ds" %
(epoch, cfg["EPOCHS"], int(time() - tic_train) // 60, int(time() - tic_train) % 60))
_print("train loss = %6.4f" % train_epoch_loss)
_print("CUBE_SIZE: %s" % str(cfg["CUBE_SIZE"]))
_print("ACC = %6.4f, AUC = %6.4f" % (acc_NN, roc_auc))
# ==Validation====Validation====Validation====Validation====Validation====Validation==
_print("------------------------------")
mp_model.eval()
tic_val = time()
# =====code for ACC&AUC start=====
prods_val = []
gts_val = []
# ======code for ACC&AUC end======
data_loader_iter_val = iter(data_loader_val)
val_epoch_loss = 0
with torch.no_grad():
for val_img, val_label in data_loader_iter_val:
val_label = val_label.to(dev_out)
# Calculate predicts and loss
val_pred = ddp_mp_model(val_img)
val_loss = loss_fn(val_pred, val_label)
val_epoch_loss += val_loss.item()
# =====code for ACC&AUC start=====
val_pred = val_pred.cpu().detach().numpy()
val_label = val_label.cpu().numpy()
prods_val.append(val_pred)
gts_val.append(val_label)
prods_val = np.concatenate(tuple(prods_val))
gts_val = np.concatenate(tuple(gts_val))
prods_val = prods_val[:, 1]
prods_01_val = np.where(prods_val > 0.5, 1, 0) # Turn probability to 0-1 binary output
acc_NN_val = accuracy_score(gts_val, prods_01_val)
false_positive_rate_val, recall_val, thresholds_val = roc_curve(gts_val, prods_val, pos_label=1)
roc_auc_val = auc(false_positive_rate_val, recall_val)
# ======code for ACC&AUC end======
val_epoch_loss /= len(data_loader_iter_val)
_print("validation time: %02dm:%02ds" % (int(time() - tic_val) // 60, int(time() - tic_val) % 60))
_print("validation loss = %6.4f" % val_epoch_loss)
_print("validation ACC = %6.4f, validation AUC = %6.4f" % (acc_NN_val, roc_auc_val))
if rank == 0:
writer.add_scalars(main_tag="lr", tag_scalar_dict={"train": old_lr}, global_step=epoch)
writer.add_scalars(main_tag="time",
tag_scalar_dict={"train": time() - tic_train,
"val": time() - tic_val}, global_step=epoch)
writer.add_scalars(main_tag="loss",
tag_scalar_dict={"train": train_epoch_loss,
"val": val_epoch_loss}, global_step=epoch)
writer.add_scalars(main_tag="ACC",
tag_scalar_dict={"train": acc_NN,
"val": acc_NN_val}, global_step=epoch)
writer.add_scalars(main_tag="AUC",
tag_scalar_dict={"train": roc_auc,
"val": roc_auc_val}, global_step=epoch)
mp_model.train()
# ==Validation End====Validation End====Validation End====Validation End====Validation End==
if train_epoch_loss >= epoch_best_loss_train:
no_optim += 1
else:
no_optim = 0
epoch_best_loss_train = train_epoch_loss
torch.save(ddp_mp_model.state_dict(),
os.path.join(os.path.dirname(__file__), "weights/" + NAME + ".th"))
if no_optim > 6:
_print("early stop at [%03d] epoch" % epoch)
break
if no_optim > 3:
if old_lr < 5e-7:
break
ddp_mp_model.load_state_dict(torch.load(
os.path.join(os.path.dirname(__file__), "weights/" + NAME + ".th")))
new_lr = old_lr / 5.0
for param_group in optimizer.param_groups:
param_group['lr'] = new_lr
_print("update learning rate: %f -> %f" % (old_lr, new_lr))
old_lr = new_lr
_print("******************************")
_print("Finish!")
cleanup_env()
def ddp_train(demo_fn, world_size):
"""
:param demo_fn: Function.
:param world_size: The number of processes in the current process group.
:return:
"""
mp.spawn(demo_fn,
args=(world_size,),
nprocs=world_size,
join=True)
if __name__ == '__main__':
ddp_train(
train_model,
world_size=cfg["WORLD_SIZE"],
)
writer.close()
And here is another .py file
# sample/networks/XXXXNet.py
import yaml
from yaml import CLoader as Loader
import torch.nn.functional as F
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.distributed import Backend
cfg = yaml.load(
open(os.path.join(os.path.abspath(
os.path.join(os.path.dirname(__file__), "../../config/config.yml")))),
Loader=Loader
)["DATASET"][0]
non_linearity = nn.LeakyReLU
class FireModule3D(nn.Module):
"""
FireModule3D module
(Tested 5.10)
"""
def __init__(self, in_channels, out_channels, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True):
"""
Init function
:param in_channels: Number of input channels.
:param out_channels: Number of output channels.
:param kernel_size: Kernel size.
:param dilation: Dilation rate of dilated convolution.
:param bias: Whether to use bias.
:param squeeze_ratio: Squeeze ratio of Fire Module.
:param pct_3x3: Percent of 3x3 convolution in expand layer.
:param activation: Activation function.
:param use_bn: Whether to use batch normalization.
:param momentum: The value used for the running_mean and running_var computation.
:param use_dp: Whether to use dropout.
:param use_bypass: Whether to use bypass connection.
"""
super(FireModule3D, self).__init__()
self.use_bn = use_bn
self.use_dp = use_dp
self.use_bypass = use_bypass
e_i = out_channels
s_1x1 = int(squeeze_ratio * e_i) # number of channels in squeeze 1x1 layer
e_3x3 = int(pct_3x3 * e_i) # number of channels in expand 3x3 layer
e_1x1 = e_i - e_3x3
self.activation = activation(inplace=True)
self.squeeze1x1 = nn.Conv3d(in_channels=in_channels, out_channels=s_1x1,
kernel_size=1, dilation=1, groups=1, bias=bias)
self.expand1x1 = nn.Conv3d(in_channels=s_1x1, out_channels=e_1x1,
kernel_size=1, dilation=1, groups=1, bias=bias)
self.expand3x3 = nn.Conv3d(in_channels=s_1x1, out_channels=e_3x3, kernel_size=kernel_size,
padding=1, dilation=dilation, bias=bias)
# Bypass connection
if self.use_bypass:
if in_channels != out_channels:
self.bypass = nn.Conv3d(in_channels=in_channels, out_channels=out_channels,
kernel_size=1, bias=bias)
else:
self.bypass = None
if self.use_bn:
self.bn_s1x1 = nn.BatchNorm3d(num_features=s_1x1, momentum=momentum)
self.bn_e1x1 = nn.BatchNorm3d(num_features=e_1x1, momentum=momentum)
self.bn_e3x3 = nn.BatchNorm3d(num_features=e_3x3, momentum=momentum)
if self.use_dp:
self.dp = nn.Dropout2d(0.5)
def forward(self, x):
"""
Forward computation function.
:param x: Input tensor.
:return: Result tensor.
"""
# Squeeze 1x1 layer
squeeze = self.squeeze1x1(x)
if self.use_bn:
squeeze = self.bn_s1x1(squeeze)
squeeze = self.activation(squeeze)
# Expand 1x1 layer
expand1x1 = self.expand1x1(squeeze)
if self.use_dp:
expand1x1 = self.dp(expand1x1)
if self.use_bn:
expand1x1 = self.bn_e1x1(expand1x1)
# Expand 3x3 layer
expand3x3 = self.expand3x3(squeeze)
if self.use_dp:
expand3x3 = self.dp(expand3x3)
if self.use_bn:
expand3x3 = self.bn_e3x3(expand3x3)
merge = self.activation(torch.cat([expand1x1, expand3x3], dim=1))
if self.use_bypass: # Bypass connection
if self.bypass is not None:
x = self.bypass(x)
merge = merge + x
return merge
class XXXXNet(nn.Module):
def __init__(self, nb_class=2, dev_in=None, dev_out=None):
super(XXXXNet, self).__init__()
self.device1 = dev_in
self.device2 = dev_out
self.conv0 = nn.Sequential(nn.Conv3d(1, 8, kernel_size=7, stride=2, padding=3, bias=False),
non_linearity(inplace=True)).to(self.device1)
self.conv1 = FireModule3D(in_channels=8, out_channels=8, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device1)
self.conv2 = FireModule3D(in_channels=8, out_channels=8, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device1)
self.conv3 = FireModule3D(in_channels=8, out_channels=8, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device1)
self.conv4 = FireModule3D(in_channels=8, out_channels=16, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.mp1 = nn.MaxPool3d(kernel_size=2, stride=2).to(self.device2)
self.conv5 = FireModule3D(in_channels=16, out_channels=16, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv6 = FireModule3D(in_channels=16, out_channels=16, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv7 = FireModule3D(in_channels=16, out_channels=16, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv8 = FireModule3D(in_channels=16, out_channels=32, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.mp2 = nn.MaxPool3d(kernel_size=2, stride=2).to(self.device2)
self.conv9 = FireModule3D(in_channels=32, out_channels=32, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv10 = FireModule3D(in_channels=32, out_channels=32, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv11 = FireModule3D(in_channels=32, out_channels=32, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv12 = FireModule3D(in_channels=32, out_channels=64, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.mp3 = nn.MaxPool3d(kernel_size=2, stride=2).to(self.device2)
self.conv13 = FireModule3D(in_channels=64, out_channels=64, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv14 = FireModule3D(in_channels=64, out_channels=64, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv15 = FireModule3D(in_channels=64, out_channels=64, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv16 = FireModule3D(in_channels=64, out_channels=128, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.mp4 = nn.MaxPool3d(kernel_size=2, stride=2).to(self.device2)
self.conv17 = FireModule3D(in_channels=128, out_channels=128, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv18 = FireModule3D(in_channels=128, out_channels=128, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv19 = FireModule3D(in_channels=128, out_channels=128, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.conv20 = FireModule3D(in_channels=128, out_channels=256, kernel_size=3,
dilation=1, bias=False,
squeeze_ratio=0.125, pct_3x3=0.5, activation=non_linearity,
use_bn=True, momentum=0.1, use_dp=False, use_bypass=True).to(self.device2)
self.mp5 = nn.MaxPool3d(kernel_size=2, stride=2).to(self.device2)
self.fc1 = nn.Sequential(
nn.Linear(256 * 7 * 3 * 5, 256, bias=False),
non_linearity(inplace=True),
nn.Dropout2d(p=0.5),
).to(self.device2)
self.fc3 = nn.Linear(256, nb_class, bias=False).to(self.device2)
def forward(self, x):
x = x.to(self.device1)
x = self.conv0(x)
x = self.conv3(self.conv2(self.conv1(x)))
x = x.to(self.device2)
x = self.mp1(self.conv4(x))
x = self.mp2(self.conv8(self.conv7(self.conv6(self.conv5(x)))))
x = self.mp3(self.conv12(self.conv11(self.conv10(self.conv9(x)))))
x = self.mp4(self.conv16(self.conv15(self.conv14(self.conv13(x)))))
x = self.mp5(self.conv20(self.conv19(self.conv18(self.conv17(x)))))
# flatten
x = torch.flatten(x, start_dim=1)
x = self.fc1(x)
x = F.softmax(self.fc3(x), dim=1) # .squeeze().contiguous()
return x
And my config/config.yml file is
DATASET:
- NAME: "XXXX"
ROOT: "/home/aaa/organized_data"# "/Users/aaa/fsdownload" #
IMAGE_FOLDER: "raw_scans"
NPY_FOLDER: "pre_result"
NPY_FOLDER2: "my_npy" #
TRAIN_CSV: "train.csv"
VAL_CSV: "val.csv"
SEED: 1
BATCH_SIZE_PER_CARD: 2
NUM_WORKERS: 4
MOMENTUM: 0.01
LEARNING_RATE: 1e-6
NUM_CLASSES: 2
WORLD_SIZE: 4
CUBE_SIZE: [1, 450, 220, 325] #(C, D, H, W)
EPOCHS: 100
MASTER_ADDR: "localhost"
MASTER_PORT: "12355"
VIS_PORT: 8097
L2: 5e-3
In my case:
Process0 using GPU1 and GPU2
Process1 using GPU3 and GPU4
Process2 using GPU5 and GPU6
Process3 using GPU7 and GPU8
My server has 10 GPUs; I didn’t use GPU 0 or GPU 9.
When I monitored the running program with nvidia-smi, I found that GPUs 2, 4, 6 and 8 often do not finish their work at the same time, and the GPUs that finish first wait for the straggler, so my overall GPU utilization is low.
I think there are a lot of things in my code that can be improved, so where should I start optimizing my code? Looking forward to any suggestions. |
st179891 | You don’t need to average the loss before calling loss.backward(). The gradients that are computed on each process are reduced across processes, and upon returning from loss.backward() each process has identical gradients for their model parameters.
Regarding the utilization, check out torchgpipe 203. It might be useful here. |
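For illustration, the training step can then be reduced to something like the following sketch (variable names follow the code in the question); the all_gather/all_reduce of the loss before backward() is not needed for correctness:
optimizer.zero_grad()
pred = ddp_mp_model(inp)
train_loss = loss_fn(pred, label)
train_loss.backward()      # DDP averages the gradients across processes here
optimizer.step()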
st179892 | I want to use model parallel and data parallel at the same time, and have read many docs and tutorials from official website.
One confusing problem I face is how to collect all the meter values from each process.
Question 1: In the official tutorial 17, they just record meter values in each process.
But in my code, when I print the loss value in each process, the values are different. So I think the values of the other meters are also different.
Is that tutorial wrong? In my opinion, the right way is to synchronize the loss, accuracy and other meters first, so that all processes hold the same values; after that I only need to print the meter information in one process.
Question 2: In the official tutorial 3, they say ‘the DistributedDataParallel module also handles the averaging of gradients across the world, so we do not have to explicitly average the gradients in the training step’.
But given question 1, does the API actually work as the tutorial says? Since each process has a different loss value, even though they start from the same initial weights, will the model weights in each process be optimized in different directions? |
st179893 | Hi @StuChen,
The losses are different because different processes see different inputs with different labels and therefore produce different losses. If you’re looking for a global top1 or top5, you can use distributed primitives from torch.distributed to average them.
The model in each process will be optimized in the same way, because after calling loss.backward() the resulting gradients are identical across processes. In combination with the initial weights being identical, the resulting weights after optimization are also identical. |
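A possible sketch of computing a global top-1 value with torch.distributed primitives (assuming acc1 is a tensor on the current device):
import torch.distributed as dist

acc1_global = acc1.clone()
dist.all_reduce(acc1_global, op=dist.ReduceOp.SUM)   # sum over all processes
acc1_global /= dist.get_world_size()                 # then divide to get the mean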
st179894 | Bug
index_add_ (and probably other similar indexing functions like index_copy_; P.S. not tested) gives wrong results when used inside a model that has been wrapped with DataParallel.
Even with a DataParallel-wrapped model, a forward function that uses index_add_ for some calculation should work the same as in the single-GPU case.
Refer to the log attached which illustrates the problem.
To Reproduce
Steps to reproduce the behaviour:
Run the below dummy code snippet.
Use 2 GPUs for running (export CUDA_VISIBLE_DEVICES=0,1).
import torch
idx = torch.arange(0,40, device=torch.device('cuda:0'), dtype=torch.long).reshape(4,10)
print("index:", idx.shape)
emb = torch.arange(10,130, dtype=torch.int, device=torch.device('cuda:0')).reshape(4,10,3)
print("t", emb.shape)
print("\n")
class Index_Add_Checker(torch.nn.Module):
def __init__(self, index, t):
super().__init__()
def forward(self, index, t):
index.view(-1)
pooled = torch.zeros(40, 3, dtype=torch.int).cuda()
print("index:", index.shape)
print("t:", t.shape)
pooled.index_add_(0, index.view(-1), t.view(-1,3))
return pooled
model_dp = Index_Add_Checker(idx, emb)
model_dp = torch.nn.DataParallel(model_dp).cuda()
ans_dp = model_dp(idx, emb)
print("ans_dp shape:", ans_dp.shape)
print("ans_dp:", ans_dp)
print("\n=====================================================================\n")
model_without_dp = Index_Add_Checker(idx, emb)
ans = model_without_dp(idx, emb)
print("ans shape:", ans.shape)
print("ans:", ans)
Expected behaviour
Basically, ans and ans_dp should be the same, but ans_dp (i.e. the answer in the DataParallel case) is not correct and is not what you would expect from index_add_.
This is probably happening because DataParallel splits index and t along dimension 0 (the batch dimension), and when they are used for index_add_ the indices no longer line up as expected, hence the problem.
Output Log:
index: torch.Size([4, 10])
t torch.Size([4, 10, 3])
index: torch.Size([2, 10])
t: torch.Size([2, 10, 3])
index: torch.Size([2, 10])
t: torch.Size([2, 10, 3])
ans_dp shape: torch.Size([80, 3])
ans_dp: tensor([[ 10, 11, 12],
[ 13, 14, 15],
[ 16, 17, 18],
[ 19, 20, 21],
[ 22, 23, 24],
[ 25, 26, 27],
[ 28, 29, 30],
[ 31, 32, 33],
[ 34, 35, 36],
[ 37, 38, 39],
[ 40, 41, 42],
[ 43, 44, 45],
[ 46, 47, 48],
[ 49, 50, 51],
[ 52, 53, 54],
[ 55, 56, 57],
[ 58, 59, 60],
[ 61, 62, 63],
[ 64, 65, 66],
[ 67, 68, 69],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 0, 0, 0],
[ 70, 71, 72],
[ 73, 74, 75],
[ 76, 77, 78],
[ 79, 80, 81],
[ 82, 83, 84],
[ 85, 86, 87],
[ 88, 89, 90],
[ 91, 92, 93],
[ 94, 95, 96],
[ 97, 98, 99],
[100, 101, 102],
[103, 104, 105],
[106, 107, 108],
[109, 110, 111],
[112, 113, 114],
[115, 116, 117],
[118, 119, 120],
[121, 122, 123],
[124, 125, 126],
[127, 128, 129]], device='cuda:0', dtype=torch.int32)
=====================================================================
index: torch.Size([4, 10])
t: torch.Size([4, 10, 3])
ans shape: torch.Size([40, 3])
ans: tensor([[ 10, 11, 12],
[ 13, 14, 15],
[ 16, 17, 18],
[ 19, 20, 21],
[ 22, 23, 24],
[ 25, 26, 27],
[ 28, 29, 30],
[ 31, 32, 33],
[ 34, 35, 36],
[ 37, 38, 39],
[ 40, 41, 42],
[ 43, 44, 45],
[ 46, 47, 48],
[ 49, 50, 51],
[ 52, 53, 54],
[ 55, 56, 57],
[ 58, 59, 60],
[ 61, 62, 63],
[ 64, 65, 66],
[ 67, 68, 69],
[ 70, 71, 72],
[ 73, 74, 75],
[ 76, 77, 78],
[ 79, 80, 81],
[ 82, 83, 84],
[ 85, 86, 87],
[ 88, 89, 90],
[ 91, 92, 93],
[ 94, 95, 96],
[ 97, 98, 99],
[100, 101, 102],
[103, 104, 105],
[106, 107, 108],
[109, 110, 111],
[112, 113, 114],
[115, 116, 117],
[118, 119, 120],
[121, 122, 123],
[124, 125, 126],
[127, 128, 129]], device='cuda:0', dtype=torch.int32)
Environment
PyTorch version: 1.1.0
Is debug build: No
CUDA used to build PyTorch: 9.0.176
OS: Ubuntu 16.04.3 LTS
GCC version: (Ubuntu 5.4.0-6ubuntu1~16.04.10) 5.4.0 20160609
CMake version: version 3.5.1
Python version: 3.6
Is CUDA available: Yes
CUDA runtime version: 7.5.17
GPU models and configuration:
GPU 0: GeForce GTX 1080 Ti
GPU 1: GeForce GTX 1080 Ti
GPU 2: GeForce GTX 1080 Ti
GPU 3: GeForce GTX 1080 Ti
GPU 4: GeForce GTX 1080 Ti
GPU 5: GeForce GTX 1080 Ti
GPU 6: GeForce GTX 1080 Ti
GPU 7: GeForce GTX 1080 Ti
Nvidia driver version: 418.39
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.6.0.21
/usr/lib/x86_64-linux-gnu/libcudnn.so.7.5.0
/usr/local/cuda-8.0/targets/x86_64-linux/lib/libcudnn.so.6
/usr/local/cuda-9.0/targets/x86_64-linux/lib/libcudnn.so.7
/usr/local/cuda-9.1/targets/x86_64-linux/lib/libcudnn.so.7.0.5
Versions of relevant libraries:
[pip3] numpy==1.14.0
[pip3] numpydoc==0.7.0
[pip3] torch==1.0.1.post2
[pip3] torchvision==0.2.2.post3
[conda] torch 1.1.0 pypi_0 pypi
[conda] torch-cluster 1.3.0 pypi_0 pypi
[conda] torch-geometric 1.2.0 pypi_0 pypi
[conda] torch-scatter 1.2.0 pypi_0 pypi
[conda] torch-sparse 0.4.0 pypi_0 pypi
[conda] torch-spline-conv 1.1.0 pypi_0 pypi
[conda] torchvision 0.2.2.post3 pypi_0 pypi
[conda] torchviz 0.0.1 pypi_0 pypi |
st179895 | This is a cross-post of https://github.com/pytorch/pytorch/issues/21810 15. If this is a proper issue please continue on GitHub. |
st179896 | Hi! I made a post here because I didn’t get a reply on GitHub issue tracker. 'm new to PyTorch so just wanted to be sure if my post is actually right. |
st179897 | Hello everyone,
Could you guys have a look on my problem.
I have two problems: data loading and DataParallel are not working.
To train densenet121 on 4 GPUs (Tesla V100) I use DataParallel. The code I use for these tests is from here 1 as suggested here. I just customised this code. I even tried with smaller batches as suggested in this tutorial.
Here are my results for parallel training:
with 4 GPUs, batch size 80, and 10 epochs: 7m 24s
with 1 GPU, batch size 20, and 10 epochs: 5m 26s
Concerning pinning the loaded data in host memory, there is also no change. In DataLoader I set pin_memory=True and in the .cuda() calls I use non_blocking=True.
Thank you in advance for your time.
Here is my code:
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Thu May 16 15:11:33 2019
"""
import os
import shutil
import time
import torch
import torch.nn as nn
import torch.nn.parallel
import torch.optim
import torch.utils.data
import torch.utils.data.distributed
import torchvision.transforms as transforms
import torchvision.datasets as datasets
import torchvision.models as models
def set_parameter_requires_grad(model, feature_extracting):
if feature_extracting:
for param in model.parameters():
param.requires_grad = False
def cuda_managment(device_count, text = str):
print('\n')
print(text)
for d in range(device_count):
#print('GPU {} allocated memory {}'.format(d, cuda.memory_allocated(d)/1e+9))
print('GPU {} cached memory {}'.format(d, torch.cuda.max_memory_cached(d)/1e+9))
print('\n')
def img_loader(train_or_test, transformers, batch_size, shuffle_data = True):
# create dataset
data = []
for t in transformers:
data.append(datasets.ImageFolder(os.path.join(data_dir, train_or_test), t))
data = torch.utils.data.ConcatDataset(data)
# create dataloader
loader = torch.utils.data.DataLoader(data, batch_size=batch_size, shuffle=shuffle_data,
num_workers=20,
pin_memory = True
)
return loader
def main_worker(num_classes, datadir, batch_size, train_loader, val_loader,
n_workers = 3, evaluate = False, epochs = 100, feature_extract = False):
since = time.time()
best_acc1 = 0.0
# create model
model = models.densenet121(pretrained=False)
set_parameter_requires_grad(model, feature_extract)
num_ftrs = model.classifier.in_features
model.classifier = nn.Linear(num_ftrs, num_classes)
#no parallel traing
# model = model.cuda(0)
# parallel training
model = nn.DataParallel(model).cuda()
# define loss function (criterion) and optimizer
criterion = nn.CrossEntropyLoss().cuda(0)
optimizer = torch.optim.SGD(model.parameters(),
lr = 0.001,
momentum=0.9,
)
if evaluate:
validate(val_loader, model, criterion)
return
for epoch in range(epochs):
cuda_managment(torch.cuda.device_count(), 'CUDA STATE')
adjust_learning_rate(optimizer, epoch)
# train for one epoch
train(train_loader, model, criterion, optimizer, epoch)
# evaluate on validation set
acc1 = validate(val_loader, model, criterion)
# remember best acc@1 and save checkpoint
is_best = acc1 > best_acc1
best_acc1 = max(acc1, best_acc1)
save_checkpoint({
'epoch': epoch + 1,
'state_dict': model.state_dict(),
'best_acc1': best_acc1,
'optimizer' : optimizer.state_dict(),
}, is_best)
time_elapsed = time.time() - since
print('training complete in {:.0f}m {:.0f}s'.format(time_elapsed // 60, time_elapsed % 60))
def train(train_loader, model, criterion, optimizer, epoch):
batch_time = AverageMeter('Time', ':6.3f')
data_time = AverageMeter('Data', ':6.3f')
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
progress = ProgressMeter(len(train_loader), batch_time, data_time, losses, top1,
top5, prefix="Epoch: [{}]".format(epoch))
# switch to train mode
model.train()
end = time.time()
for i, (input, target) in enumerate(train_loader):
# measure data loading time
data_time.update(time.time() - end)
input = input.cuda(0, non_blocking = True)
target = target.cuda(0, non_blocking=True)
# input = input.cuda(0)
# target = target.cuda(0)
# compute output
output = model(input)
loss = criterion(output, target)
# measure accuracy and record loss
acc1, acc5 = accuracy(output, target, topk=(1, 2))
losses.update(loss.item(), input.size(0))
top1.update(acc1[0], input.size(0))
top5.update(acc5[0], input.size(0))
# compute gradient and do SGD step
optimizer.zero_grad()
loss.backward()
optimizer.step()
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
progress.print(i)
def validate(val_loader, model, criterion):
batch_time = AverageMeter('Time', ':6.3f')
losses = AverageMeter('Loss', ':.4e')
top1 = AverageMeter('Acc@1', ':6.2f')
top5 = AverageMeter('Acc@5', ':6.2f')
progress = ProgressMeter(len(val_loader), batch_time, losses, top1, top5,
prefix='Test: ')
# switch to evaluate mode
model.eval()
with torch.no_grad():
end = time.time()
for i, (input, target) in enumerate(val_loader):
input = input.cuda(0, non_blocking=True)
target = target.cuda(0, non_blocking=True)
# input = input.cuda(0)
# target = target.cuda(0)
# compute output
output = model(input)
loss = criterion(output, target)
# measure accuracy and record loss
acc1, acc5 = accuracy(output, target, topk=(1, 2))
losses.update(loss.item(), input.size(0))
top1.update(acc1[0], input.size(0))
top5.update(acc5[0], input.size(0))
# measure elapsed time
batch_time.update(time.time() - end)
end = time.time()
progress.print(i)
# TODO: this should also be done with the ProgressMeter
print(' * Acc@1 {top1.avg:.3f} Acc@5 {top5.avg:.3f}'
.format(top1=top1, top5=top5))
return top1.avg
def save_checkpoint(state, is_best, filename='checkpoint.pth.tar'):
torch.save(state, filename)
if is_best:
shutil.copyfile(filename, 'model_best.pth.tar')
class AverageMeter(object):
"""Computes and stores the average and current value"""
def __init__(self, name, fmt=':f'):
self.name = name
self.fmt = fmt
self.reset()
def reset(self):
self.val = 0
self.avg = 0
self.sum = 0
self.count = 0
def update(self, val, n=1):
self.val = val
self.sum += val * n
self.count += n
self.avg = self.sum / self.count
def __str__(self):
fmtstr = '{name} {val' + self.fmt + '} ({avg' + self.fmt + '})'
return fmtstr.format(**self.__dict__)
class ProgressMeter(object):
def __init__(self, num_batches, *meters, prefix=""):
self.batch_fmtstr = self._get_batch_fmtstr(num_batches)
self.meters = meters
self.prefix = prefix
def print(self, batch):
entries = [self.prefix + self.batch_fmtstr.format(batch)]
entries += [str(meter) for meter in self.meters]
print('\t'.join(entries))
def _get_batch_fmtstr(self, num_batches):
num_digits = len(str(num_batches // 1))
fmt = '{:' + str(num_digits) + 'd}'
return '[' + fmt + '/' + fmt.format(num_batches) + ']'
def adjust_learning_rate(optimizer, epoch):
"""Sets the learning rate to the initial LR decayed by 10 every 30 epochs"""
lr = 0.001 * (0.1 ** (epoch // 30))
for param_group in optimizer.param_groups:
param_group['lr'] = lr
def accuracy(output, target, topk=(1,)):
"""Computes the accuracy over the k top predictions for the specified values of k"""
with torch.no_grad():
maxk = max(topk)
batch_size = target.size(0)
_, pred = output.topk(maxk, 1, True, True)
pred = pred.t()
correct = pred.eq(target.view(1, -1).expand_as(pred))
res = []
for k in topk:
correct_k = correct[:k].view(-1).float().sum(0, keepdim=True)
res.append(correct_k.mul_(100.0 / batch_size))
return res
data_dir = '/home/bgv/Desktop/cresus ia/data/train_test'
batch_size = 20
num_epochs = 50
n_classes = 4
h = 224
w = 224
input_size = h, w
# normalisation
normalisation = transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
# color
color_jitter_scb = transforms.ColorJitter(saturation = (0.2, 2), contrast= (0.2, 2), brightness = (0.3, 2))
color_jitter_s = transforms.ColorJitter(saturation = (0.2, 2))
color_jitter_c = transforms.ColorJitter(contrast= (0.2, 2))
color_jitter_b = transforms.ColorJitter(brightness = (0.3, 2))
color_jitter_random = transforms.RandomChoice([color_jitter_scb, color_jitter_s, color_jitter_c, color_jitter_b])
# rotation
rotation = transforms.RandomRotation(degrees = (-10,10), expand=False)
rotation_expand = transforms.RandomRotation(degrees = (-10,10), expand=True)
rotation_down = transforms.RandomRotation(degrees = (-179,-180), expand=False)
rotation_random = transforms.RandomChoice([rotation, rotation_expand])
original = transforms.Compose([transforms.Resize(input_size), transforms.ToTensor(), normalisation])
color_rotation = transforms.Compose([
rotation_random,
color_jitter_random,
transforms.Resize(input_size),
transforms.ToTensor(),
normalisation
])
# train
transforms_ = [original, color_rotation]
img_count = 0
load_train = img_loader('train', transforms_, batch_size)
load_test = img_loader('test', transforms_, batch_size, shuffle_data = False)
img_loaders = {'train': load_train, 'test': load_test}
main_worker(n_classes, data_dir, batch_size, load_train, load_test, n_workers = 20, evaluate = False, epochs = num_epochs) |
st179898 | The code looks generally alright.
Have you tried to reduce the number of workers? 20 seems to be quite high (of course it’s depending on your system setup).
Also, could you check if DistributedDataParallel 8 speeds up your training? |
st179899 | Thank you for you response.
I already tested with less number of workers, and to be sure I tested once again after your post. It takes more time with less workers (I set it to 10). My machine has 20 physical cores, it is NVIDIA DGX Station.
I tried:
torch.distributed.init_process_group(backend=“nccl”)
model = torch.nn.parallel.DistributedDataParallel(model)
but I got the following error:
ValueError: Error initializing torch.distributed using env:// rendezvous: environment variable RANK expected, but not set
If I understand correctly, DataParallel 3 does the same thing, or is that wrong? |
st179900 | They do the same, but with DataParallel you drive N devices from a single process, whereas with DistributedDataParallel you drive N devices from N processes. For the latter, those devices may be located in different machines, hence the distributed part. You’ll have to launch those N processes though, you cannot start a single process and have it work out of the box. You can start multiple processes manually (and set the RANK environment variable accordingly, per the error message you’re seeing), or use the torch.distributed.launch 6 utility to launch processes for you. |
st179901 | Hi all,
I encounter a strange problem:
Previously, I defined my collate_batch function to collect preprocessed data in numpy format and set pin_memory=True. In the training loop, I fetch a batch from the iterator and then move it from numpy to the GPU device. This way the CPU isn’t fully utilized, so even while the GPU is waiting for data the CPU isn’t running at full load.
[Screenshot: Screen Shot 2019-05-22 at 1.46.30 PM.png]
Then I changed the above process: I defined a collate_batch_torch function that collects the preprocessed data in numpy format and converts it to torch.Tensor, with pin_memory=True. In the training loop, I fetch a batch from the iterator and then move it from torch.Tensor to the device. This makes the CPU run at full load, but the step time is much slower, so it seems this modification makes the CPU the bottleneck.
[Screenshot: Screen Shot 2019-05-22 at 1.53.39 PM.png]
When I test it using one or two GPUs, the scaling is linear: for example, using one GPU the time per iteration is 1 s, and with two GPUs it is also 1 s. But when I use 8 GPUs, the time can be 8 s.
What’s wrong? |
st179902 | The bottom image shows a TON of system time= and high system load. At 8 processes you may be spawning a large multiple of data workers that end up overloading your machine. Make sure to tune this to the available cores in your system. |
st179903 | Several configuration I could think of:
Train and validate on all possible same GPUs (not able to set different batch_size for train/validate)
Train and validate on different GPUs (can set different batch_size)
Train on all GPUs and save the model per epoch, later run the model on validation data. (not able to use early stopping on validation loss)
What is the best practice?
Any other thoughts and suggestions will be appreciated. |
st179904 | All depends on your goals. If you want to maximize validation throughput, you’ll want to use as as many devices as you can. If you don’t care and want to keep your code simple, you can choose to use just one. |
st179905 | I am trying to do distributed training with PyTorch and encountered a problem.
This runtime error occurs during first backwards pass (initially error occurred
on model initialization).
File "/home/user/anaconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/home/user/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/home/user/anaconda3/lib/python3.7/site-packages/mpi4py/__main__.py", line 7, in <module>
main()
File "/home/user/anaconda3/lib/python3.7/site-packages/mpi4py/run.py", line 196, in main
run_command_line(args)
File "/home/user/anaconda3/lib/python3.7/site-packages/mpi4py/run.py", line 47, in run_command_line
run_path(sys.argv[0], run_name='__main__')
File "/home/user/anaconda3/lib/python3.7/runpy.py", line 263, in run_path
pkg_name=pkg_name, script_name=fname)
File "/home/user/anaconda3/lib/python3.7/runpy.py", line 96, in _run_module_code
mod_name, mod_spec, pkg_name, script_name)
File "/home/user/anaconda3/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "project/main.py", line 115, in <module>
trainer.run(config["epochs"])
File "/home/user/project/trainer/trainer.py", line 107, in run
self.run_epoch()
File "/home/user/project/trainer/trainer.py", line 70, in run_epoch
loss.backward()
File "/home/user/anaconda3/lib/python3.7/site-packages/torch/tensor.py", line 107, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/user/anaconda3/lib/python3.7/site-packages/torch/autograd/__init__.py", line 93, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: NCCL error in: /pytorch/torch/lib/c10d/ProcessGroupNCCL.cpp:272, unhandled system error
The error occurs every time.
I use MPI for automatic rank assignment and NCCL as main back-end.
Initialization is done through file on a shared file system.
Each process uses 2 GPUs, processes run on different nodes.
Environment variable NCCL_SOCKET_IFNAME is set.
Does anyone know why this error may occur? Thanks in advance. |
st179906 | The NCCL errors can be notoriously cryptic. Can you reproduce the issue as well when you run 2 processes per machine and 4 in total (so you use just a single GPU per process)? |
st179907 | No, in case of one process per each gpu NCCL error doesn’t reproduce. But another problem arises: all processes freeze during DistributedDataParallel initialization.
model = DistributedDataParallel(
model,
device_ids=[device],
output_device=device,
) |
st179908 | You can set the environment variable NCCL_DEBUG=INFO to make it output logs.
Also see:
https://pytorch.org/docs/stable/distributed.html#other-nccl-environment-variables 87
https://docs.nvidia.com/deeplearning/sdk/nccl-developer-guide/docs/env.html 31 |
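For example (assumption: these are set in the shell environment, or in Python before the process group is created; the interface name is a placeholder):
import os
os.environ["NCCL_DEBUG"] = "INFO"            # make NCCL print diagnostic logs
os.environ["NCCL_SOCKET_IFNAME"] = "eth0"    # placeholder network interface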
st179909 | According to this 6, ‘processes have separate memory’. But Pytorch can somehow share memory among several processes, according to this link 4: ‘Once the tensor/storage is moved to shared_memory (see share_memory_()), it will be possible to send it to other processes without making any copies.’ Why is it possible to share memory among separate memory? Doesn’t it sound like a paradox? |
st179910 | It uses shared memory. Multiple processes can map the same shared memory segment into their own private memory space. The same segment may have a different address in each process, but maps to the same underlying physical memory. Also see https://en.wikipedia.org/wiki/Shared_memory 5. |
st179911 | In my layer class, there is a value tensor, an index tensor, and a kernel tensor. In the forward function I use scatter_add to add the value to kernel according to the index. Then the kernel is used as the convolution kernel to perform convolution, my layer class looks like this:
class MyLayer(nn.Module):
def __init__(self, C_in, C_out):
super(MyLayer, self).__init__()
self.C_in = C_in
self.C_out = C_out
self.value = nn.Parameter(...)
self.register_buffer('inds', ...)
self.register_buffer('kernel', torch.zeros(self.C_in * self.C_out * 1 * 1))
def forward(self, x, p):
value = self.value * p
kernel = self.kernel.scatter_add(0, self.inds, value)
kernel = kernel.view(self.C_in, self.C_out, 1, 1)
out = F.conv2d(x, kernel, stride=1)
return out
However, when I wrap my network with nn.DataParallel and train on 2 GPUs, I observe double the forward time compared with a single GPU. Could someone tell me why my layer becomes even slower with multiple GPUs, and how to modify it to work well with nn.DataParallel? |
st179912 | Nothing in the code you pasted looks particularly slow. Perhaps the larger model you’re using contains many small layers/kernels? The nn.DataParallel wrapper replicates a module to N devices and runs forward on each of them. This overhead can dominate the runtime if your model is very small or has many very small kernels. |
st179913 | Thanks for your advice. Yes, there are many small kernels in my network, actually I found it extremely slow when the batch size is small. Looks like with small batch size the runtime of scatter_add becomes the bottleneck (which can not be accelerated by nn.DataParallel), and when the batch size increases the nn.DataParallel begins to speed up training. |
st179914 | Hey,
Is there any easy way to accumulate gradients in a DistributedDataParallel model?
From what I can see, the only way to do this would be to copy the gradients to a separate buffer before the next forward/backward pass?
Are there any plans to add functionality for this to PyTorch? DataParallel has too much overhead for me; otherwise I would use it.
st179915 | This was merged very recently in https://github.com/pytorch/pytorch/pull/21736 216. |
st179916 | Does anyone have any thoughts on the safest way to remove the DistributedDataParallel wrapper from a Module? Currently I’m just doing something like:
# Model in this case has already been wrapped in DDP
model = model.module
The docs for DDP mention hooks being registered on the module's parameters:
when wrapping up your model with DistributedDataParallel, the constructor of
DistributedDataParallel will register the additional gradient
reduction functions on all the parameters of the model itself at the
time of construction
I take it those hooks are still there if I just grab the module attribute from the DDP instance, right?
st179917 | Based on a recent issue 30 opened for PyTorch, it is in fact the case currently (v1.1.0) that the module will retain the reduction functions and new ones will be added each time the model is wrapped in DDP |
st179918 | This has been fixed and will be available in PyTorch 1.2 (and is already available in the nightly builds). |
st179919 | From docs 1,
Constructor, forward method, and differentiation of the output (or a function of the output of this module) is a distributed synchronization point. Take that into account in case different processes might be executing different code.
I’m trying to print loss from each worker, and I’m getting the following output:
| distributed init (rank 2): tcp://localhost:1947
| distributed init (rank 3): tcp://localhost:1947
| distributed init (rank 0): tcp://localhost:1947
| distributed init (rank 1): tcp://localhost:1947
| initialized host gnode03 as rank 3
| initialized host gnode03 as rank 1
| initialized host gnode03 as rank 2
| initialized host gnode03 as rank 0
rank 0 loss 920.7410278320312
rank 1 loss 1102.2825927734375
rank 3 loss 765.515869140625
rank 2 loss 642.1211547851562
rank 2 loss 950.1659545898438
rank 1 loss 863.4507446289062
rank 3 loss 1053.586669921875
rank 0 loss 551.5623168945312
rank 0 loss 679.0967407226562
rank 2 loss 970.89892578125
rank 1 loss 1246.443359375
rank 3 loss 1169.9415283203125
rank 0 loss 798.79833984375
Does this mean I have to explicitly aggregate and average the total loss by total batch size? Or is this handled internally? The segment which prints the above looks like this:
self.model.train()
self._optimizer.zero_grad()
sample = move_to(sample, self.device)
loss, logging_outputs = self.model(sample)
loss.backward()
clip_grad_norm_(self._model.parameters(), args.max_grad_norm)
self._optimizer.step()
return loss.item() |
st179920 | Ah! This is indeed what you meant in DistributedDataParallel loss compute and backpropogation? 11. Pasting my answer there here as well for posterity and the indexers.
Each process computes its own output, using its own input, with its own activations, and computes its own loss. Then on loss.backward() all processes reduce their gradients. As loss.backward() returns, the gradients of your model parameters will be the same, and the optimizer in each process will perform the exact same update to the model parameters.
Note that this is only the case if you use torch.nn.parallel.DistributedDataParallel 2. If you don’t, you’ll need to take care of gradient synchronization yourself. |
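If you additionally want a single, globally averaged loss value for logging (rather than the per-rank values printed above), you can reduce it explicitly yourself; a minimal sketch, assuming the default process group is already initialized:
import torch.distributed as dist

def global_average_loss(loss):
    # loss is the scalar tensor from the local forward pass on this rank.
    avg = loss.detach().clone()
    dist.all_reduce(avg, op=dist.ReduceOp.SUM)  # sum across all ranks
    return (avg / dist.get_world_size()).item()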
st179921 | Hello!
I want to write a distributed program and run it on a cluster with several multi-GPU nodes, which is managed using Slurm.
The program should have one master process, which sends (equivalent to MPI_Send / MPI_Recv) different data to the other processes and then collects the results (equivalent to MPI_Gather).
Could you please tell me if my task can be solved using torch.distributed? In the official docs (https://pytorch.org/docs/stable/distributed.html 9) I found only question marks for send/recv MPI operations for GPU.
I also tried Horovod but found no wrappers around send/recv functions. |
st179922 | Solved by pietern in post #2
st179923 | The question marks mean that it depends whether or not your MPI distribution is compiled with CUDA support or not. If it is, send/recv of GPU tensors works. If it doesn’t, you’ll have to copy GPU tensors to CPU before you can pass them to send/recv. (see torch.Tensor.cpu 4). |
st179924 | What is the difference between FusedAdam optimizer in Nvidia AMP package with the Adam optimizer in Pytorch? |
st179925 | The Adam optimizer in Pytorch (like all Pytorch optimizers) carries out optimizer.step() by looping over parameters, and launching a series of kernels for each parameter. This can require hundreds of small launches that are mostly bound by CPU-side Python looping and kernel launch overhead, resulting in poor device utilization. Currently, the FusedAdam implementation in Apex flattens the parameters for the optimization step, then carries out the optimization step itself via a fused kernel that combines all the Adam operations. In this way, the loop over parameters as well as the internal series of Adam operations for each parameter are fused such that optimizer.step() requires only a few kernel launches.
The current implementation (in Apex master) is brittle and only works with Amp opt_level O2. I’ve got a WIP branch to make it work for any opt_level (https://github.com/NVIDIA/apex/pull/351 73). I recommend waiting until this is merged then trying it. |
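For reference, a rough usage sketch (assuming an Apex install with its CUDA extensions, and that FusedAdam keeps the familiar Adam constructor arguments; the layer size and opt_level are just for illustration):
import torch
from apex import amp
from apex.optimizers import FusedAdam  # assumes apex is installed

model = torch.nn.Linear(1024, 1024).cuda()
optimizer = FusedAdam(model.parameters(), lr=1e-3)
# O2 is the opt_level the current FusedAdam is reported to work with.
model, optimizer = amp.initialize(model, optimizer, opt_level="O2")

loss = model(torch.randn(32, 1024, device="cuda")).sum()
with amp.scale_loss(loss, optimizer) as scaled_loss:
    scaled_loss.backward()
optimizer.step()  # a few fused launches instead of a per-parameter Python loop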
st179926 | Just wondering, wouldn’t it be possible to use pytorch multiprocessing to parallelise the Adam loop? Or CUDA streams? |
st179927 | @sbsky Either technique comes with its own overhead. If the time it takes to launch one of these kernels is >> the time it takes to execute it, you’ll have to optimize the launch itself. If you decide to do this with multiprocessing, you’ll need to move the references to those tensors between processes, which isn’t free. The alternative is to launch fewer kernels, which is what @mcarilli did in AMP. |
st179928 | I am facing a very starnge problem with torch.nn.DataParallel(). I have a system with 8 GPUs and I want to use multiple GPUs for training my model. Now when I wrap the model with nn.DataParallel, it works only for batch_size 10! This is very odd because for any batch size other than 10 ( even smaller), the execution just gets stuck. When I am not using parallelism and running on single GPU, it is working properly. But for batch_size more than 16, cuda is running out of memory because my input vectors are very large and model is very big. So I am unable to take advantage of multiple GPUs. Any soltuion out there? Thank you in advance… |
st179929 | Batch size 10 is odd indeed. I would have expected it to only work with a multiple of 8, if you’re using 8 devices. What kind of model are you trying to parallelize? |
st179930 | If the batches are asymmetric in size then it is possible that some devices can handle 2 examples where others can’t. Not much that can be done about this, save for memory profiling to prove that this is what’s happening. |
st179931 | Hypothetically, if I have 2 GPUs in node 0 and 3 GPUs in node 1, how would I configure it to support that? All the examples in the documentation as well as example codes perform word_size = gpus_per_node * args.world_size, which assumes from gpus_per_node that there is an equivalent amount of GPUs per node. |
st179932 | That expectation is built into the torch.distributed.launch utility but not elsewhere. You can start 5 processes (1 per GPU) and use world_size=5 where you have 2 processes on one machine and 3 processes on the other machine. It’s not very common to have this situation, so I’m not surprised most of the examples you see assume a symmetric contribution across machines. That said, you can still make it work, but will have to adapt those examples or start from scratch with torch.nn.parallel.DistributedDataParallel. |
st179933 | I didn’t learn basic knowledge of computer before, can’t understand communication, concurrent, multiprocessing, what do I need to learn for understanding DDP? |
st179934 | You can start with the tutorial. I suggest you study and research the concepts you don’t know thoroughly, if that’s what your goal is. The documentation for Python multiprocessing 3 is very thorough, for starters. |
st179935 | In the DDP tutorial, the author user 'multi-node ’ In one computer, I don’t understand why using it, I want to know when should I use multi-node? |
st179936 | Could you post a link to the tutorial?
A node usually refers to a host, so I'm not sure what multi-node on a single computer means.
st179937 | https://pytorch.org/tutorials/intermediate/ddp_tutorial.html 20
I guess the author just wants to show how to use multi-node training, so that we can generalize it to other environments?
st179938 | You can replace it with “multi-process” and it will still be valid. It’s common to use a single PyTorch process per GPU device in your system. Running 8 processes across 8 machines won’t be different from running 8 processes on a single machine (provided it has 8 GPUs), except for performance. |
st179939 | Hi,
I was trying to use streams to speed up calling multiple Conv2d modules on the same GPU.
My code is below.
It doesn't appear to run any quicker. There was a previous question asked last year about using streams, with a suggestion that all ops should be run on non-default streams. I tried to accomplish this, but my attempts don't seem to have helped.
Is there any obvious problem with my approach?
Thanks
import torch
import torch.nn as nn
class ParallelDilatedConv(nn.Module):
def __init__(self, num_dilations, num_streams):
super(ParallelDilatedConv, self).__init__()
self.m = num_dilations
self.streams = [torch.cuda.Stream() for i in range(num_streams)]
self.module = nn.ModuleList([nn.Conv2d(1, 1, (3, 3), dilation=2**i, padding=2**i) for i in range(num_dilations)])
    def forward(self, input):
        res = []
        for i in range(self.m):
            # Launch each convolution on one of the side streams.
            with torch.cuda.stream(self.streams[i % len(self.streams)]):
                res.append(self.module[i](input))
        # The current stream must wait for the side streams before torch.cat
        # reads their outputs, otherwise the concatenation can race with the
        # convolutions.
        for stream in self.streams:
            torch.cuda.current_stream().wait_stream(stream)
        return torch.cat(res)
class DilatedConv(nn.Module):
def __init__(self, num_dilations):
super(DilatedConv, self).__init__()
self.m = num_dilations
self.module = nn.ModuleList([nn.Conv2d(1, 1, (3, 3), dilation=2**i, padding=2**i) for i in range(self.m)])
def forward(self, input):
res = []
for i in range(self.m):
res.append(self.module[i](input))
return torch.cat(res)
def time_loop(mod, num_iter, outstr):
start = time.time()
for i in range(num_iter):
mod(im).cpu()
end = time.time()
print(outstr.format(end-start))
if __name__ == '__main__':
import time
num_iter = 10
num_conv = 6
device = 'cuda:0'
num_streams = 6
s = torch.cuda.Stream()
with torch.cuda.stream(s):
im = torch.rand(1000, 1, 200, 200).to(device, non_blocking=True)
mod = DilatedConv(num_conv).to(device, non_blocking=True).share_memory()
time_loop(mod, num_iter, 'Sequential took {}')
mod = ParallelDilatedConv(num_conv, num_streams).to(device, non_blocking=True).share_memory()
time_loop(mod, num_iter, 'Parallel took {}') |
st179940 | It’s likely that the convs you’re launching are big enough to occupy the entire GPU. When you launch a number of big kernels on different streams, they end up being executed sequentially. Smaller kernels, that use only a small slice of GPU resources, can be parallelized by using multiple streams. |
st179941 | I’ve been having a lot of problems with DataParallel. I’ve tried the simplest possible version of DataParallel I can think of, and it still errors out. Any help or advice would be greatly appreciated! This is running on a server with two P100 GPUs.
In [4]: mlp = nn.DataParallel(nn.Linear(100, 200))
In [5]: mlp(torch.zeros((32, 100)))
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
~/code/projects/mve_sac/core.py in <module>
----> 1 mlp(torch.zeros((32, 100)))
/opt/conda/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
491 result = self._slow_forward(*input, **kwargs)
492 else:
--> 493 result = self.forward(*input, **kwargs)
494 for hook in self._forward_hooks.values():
495 hook_result = hook(self, input, result)
/opt/conda/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
144 raise RuntimeError("module must have its parameters and buffers "
145 "on device {} (device_ids[0]) but found one of "
--> 146 "them on device: {}".format(self.src_device_obj, t.device))
147
148 inputs, kwargs = self.scatter(inputs, kwargs, self.device_ids)
RuntimeError: module must have its parameters and buffers on device cuda:0 (device_ids[0]) but found one of them on device: cpu |
st179942 | You need to move the parameters of nn.Linear to cuda. As the error message says, they are currently on cpu.
linear = nn.Linear(100, 200).cuda()
mlp = nn.DataParallel(linear)
mlp(torch.zeros((32, 100))) |
st179943 | Hi. I have a model who’s forward method performs a shallow copy of the tensors into a dictionary before returning like so -
def forward(self, input):
block0 = self.block0(input)
block1 = self.block1(block0)
self.end_points = {}
self.end_points['block0'] = (block0, 0)
self.end_points['block1'] = (block1, 1)
return block0
Where self.block0 and self.block1 are nn.Conv2d layers followed by batch norm and leaky ReLU.
Now if I do -
output = model(input)
loss = output.mean()
loss.backward()
print(model.block0.conv.bias.grad)  # block0 is an nn.Module which contains an attribute conv that is an nn.Conv2d
The grad value is None. There is a similar outcome if I just return the self.end_points dict.
On the other hand, with the following forward function -
def forward(self, input):
block0 = self.block0(input)
block1 = self.block1(block0)
self.end_points = {}
self.end_points['block0'] = (block0, 0)
return block0
        # self.end_points['block1'] = (block1, 1)   # block1 is not stored in this version
The grad attribute of model.block0 gets accumulated with the correct gradient.
I have this problem only when I wrap the module in nn.DataParallel. I'm using the following workaround since I have some custom functions.
class MyDataParallel(torch.nn.DataParallel):
"""
Allow nn.DataParallel to call model's attributes.
"""
def __getattr__(self, name):
try:
return super().__getattr__(name)
except AttributeError:
return getattr(self.module, name)
I am not able to understand why this is the case. Please help! Thank you in advance. |
st179944 | Could you post the definition of self.block0 and self.block1?
Based on your description, I would assume you’ve defined them as nn.Sequential modules, but then you would get an error calling model.block0.grad. |
st179945 | My apologies. I did not mention earlier that I encounter this problem only when I wrap my module inside nn.DataParallel. I have updated the description above.
As to your question, both of those blocks are nn.Modules which have an instance of nn.Conv2d, nn.LeakyReLU and nn.BatchNorm2d, which are called on the input in the forward method in that order.
st179946 | The problem is fixed by not setting end_points as a class variable and returning the entire dict. I suspect the problem is along the lines of https://github.com/pytorch/pytorch/issues/16532 5. Not sure though where the tensor is going out of scope and triggering a recursive deletion of the rest of the graph though. |
st179947 | Dealing with varying input size, I catch OOM exceptions during training (in my setting roughly 1 in few hundred minibatches). Due to domain specific reasons, I prefer not to crop/resize inputs to a constant size. Also, there is not a clear way to know in advance which input sizes will cause an OOM.
This is generally fine by me as long as I can recover from the OOM events and continue training.
If I detect an OOM event, I’m “cleaning up” using torch.cuda.empty_cache(), zero gradients, and then continue training as usual. This works great in a non-distributed setup, but creates problems in a distributed setting.
note - I am following the suggested way to deal with OOM as mentioned here:
Is it safe to recover from CUDA OOM?
I’m looking at trying to improve the robustness of our trainer code. If I select batch parameters that are not tight enough, I may run out of memory. Is it safe to catch the OOM, reduce the batch size, and try again?
Thanks
Jerry
To deal with OOM in a distributed setting, I do something like this:
if problems_occured:
success_tens = torch.ones(0)
else:
success_tens = torch.ones(1)
dist.all_reduce(success_tens, op=dist.reduce_op.SUM) ###error happens here
and then, only if success_tens reaches the value of world_size, I do an additional all_reduce over the gradients to sum them.
This is to make sure that all workers succeeded in calculating their own gradient before combining the gradient.
However, after I catch the OOM event in the worker that caught this OOM event I get the following error:
miniconda3/envs/py36torch/lib/python3.6/site-packages/torch/distributed/distributed_c10d.py", line 838, in all_reduce
work.wait()
RuntimeError: [enforce fail at /opt/conda/conda-bld/pytorch_1544174967633/work/third_party/gloo/gloo/allreduce.cc:29] opts.elements > 0
note: as can be seen in the error message - I’m currently using gloo as the distributed backend.
Any suggestions on how to solve this are very welcome
Note - while I'm currently using a simple synchronized distributed gradient calculation, I'm open to any suggestions, as long as they help survive occasional (relatively rare) OOM events.
st179948 | @ptrblck any idea? the tl;dr is that I can survive OOM events in a none distributed setting (just discarding this training minibatch, cleaning up memory and continuing), but using distributed setting I can’t.
This is important for me as I’m in medical imaging setting in which resolution is important, and cropping is too destructive.
Any suggestions and/or pointing me to relevant people if necessary is very welcome |
st179949 | Sorry, I’m not really familiar with distributed training and gloo, so I can’t give any useful input to fix this issue.
However, have you thought about other approaches to avoid the OOM issues?
torch.utils.checkpoint 9 might be worth a try (although I’m not sure how it behaves in a distributed setup) or NVIDIA’s apex - mixed precision training 6.
Let me know if this would be an option.
st179950 | Thanks! I’m already using both checkpointing and mixed precision, which helped to make the OOM events pretty rare, but they still exist here and there. Perhaps it’s reasonable to consider this “a bug” or feature request and just report it on the github channel. |
st179951 | Hi @yoelshoshan! The error message indicates that the tensor that you’re passing to allreduce is empty. Is it possible that the “success_tens” itself is somehow empty? Not sure this is possible, but since you’re already dealing with an OOM… |
st179952 | Hi! first of all thanks for trying to assist
success_tens is not empty.
I managed to build NCCL and when using it as the backend for the distributed functions, this issue does not happen, so I believe that this is gloo specific problem. |
st179953 | OK, I just realized what’s going on here. I misunderstood the code snippet you list in the original post. If you see an OOM, you create an empty tensor. This is why the error triggers. Instead of torch.ones(0), you’ll want to use torch.zeros(1). |
st179954 | Like you suggested, using torch.tensor(0.0) or torch.tensor(1.0) does not trigger that issue.
Thanks for the help! <3 |
st179955 | Hi I’m using DistributedDataParallel to run my model across multi-GPU with sync BN. However, my script uses relative imports and is supposed to be run with -m option. How can I do this when launching it via torch.distributed.launch?
Example (does not work, but I’d like to do this):
python -m torch.distributed.launch --nproc_per_node 2 -m detector.train --arg1 --arg2
Thanks |
st179956 | Solved by LeviViana in post #4
st179957 | Thanks for your reply but I think you misunderstood. The issue is not running torch.distributed.launch with -m option. The problem is that my script uses relative imports and it is supposed to be run with -m option. I reckon that when torch.distributed.launch spawns the script it uses the more natural approach python detector/script.py, whereas I’d like it to call like python -m detector.script |
st179958 | You can create a copy of this file 24 and customize it the way you want. Here 18 this module will spawn parallel processes according to their rank. You could arrange your script so that the cmd could look like cmd = python -m detector.script --local_rank --arg1 --arg2 .... |
st179959 | It is unfortunate that I have to make a copy and alter it but I guess it works! Thanks a lot =] |
st179960 | Does it make sense to use Ray instead of torch.multiprocessing, if used only on a single computer?
Has anyone used it for multiple clusters?
What are the advantages/disadvantages?
Anything to be cautious of?
Thank you |
st179961 | I am facing a very strange issue. I am working with cuda 9.0 version of PyTorch and yesterday it was working properly in my system ( ubuntu 14.04). Today when I tried to run my model, I noticed that it is using CPU. When I looked into it, I found that torch.cuda.is_available() is returning false even though the cuda driver is available. How is it possible? It was working properly till yesterday. How can it change syddenly? |
st179962 | Did you update and NVIDIA drivers etc.?
Could you try to restart your machine and check, if it’s working again?
I had similar issues after Ubuntu updated some drivers. |
st179963 | ptrblck:
if it’s working again?
I was working on a remote machine, so it will take some time to reboot and check. I will report back after checking. The NVIDIA drivers are up to date.
st179964 | We sometimes reuse existent models to build loss module like perceptual loss and GAN’s adversarial loss. I would like to know that if the existent models are accelerated with nn.DataParallel, it is inefficient to use nn.DataParallel one more time for the loss Module which use the existent models.
For example, model1 and model2 compose cycle loss module as follows. In that case, is the code, “cycle_loss = nn.DataParallel(CycleLoss(model1, model2)).cuda()”, inefficient?
import torch
import torch.nn as nn
import torch.nn.functional as F
class Model(nn.Module):
def __init__(self):
super().__init__()
self.conv = nn.Conv2d(1,1,3,1,1)
    def forward(self, input):
return F.relu(self.conv(input))
class CycleLoss(nn.Module):
def __init__(self, model1, model2):
super().__init__()
self.model1 = model1
self.model2 = model2
self.l1 = nn.L1Loss()
    def forward(self, input1, input2):
loss = self.l1(self.model2(self.model1(input1)), input1)
loss += self.l1(self.model1(self.model2(input2)), input2)
        return loss[None]  # expand dim=0 to concatenate
# make model
model1 = nn.DataParallel(Model()).cuda()
model2 = nn.DataParallel(Model()).cuda()
# loss module
cycle_loss = nn.DataParallel(CycleLoss(model1, model2)).cuda()
# opt
opt = torch.optim.Adam(list(model1.parameters()) + list(model2.parameters()))
# train
for input1, input2 in data_loader:
opt.zero_grad()
loss = cycle_loss(input1, input2)
loss = loss.mean()
loss.backward()
opt.step()
I am afraid that nesting nn.DataParallel makes the code perform scatter and gather at each sub-module needlessly.
Thank you. |
st179965 | Yes, this is not great. The outer nn.DataParallel module will replicate N times. The inner modules will also be replicated N times. I think you’ll end up with N^2 replicas instead of just N. You can add some logging to the forward functions to confirm what really ends up happening. |
st179966 | I have two 4x2080ti machines. I want to train my model by NCCL distributed backend. But the training is slow because these two machines are connected by a 1000M ethernet card.
So I want to use two infiniband cards to connect these two machines.
But my GPU is a GeForce not a Tesla. The question is, can infiniband accelerate the training if the GPU don’t support GPUDirect?
Thanks. |
st179967 | Solved by pietern in post #2
In theory, yes. As long as you get cards with a higher bandwidth than your Ethernet setup it should result in an improvement. But since NCCL is built for using GPUDirect, I’m not sure if it will work with NCCL out of the box. If it doesn’t, you could try and experiment with IPoIB and fall back to us… |