st100200 | Hi all,
I have a tuple like this:
(tensor([[ 5.0027e-03, 1.3885e-03, -6.4720e-03, ..., 2.1762e-03,
2.0357e-03, 1.0070e-03],
[ 9.5693e-04, 7.5463e-04, -7.4230e-04, ..., -1.4247e-03,
-1.5754e-03, 2.6448e-03],
[ 7.9327e-03, 3.3485e-03, -9.9604e-04, ..., -4.5044e-03,
8.2048e-03, 4.0572e-03],
...,
[-2.9638e-03, -2.4065e-03, 2.5752e-03, ..., -3.0519e-06,
-4.8047e-03, 8.5964e-03],
[-3.7381e-03, -2.3508e-03, 5.2545e-03, ..., -9.0683e-03,
-1.0091e-02, 5.3170e-03],
[ 3.2105e-03, -3.0882e-03, -1.7957e-03, ..., -1.8002e-03,
-1.5211e-03, 4.3075e-03]]),)
Which was generated by torch.autograd.grad(). I was wondering how I can calculate the l2-norm of such a tuple?
Thanks! |
st100201 | It looks like your tuple has only the first element set.
You could try torch.norm(x[0], 2).
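If the tuple had several elements (torch.autograd.grad() returns one gradient per input), a minimal sketch of the overall l2-norm, assuming grads is such a tuple:
import torch

def tuple_l2_norm(grads):
    # norm of each gradient tensor, then the norm of that vector of norms,
    # i.e. the square root of the sum of all squared entries in the tuple
    per_tensor = torch.stack([g.norm(2) for g in grads])
    return per_tensor.norm(2)
For the single-element tuple above this reduces to torch.norm(x[0], 2). |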
st100202 | Hello,
Is it correct to save the logits and probabilities of the best model like the following?
for epoch in range(args.start_epoch, args.epochs):
    if args.distributed:
        train_sampler.set_epoch(epoch)
    adjust_learning_rate(optimizer, epoch)
    # train for one epoch
    prec_train, loss_train = train(train_loader, model, criterion, optimizer, epoch)
    # evaluate on validation set
    prec1, loss_val, my_logits, own_proba = validate(val_loader, model, criterion)
    # remember best prec@1 and save checkpoint
    is_best = prec1 > best_prec1
    best_prec1 = max(prec1, best_prec1)
    save_checkpoint({
        'epoch': epoch + 1,
        'arch': args.arch,
        'state_dict': model.state_dict(),
        'best_prec1': best_prec1,
        'optimizer': optimizer.state_dict(),
        'logits': my_logits,
        'proba': own_proba,
    }, is_best)
Thank you |
st100203 | Hi,
I’m new to transfer learning and I got two questions about inceptionV3.
I'm following the pytorch transfer learning tutorial and I'm gonna do 'ConvNet as fixed feature extractor' (https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html). So I should freeze all the layers except the FINAL fc layer. But what about the fc layer in the auxiliary classifier? Am I supposed to unfreeze this fc layer?
According to my understanding, if we don't set the model to evaluation mode (model.eval()), there will be two outputs: one from the auxiliary fc layer and the other from the final fc layer. So we have to set the model to eval mode when testing the model. Is this correct?
Thanks a lot!! |
st100204 | You can disable the auxiliary output by setting aux_logits=False.
I would start the fine tuning by disabling it and just re-training the final classifier.
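A minimal sketch of that suggestion (assumed code, with num_classes as a placeholder for your own dataset's class count):
import torch.nn as nn
import torchvision.models as models

num_classes = 2  # hypothetical value
model = models.inception_v3(pretrained=True, aux_logits=False)
for param in model.parameters():
    param.requires_grad = False  # freeze everything
# the newly created classifier has requires_grad=True by default
model.fc = nn.Linear(model.fc.in_features, num_classes)
Only model.fc is then updated during training. |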
st100205 | Hi,
Thanks for your reply. But why wouldn’t you retrain the auxiliary classifier and the final classifier together? |
st100206 | The aux_logits are created a bit deeper in the model (line of code), so it would just make sense to use them if your fine tuning goes further down the model (below self.Mixed_6e(x)). Maybe I misunderstood your first post, but I thought you just wanted to re-create the last linear layer.
If you want to fine tune the whole model or just beyond the aux output, you could of course re-use the aux_logits. I'm not sure if that's the usual approach, but definitely worth a try!
In that case you would also have to re-create the final linear layer in InceptionAux. |
st100207 | I'm sorry, maybe I did not explain it clearly. What I wanted to do is to re-create both the last fc layer and the fc layer within the auxiliary classifier, then re-train just those two layers. Therefore, for each epoch during training, we'll have two outputs (one for the auxiliary and one for the final fc layer) and two losses, loss_1 and loss_2, then do backprop via (loss_1 + loss_2).backward().
Do you think it’ll work…? |
st100208 | The approach would work (summing the losses and backpropagating through the sum), but it's probably not necessary if you don't want to finetune below the auxiliary classifier.
Assuming that all layers are frozen (do not require gradients) except the last linear layer, the auxiliary loss would just vanish as it's not needed.
In the original paper the aux loss was used to "enhance" the gradients at this point, i.e. the loss from the model output would be used to calculate the gradients for all layers. Since the model is quite deep, the gradients tend to vanish in the first layers. Therefore an auxiliary classifier was used to create another gradient signal, summed with the "output signal", to have valid gradients up to the first layer.
In your case, if the layers are frozen, the aux loss won't do anything besides being calculated.
Correct me if I misunderstood your use case.
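For completeness, a minimal sketch of the summed-loss variant discussed above (an assumed setup, not the poster's exact code): keep aux_logits enabled, freeze everything, re-create both classifiers and backprop through the sum of the two losses.
import torch
import torch.nn as nn
import torchvision.models as models

model = models.inception_v3(pretrained=True, aux_logits=True)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)                      # hypothetical 2 classes
model.AuxLogits.fc = nn.Linear(model.AuxLogits.fc.in_features, 2)

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(
    [p for p in model.parameters() if p.requires_grad], lr=1e-3)

model.train()  # in train mode inception returns (output, aux_output)
inputs = torch.randn(2, 3, 299, 299)  # dummy batch
targets = torch.tensor([0, 1])
output, aux_output = model(inputs)
loss = criterion(output, targets) + criterion(aux_output, targets)
loss.backward()
optimizer.step()
As noted above, with everything below the aux branch frozen, the aux term adds compute but no extra gradient signal. |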
st100209 | Hi,
I am trying to use torch.mul() for a sparse tensor, but unfortunately the code below throws a run-time error:
a = torch.tensor([0.92], requires_grad=True)
i = torch.LongTensor([[0,0,0,0], [0,0,1,1], [0,0,2,2], [0,0,3,3]])
v = torch.FloatTensor([1, 2, 1, 1])
# A 4D sparse tensor
t = torch.sparse.FloatTensor(i.t(), v, torch.Size([2,2,5,5]))
# multiplication
y = torch.mul(a,t)
Error:
Traceback (most recent call last):
File "test3.py", line 59, in <module>
y = torch.mul(a,t)
RuntimeError: sparse tensors do not have strides
Is sparse matrix multiplication not supported at the moment?
Thanks |
st100210 | According to #8853, mul/mul_ for (Sparse, Sparse) -> Sparse and (Sparse, Scalar) -> Sparse exist, but (Sparse, Dense) is still on the todo list.
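A minimal sketch of the variants that are reported to work, reusing the tensors from the question; multiplying by a dense 1-element tensor is exactly the case that raises the "strides" error:
import torch

i = torch.LongTensor([[0, 0, 0, 0], [0, 0, 1, 1], [0, 0, 2, 2], [0, 0, 3, 3]])
v = torch.FloatTensor([1, 2, 1, 1])
t = torch.sparse.FloatTensor(i.t(), v, torch.Size([2, 2, 5, 5]))

y = t * 0.92          # (Sparse, Scalar) -> Sparse works
z = torch.mul(t, t)   # (Sparse, Sparse) -> Sparse works
Both keep the result sparse. |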
st100211 | I have a tensor like this: x = torch.Tensor([[1, 2, 3, 0], [4, 5, 0, 0], [6, 7, 0, 0]]). How can I get tensor y = softmax(x, dim=1), like this: y = torch.Tensor([[a, b, c, 0], [d, e, 0, 0], [f, g, 0, 0]])? I really appreciate it. |
st100212 | Solved by handesy in post #2
You may want to do a masked softmax, e.g., https://github.com/allenai/allennlp/blob/master/allennlp/nn/util.py#L216 |
st100213 | You may want to do a masked softmax, e.g., https://github.com/allenai/allennlp/blob/master/allennlp/nn/util.py#L216
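A minimal sketch of the same idea, assuming the zeros in x mark padding (recent PyTorch; masked positions get probability 0):
import torch
import torch.nn.functional as F

def masked_softmax(x, mask, dim=1):
    # padded logits become -inf, so exp() maps them to 0
    return F.softmax(x.masked_fill(~mask, float('-inf')), dim=dim)

x = torch.tensor([[1., 2., 3., 0.], [4., 5., 0., 0.], [6., 7., 0., 0.]])
mask = x != 0
y = masked_softmax(x, mask)  # zeros stay zero, rows sum to 1 over valid entries
Note that a fully masked row would produce nans, so keep at least one valid entry per row. |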
st100214 | I want to use cifar 10 dataset from torchvision.datasets to do classification.
import torch
import torchvision
import torchvision.transforms as transforms
from torch.utils.data.sampler import SubsetRandomSampler
transform = transforms.Compose(
    [transforms.ToTensor(),
     transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
trainset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                        download=True, transform=transform)
the details in trainset are:
type(trainset[1]) # tuple
type(trainset[1][0]) # torch.Tensor (image (3,32,32))
type(trainset[1][1]) # int (label 0~9)
and then I would like to split the train data into training and validation sets like 5-fold
how can I split the dataset by labels (not randomly)?
i.e. 50000 images, 10 labels, each label has 5000 images
cross validation (5-fold): 40000 training images, each label has 4000 images
10000 validation images, each label has 1000 images
Thanks in advance.
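A minimal sketch of one way to do this, assuming the trainset from above (CIFAR10 keeps its label list in .targets in recent torchvision, .train_labels in older versions), using the SubsetRandomSampler already imported:
import numpy as np
import torch
from torch.utils.data.sampler import SubsetRandomSampler

targets = np.array(trainset.targets)  # 50000 labels, 5000 per class
k = 0  # current fold, 0..4
train_idx, val_idx = [], []
for c in range(10):
    cls_idx = np.where(targets == c)[0]          # 5000 indices of class c
    val_part = cls_idx[k * 1000:(k + 1) * 1000]  # 1000 per class -> validation
    val_idx.extend(val_part.tolist())
    train_idx.extend(np.setdiff1d(cls_idx, val_part).tolist())

train_loader = torch.utils.data.DataLoader(
    trainset, batch_size=4, sampler=SubsetRandomSampler(train_idx))
val_loader = torch.utils.data.DataLoader(
    trainset, batch_size=4, sampler=SubsetRandomSampler(val_idx))
Looping k over 0..4 gives the five folds, each stratified by label. |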
st100215 | I made a custom dataset with a dataloader for images of 3 different categories. I used vgg16 to predict which category is in the image.
If I want to predict a single image, however, I get back something like this:
tensor([[-0.0559, -1.6212, -0.3467]], grad_fn=)
How would I know which of the categories corresponds with index 0 or index 1?
I've seen other problems where they do something like this:
net = torch.load('pytorch_Network2.h5')
idx_to_class = {
    0: 'airplane',
    1: 'automobile',
    2: 'bird',
}
output = net(img)
pred = torch.argmax(output, 1)
for p in pred:
    cls = idx_to_class[p.item()]
    print(cls)
But then, how would you know index 0 is a plane? Does that have to do with how you build your dataset? In that case, this is how I've built it:
class SophistyDataSet(Dataset):
    """Dataset wrapping images and target labels for 3 categories 'Modern', 'Vintage', 'Classic'
    Arguments:
        A CSV file path
        Path to image folder
        Extension of images
        PIL transforms
    """
    def __init__(self, directory_list, img_path):
        raw_images_list = self.prepare_dataset(directory_list)
        images_df = pd.DataFrame(raw_images_list, columns=['name', 'tag'])
        self.mlb = MultiLabelBinarizer()
        self.transform = transforms.Compose([transforms.Resize((IMG_SIZE, IMG_SIZE)), transforms.ToTensor()])
        self.img_path = img_path
        self.X_train = images_df['name']
        self.y_train = self.mlb.fit_transform(images_df['tag'].str.split()).astype(np.float32)

    def __getitem__(self, index):
        img = Image.open(
            self.img_path + self.determine_imagename(self.X_train[index]) + '/' + self.X_train[index]).convert('RGB')
        if self.transform is not None:
            img = self.transform(img)
        label = self.y_train[index]
        return img, label

    def __len__(self):
        return len(self.X_train.index) |
st100216 | rmnvcc:
self.y_train = self.mlb.fit_transform(images_df['tag'].str.split()).astype(np.float32)
You need to keep track of this mapping.
Best regards
Thomas
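A minimal sketch of how to recover it, assuming the full SophistyDataSet from the thread: MultiLabelBinarizer stores the classes it saw, sorted, in its .classes_ attribute, so column i of y_train corresponds to classes_[i].
dataset = SophistyDataSet(directory_list, img_path)  # your own arguments
idx_to_class = {i: c for i, c in enumerate(dataset.mlb.classes_)}
print(idx_to_class)  # e.g. {0: 'Classic', 1: 'Modern', 2: 'Vintage'}
The prediction index from argmax can then be looked up in idx_to_class. |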
st100217 | Hi,
I am trying to implement the power mean in pytorch following this paper.
It is pretty straightforward to do in numpy:
def gen_mean(vals, p):
    p = float(p)
    return np.power(
        np.mean(
            np.power(
                np.array(vals, dtype=complex),
                p),
            axis=0),
        1 / p
    )
Or in tensorflow:
def p_mean(values, p, n_toks):
    n_toks = tf.cast(tf.maximum(tf.constant(1.0), n_toks), tf.complex64)
    p_tf = tf.constant(float(p), dtype=tf.complex64)
    values = tf.cast(values, dtype=tf.complex64)
    res = tf.pow(
        tf.reduce_sum(
            tf.pow(values, p_tf),
            axis=1,
            keepdims=False
        ) / n_toks,
        1.0 / p_tf
    )
    return tf.real(res)
Source
However, since pytorch does not allow complex numbers, this seems really non-trivial.
An example of a limitation is the geometric mean for negative numbers, which does not seem possible in pytorch.
Am I missing something? |
st100218 | Solved by tom in post #6
Well, -(378**(1/3)) is a root, too, so I'd start with that.
If not, you can precompute -1**(1/3) to (0.5+0.866j) and "outer" multiply 378**(1/3) (or whatever outcome you have) by a two-element tensor: tensor([0.5, 0.866]) if it is negative and tensor([1.0, 0]) if the mean is positive.
Best regards
Thomas |
st100219 | You could mock a complex number using two floats (one for magnitude and one for phase); this way the pow operation becomes easy, but the mean operation is not that simple. Splitting the number into its real and imaginary parts would make the mean easy, but the pow would become hard.
What I would recommend is to use both representations (the most suitable for each operation) and transform them into each other. This would maybe not be that efficient, but it should work.
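A minimal sketch of that two-representation idea, storing a complex number as a [..., 2] tensor of (real, imag), converting to polar for pow and back to rectangular for the mean:
import torch

def complex_pow(z, p):
    # to polar: magnitude and phase
    r = torch.sqrt(z[..., 0] ** 2 + z[..., 1] ** 2)
    phi = torch.atan2(z[..., 1], z[..., 0])
    # pow in polar form, then back to rectangular
    r, phi = r.pow(p), phi * p
    return torch.stack((r * torch.cos(phi), r * torch.sin(phi)), dim=-1)

def p_mean_complex(values, p):
    # lift real inputs to complex with imaginary part 0
    z = torch.stack((values, torch.zeros_like(values)), dim=-1)
    mean = complex_pow(z, p).mean(0)  # the mean is easy in rectangular form
    return complex_pow(mean, 1.0 / p)

p_mean_complex(torch.tensor([-9., -3.]), 3)  # -> [3.62, 6.26], i.e. 3.62+6.26j
This matches numpy's principal root np.power(-378+0j, 1/3). |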
st100220 | Given that you only have two phases (0 and pi) in your input, if p is fixed you can easily precompute that and just multiply the sign with the result. Then you multiply with the p-norm divided by the vector size raised to the power 1/p (or absorb that factor into the phase).
Best regards
Thomas
Edited P.S.: Looking at the paper authors' implementation, they use min, max (the limits as p approaches -/+inf), mean and the 3rd-power mean, so it seems simple enough and no complex numbers are involved (if you take the power of 1/3 to be the inverse of the power of 3). |
st100221 | Thanks @tom, I am not sure I follow you. It is true that the paper only suggests using max/min and odd power means from 1 to 10.
However, this does not solve the issue of having negative numbers, as far as I understand.
i.e. taking the 3rd power mean of -9 and -3 (gen_mean([-9, -3], 3)) tries to compute np.power(-378, 1/3), whose solution is complex.
Would you mind elaborating?
Best, |
st100222 | Well, -(378**(1/3)) is a root, too, so I'd start with that.
If not, you can precompute -1**(1/3) to (0.5+0.866j) and "outer" multiply 378**(1/3) (or whatever outcome you have) by a two-element tensor: tensor([0.5, 0.866]) if it is negative and tensor([1.0, 0]) if the mean is positive.
Best regards
Thomas |
st100223 | Thanks @tom, I think you're right; this solution will just do it.
Here is what it looks like, by the way:
def p_mean_pytorch(tensor, power):
    mean_tensor = tensor.pow(power).mean(0)
    return (mean_tensor * mean_tensor.sign()).pow(1 / power) * mean_tensor.sign()
i.e.
p_mean_pytorch(torch.tensor([[-100,33,99],[39,9,-10000],[1,3,4],[0,0,0]]).float(), 3)
tensor([ -61.7249, 20.9335, -6299.6050])
I would have liked to get the same root resolution as their implementation of the paper, but I guess this will do. I will check the precomputing part later; the only challenging part is telling torch.ger to be conditional, but I think I can do something with a mask.
def power_mean_precompute_3(tensor):
    magical_number = torch.tensor([0.5, 0.866])  # np.power(-1+0j, 1/3)
    mean_tensor = tensor.pow(3).mean(0)
    mask = torch.le(mean_tensor, 0.)
    le_result = torch.ger((mean_tensor * mask.float() * mean_tensor.sign()).pow(1/3), magical_number)
    ge_result = torch.ger((mean_tensor * (~mask).float()).pow(1/3), torch.tensor([1., 0]))
    return le_result + ge_result
power_mean_precompute_3(torch.tensor([[-100,33,99],[39,9,-10000],[1,3,4],[0,0,0]]).float())
tensor([[ 30.8625, 53.4538],
[ 20.9335, 0.0000],
[3149.8025, 5455.4580]])
gen_mean(torch.tensor([[-100,33,99],[39,9,-10000],[1,3,4],[0,0,0]]).numpy(),3)
array([ 30.86246739 +53.45536157j, 20.93346287 +0. j,
3149.80160592+5455.61641521j])
It seems to be doing what was intended, but it could be optimized a bit.
Also, this won't work for a batch, as torch.ger won't allow it, but adapting it shouldn't be too hard. |
st100224 | I usually recommend broadcasting for outer products. Here you can combine with advanced indexing (.long() makes it an index instead of a mask):
# seq x batch
a = torch.tensor([[-100.,33,99],[39,9,-10000],[1,3,4],[0,0,0]])
magical_number=torch.tensor([[0.5,(0.75)**(0.5)],[1,0]]) # np.power(-1+0j,1/3), 1 ; keep out of function if you want precomputed..
mean_tensor = a.pow(3).mean(0)
magical_number[(mean_tensor>0).long()]*mean_tensor.abs().pow(1/3)[:,None]
It still is much slower than numpy, but it might not be the overall bottleneck.
Best regards
Thomas
P.S.: Pro tip: Don’t do torch.tensor(...).float() or .cuda() or so, but always use torch.tensor(..., dtype=torch.float, device=...). It’s more efficient and once you add requires_grad, the latter gives you a leaf variable while the former does not. |
st100225 | Hi,
I noticed "Empty bags (i.e., having 0-length) will have returned vectors filled by zeros." in the pytorch documentation.
In my case offset1 is [0,1,2,2,2], which represents five bags; the third and fourth bag should be empty, but I got nan instead of zeros. Why does this happen?
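A minimal sketch reproducing the setup described above, assuming a length-3 input so that offset1 = [0, 1, 2, 2, 2] makes the third and fourth bags empty:
import torch
import torch.nn as nn

emb = nn.EmbeddingBag(10, 3, mode='mean')
inp = torch.tensor([1, 2, 5])
offsets = torch.tensor([0, 1, 2, 2, 2])
print(emb(inp, offsets))  # rows 2 and 3 come back as nan, not zeros
With mode='mean' an empty bag is effectively a 0/0 average, which would explain the nan; mode='sum' should return zeros as documented. |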
st100226 | Hey,
I have a network which overrides the parameters() function to only include trainable parameters. This has worked well until I tried to run it with DataParallel. I guess I was not supposed to override it because DataParallel does not work with my model. Here’s an example:
# Python 3.6
# Pytorch 0.4.1 installed via anaconda
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.module_list = nn.ModuleList()
        self.module_list.append(nn.Linear(10, 10))
        self.module_list.append(nn.Linear(10, 10))

    def parameters(self, only_trainable=True):
        for param in self.module_list.parameters():
            if only_trainable and not param.requires_grad:
                continue
            yield param

net = nn.DataParallel(Net().cuda())
net(torch.rand(1, 10))
This throws a NotImplementedError. If I set requires_grad=False on a module I instead get a KeyError in torch/nn/parallel/replicate.py
The solution is easy: just rename the function to something like trainable_parameters().
However, I'm a bit curious: should parameters() never be overridden? It worked perfectly fine when running on a single GPU, but I guess the function is used internally in some other parts of Pytorch? Or did I just not use yield properly?
Thanks in advance |
st100227 | I'm not completely sure, but I think your parameters method, filtering only parameters that require gradients, will break replicate, as your current method won't yield all parameters. |
st100228 | I thought so, as I kept getting a KeyError in replicate.py when I had frozen a layer. What I don't get is why it is throwing a NotImplementedError in module.py when I return all parameters (i.e. all parameters require gradients). That's why I thought I was yielding parameters incorrectly somehow. |
st100229 | With this class
# Python 3.6
# Pytorch 0.4.1 installed via anaconda
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.module_list = nn.ModuleList()
        self.module_list.append(nn.Linear(10, 10))
        self.module_list.append(nn.Linear(10, 10))

    def parameters(self, only_trainable=True):
        for param in self.module_list.parameters():
            if only_trainable and not param.requires_grad:
                continue
            yield param
Doing this
net = nn.DataParallel(Net().cuda())
net(torch.rand(1, 10))
Throws this error
---------------------------------------------------------------------------
NotImplementedError Traceback (most recent call last)
<ipython-input-1-7bea0d51ab29> in <module>()
18
19 net = nn.DataParallel(Net().cuda())
---> 20 net(torch.rand(1, 10))
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
121 return self.module(*inputs[0], **kwargs[0])
122 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
--> 123 outputs = self.parallel_apply(replicas, inputs, kwargs)
124 return self.gather(outputs, self.output_device)
125
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in parallel_apply(self, replicas, inputs, kwargs)
131
132 def parallel_apply(self, replicas, inputs, kwargs):
--> 133 return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)])
134
135 def gather(self, outputs, output_device):
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py in parallel_apply(modules, inputs, kwargs_tup, devices)
75 output = results[i]
76 if isinstance(output, Exception):
---> 77 raise output
78 outputs.append(output)
79 return outputs
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/parallel_apply.py in _worker(i, module, input, kwargs, device)
51 if not isinstance(input, (list, tuple)):
52 input = (input,)
---> 53 output = module(*input, **kwargs)
54 with lock:
55 results[i] = output
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in forward(self, *input)
81 registered hooks while the latter silently ignores them.
82 """
---> 83 raise NotImplementedError
84
85 def register_buffer(self, name, tensor):
NotImplementedError:
And doing this
net = Net().cuda()
# Freeze the first layer's parameters, i.e. only the second layer is trainable
for param in net.module_list[0].parameters():
    param.requires_grad = False
net = nn.DataParallel(net)
net(torch.rand(1, 10))
Throws this error
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-1-8d9b7061f99f> in <module>()
21 param.requires_grad = False
22 net = nn.DataParallel(net)
---> 23 net(torch.rand(1, 10))
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
120 if len(self.device_ids) == 1:
121 return self.module(*inputs[0], **kwargs[0])
--> 122 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
123 outputs = self.parallel_apply(replicas, inputs, kwargs)
124 return self.gather(outputs, self.output_device)
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in replicate(self, module, device_ids)
125
126 def replicate(self, module, device_ids):
--> 127 return replicate(module, device_ids)
128
129 def scatter(self, inputs, kwargs, device_ids):
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py in replicate(network, devices, detach)
50 replica._parameters[key] = None
51 else:
---> 52 param_idx = param_indices[param]
53 for j in range(num_replicas):
54 replica = module_copies[j][i]
KeyError: Parameter containing:
tensor([[-0.3002, 0.2907, 0.1129, 0.2012, 0.3133, -0.1077, 0.0199, -0.0915,
-0.1875, -0.0787],
[-0.1535, -0.0093, -0.1195, -0.2870, 0.2770, 0.2447, 0.1371, 0.2554,
-0.2400, 0.0050],
[ 0.1053, -0.0462, -0.2816, -0.2469, -0.2198, 0.1078, 0.1210, -0.2257,
0.2912, 0.0348],
[-0.2850, -0.2684, 0.1115, 0.1451, 0.3048, -0.1432, -0.0334, -0.0985,
0.0428, -0.1384],
[-0.2661, 0.3154, 0.0290, -0.0202, -0.2558, -0.2669, -0.1606, -0.1784,
0.0666, 0.1534],
[ 0.1977, 0.0073, -0.0256, 0.1687, 0.2736, -0.2341, -0.0254, -0.1233,
-0.1083, 0.1307],
[-0.3091, -0.1185, 0.2292, -0.2904, 0.1551, -0.1073, 0.0901, 0.0815,
0.0563, -0.1869],
[ 0.1131, 0.1455, -0.1215, -0.2023, -0.1883, -0.1709, -0.0097, 0.2165,
-0.1549, 0.0916],
[-0.0114, -0.2245, 0.1819, -0.2465, 0.1708, 0.0840, -0.3031, -0.0886,
0.2049, 0.1661],
[-0.0540, -0.1216, -0.1092, 0.1388, 0.2321, -0.1198, -0.1509, 0.2244,
0.0655, 0.2590]], device='cuda:0')
The parameter that produces a key error is net.module.module_list[0].weight. I’m guessing the parameters() function of DataParallel is called when replicating, while the overloaded parameters() function is called when creating a param index dict or something? |
st100230 | Your Net class doesn’t have the forward method implemented.
Could you add the method and try it again?
Was this code running on a single GPU? |
st100231 | Oh yeah, you're right, I forgot to add the forward call in the conceptual class! Disregard the first error then.
The second error is what was originally my problem. I have tried running it on two GTX 1080ti as well as two different nvidia GPUs.
The code was originally running on a single GPU without using DataParallel. Using DataParallel with a single GPU has no effect, the code runs fine. |
st100232 | # Python 3.6
# Pytorch 0.4.1 installed via anaconda
import torch
from torch import nn

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.module_list = nn.ModuleList()
        self.module_list.append(nn.Linear(2, 2))
        self.module_list.append(nn.Linear(2, 2))

    def parameters(self, only_trainable=True):
        for param in self.module_list.parameters():
            if only_trainable and not param.requires_grad:
                continue
            yield param

    def forward(self, x):
        return x

net = Net().cuda()
for p in net.module_list[0].parameters():
    p.requires_grad = False
net = nn.DataParallel(net, [0, 1])
net(torch.rand(10, 2))
Produces the same error
---------------------------------------------------------------------------
KeyError Traceback (most recent call last)
<ipython-input-3-dc513201071a> in <module>()
24 p.requires_grad = False
25 net = nn.DataParallel(net, [0, 1])
---> 26 net(torch.rand(10, 2))
~/anaconda3/lib/python3.6/site-packages/torch/nn/modules/module.py in __call__(self, *input, **kwargs)
475 result = self._slow_forward(*input, **kwargs)
476 else:
--> 477 result = self.forward(*input, **kwargs)
478 for hook in self._forward_hooks.values():
479 hook_result = hook(self, input, result)
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in forward(self, *inputs, **kwargs)
120 if len(self.device_ids) == 1:
121 return self.module(*inputs[0], **kwargs[0])
--> 122 replicas = self.replicate(self.module, self.device_ids[:len(inputs)])
123 outputs = self.parallel_apply(replicas, inputs, kwargs)
124 return self.gather(outputs, self.output_device)
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py in replicate(self, module, device_ids)
125
126 def replicate(self, module, device_ids):
--> 127 return replicate(module, device_ids)
128
129 def scatter(self, inputs, kwargs, device_ids):
~/anaconda3/lib/python3.6/site-packages/torch/nn/parallel/replicate.py in replicate(network, devices, detach)
50 replica._parameters[key] = None
51 else:
---> 52 param_idx = param_indices[param]
53 for j in range(num_replicas):
54 replica = module_copies[j][i]
KeyError: Parameter containing:
tensor([[-0.4050, 0.0905],
[-0.1446, -0.5699]], device='cuda:0') |
st100233 | I still think the error is related to the fact that you are not replicating all parameters, so these are missing in the replicas.
If you only specify one GPU for DataParallel, the module will just be called without replication (line of code).
Maybe I'm not understanding your use case, but currently only the parameters requiring gradients will be replicated, which would create incomplete models. |
st100234 | There isn't really any issue; I solved it by setting only_trainable=False in my parameters function so it behaves exactly like the normal nn.Module.parameters() function. I was initially curious as to why it didn't work before, so I created some example code and forgot to implement forward, which made me think there was some other issue (until you pointed it out). So I got answers to all my questions, thanks for the help! |
st100235 | Hi, I have installed torchvision using 'pip install --no-deps torchvision' (pip3 install torchvision causes an error), but pycharm cannot import it. pycharm can import torch correctly.
I have already installed anaconda. Both torch and torchvision cannot be imported in IDLE.
The paths are: ‘Requirement already satisfied: torchvision in c:\users*****\appdata\roaming\python\python37\site-packages (0.2.1)’
‘C:\Users\Dongliang\Anaconda3\Scripts’
‘C:\Program Files (x86)\Python37-32\Scripts;C:\Program Files (x86)\Python37-32;’
The python in my computer is 3.7, and I installed anaconda with 3.6; will this be a problem? Isn't anaconda with 3.6 the latest version? |
st100236 | Solved by robo in post #3
Thanks, the problem has been solved. I installed python3.7 before anaconda, so the problem is that python3.7 cannot use torch and torchvision.
Because of python3.7, when I install torchvision using 'pip install --no-deps torchvision', torchvision seems not to install correctly with anacond… |
st100237 | Since there are so many Python distributions installed, I wonder whether these environments are messed up. Do pip and python come from the same Python distribution? You can check this with the following commands.
where python.exe
where pip.exe
As for IDLE, make sure you set it to the Python distribution that has PyTorch in it. |
st100238 | Thanks, the problem has been solved. I installed python3.7 before anaconda, so the problem is that python3.7 cannot use torch and torchvision.
Because of python3.7, when I installed torchvision using 'pip install --no-deps torchvision', torchvision did not install correctly with anaconda. I think that is the reason I could not use torchvision. |
st100239 | Can someone please point me to a step-by-step guide for building Pytorch from source on Windows and in the Anaconda environment? All the guides I'm finding are either out of date or not clear enough for me to understand. Thanks.
Edit: I should mention that I have already successfully installed Pytorch via the conda route, but I need to have the most up to date (unreleased) version of Pytorch so I wish to build from source. |
st100240 | I followed the steps in the guide, but my build ultimately failed with the following errors:
c:\users\wagne134\anaconda3\include\pyconfig.h(59): fatal error C1083: Cannot open include file: 'io.h': No such file or directory
error: command 'C:\Program Files (x86)\Microsoft Visual Studio\2017\Enterprise\VC\Tools\MSVC\14.11.25503\bin\HostX64\x64\cl.exe' failed with exit status 2
st100241 | wagne134:
Cannot open include file: 'io.h': No such file or directory
Could you give me the results of the following commands?
echo %INCLUDE%
echo %LIB%
I’m afraid that you didn’t install the MSVC 14.11 toolset. |
st100242 | I was trying to build a language model but got the error THIndexTensor_(size)(target, 0) == batch_size. Here is the code:
import numpy as np
import torch
from torch.autograd import Variable
import torch.nn as nn

data = '...'
words = list(set(data))
word2ind = {word: i for i, word in enumerate(words)}
ind2word = {i: word for i, word in enumerate(words)}

class RNN(nn.Module):
    def __init__(self, input_size, hidden_size, output_size):
        super(RNN, self).__init__()
        self.hidden_size = hidden_size
        self.in2h = nn.Linear(input_size - 1 + hidden_size, hidden_size)
        self.in2o = nn.Linear(input_size - 1 + hidden_size, output_size)
        self.o2o = nn.Linear(hidden_size + output_size, output_size)
        self.softmax = nn.LogSoftmax()

    def forward(self, inputs, hidden):
        input_combined = torch.cat((inputs.float(), hidden.float()), 1)
        print(type(input_combined.data))
        hidden = self.in2h(input_combined)
        output = self.in2o(input_combined)
        output_combined = torch.cat((hidden, output), 1)
        output = self.o2o(output_combined)
        output = self.softmax(output)
        print(output)
        return output, hidden

    def init_hidden(self):
        return Variable(torch.from_numpy(np.zeros((1, self.hidden_size))).type(torch.LongTensor))

def form_onehot(sent):
    one_hot = np.zeros((len(data), len(words)), dtype=np.int64)
    for i, word in enumerate(sent):
        one_hot[i, word2ind[word]] = 1
    return torch.LongTensor(one_hot)

def random_choice(vec):
    return np.random.choice(range(len(words)), p=vec)

def train(rnn, learning_rate, optimizer, criterion, input_tensor, target_tensor):
    hidden = rnn.init_hidden()
    optimizer.zero_grad()
    for i in range(input_tensor.size(1)):
        output, hidden = rnn(input_tensor[i, :].unsqueeze(0), hidden)
        loss = criterion(output, target_tensor[i])
        loss.backward()
        optimizer.step()
    return output, loss.data[0] / input_tensor.size()[0]

onehot_data = form_onehot(data)
rnn = RNN(len(words), 10, len(words))
learning_rate = 0.1
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(rnn.parameters(), lr=learning_rate)
input_tensor = Variable(onehot_data[:, :-1].type(torch.FloatTensor))
print(type(input_tensor.data))
target_tensor = Variable(onehot_data[:, 1:])
int_target_tensor = Variable(onehot_data[1:, :].type(torch.LongTensor))
output, loss = train(rnn, learning_rate, optimizer, criterion, input_tensor, int_target_tensor)
And here are the error details:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-449-8abc91b616c7> in <module>()
----> 1 output, loss = train(rnn, learning_rate, optimizer, criterion, input_tensor, int_target_tensor)
<ipython-input-445-72363097fc21> in train(rnn, learning_rate, optimizer, criterion, input_tensor, target_tensor)
52 output, hidden = rnn(input_tensor[i, :].unsqueeze(0), hidden)
53 print(output.size(), target_tensor[i].size())
---> 54 loss = criterion(output, target_tensor[i])
55 print('aaaaaaaaaaa')
56 loss.backward()
D:\Anaconda3\lib\site-packages\torch\nn\modules\module.py in __call__(self, *input, **kwargs)
204
205 def __call__(self, *input, **kwargs):
--> 206 result = self.forward(*input, **kwargs)
207 for hook in self._forward_hooks.values():
208 hook_result = hook(self, input, result)
D:\Anaconda3\lib\site-packages\torch\nn\modules\loss.py in forward(self, input, target)
319 _assert_no_grad(target)
320 return F.cross_entropy(input, target,
--> 321 self.weight, self.size_average)
322
323
D:\Anaconda3\lib\site-packages\torch\nn\functional.py in cross_entropy(input, target, weight, size_average)
535 for each minibatch.
536 """
--> 537 return nll_loss(log_softmax(input), target, weight, size_average)
538
539
D:\Anaconda3\lib\site-packages\torch\nn\functional.py in nll_loss(input, target, weight, size_average)
503 else:
504 raise ValueError('Expected 2 or 4 dimensions (got {})'.format(dim))
--> 505 return f(input, target)
506
507
D:\Anaconda3\lib\site-packages\torch\nn\_functions\thnn\auto.py in forward(self, input, target)
39 output = input.new(1)
40 getattr(self._backend, update_output.name)(self._backend.library_state, input, target,
---> 41 output, *self.additional_args)
42 return output
43
RuntimeError: Assertion `THIndexTensor_(size)(target, 0) == batch_size' failed. at d:\downloads\pytorch-master-1\torch\lib\thnn\generic/ClassNLLCriterion.c:50 |
st100243 | Hi,
I think the problem is that you are missing the batch dimension on your target_tensor.
The error says that the size of the 0th dimension is not equal to the batch size.
Try changing this: loss = criterion(output, target_tensor[i].unsqueeze(0)). |
st100244 | Thank you for your reply. But I don't think it works; it raised an error:
RuntimeError: multi-target not supported at d:\downloads\pytorch-master-1\torch\lib\thnn\generic/ClassNLLCriterion.c:20
I think it is because I unsqueezed the target, and torch regards it as a multi-target.
And after using unsqueeze, I printed output.size() and target.size(), and got torch.Size([1, 1139]) and torch.Size([1, 1139]), respectively. |
st100245 | Your output should have one more dimension than the target, corresponding to a score for each label, and the target should just contain the index of the correct label. |
st100246 | Yeah, I mean before using unsqueeze, I got torch.Size([1, 1139]) and torch.Size([1139]), which I think is right. But it raised THIndexTensor_(size)(target, 0) == batch_size, and I didn't try to use batches here. |
st100247 | Pytorch always uses batches (even if it means having a first dimension of size 1).
If you have a single element with 1139 possible labels, then the output should be 1x1139 and the target should be a LongTensor of size 1 (containing the index of the correct label).
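A minimal sketch of those shapes, with a hypothetical label index:
import torch
import torch.nn as nn

criterion = nn.CrossEntropyLoss()
output = torch.randn(1, 1139)   # [batch_size, num_classes], one row per sample
target = torch.tensor([42])     # LongTensor of size 1: the correct class index
loss = criterion(output, target)
The same shapes apply to nn.NLLLoss after log_softmax. |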
st100248 | Thank you so much, man!! It just worked. But there is another error, can you help me with this? :slight_smile:
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-75-95f5d8615326> in <module>()
----> 1 output, loss = train(rnn, learning_rate, optimizer, criterion, input_tensor, target)
<ipython-input-71-ebb92fd662bb> in train(rnn, learning_rate, optimizer, criterion, input_tensor, target_tensor)
51 print(output.size(), target_tensor[i])
52 loss = criterion(output, target_tensor[i])
---> 53 loss.backward()
54 optimizer.step()
55 return output, loss.data[0] / input_tensor.size()[0]
D:\Anaconda3\lib\site-packages\torch\autograd\variable.py in backward(self, gradient, retain_variables)
142 raise TypeError("gradient has to be a Tensor, Variable or None")
143 gradient = Variable(gradient, volatile=True)
--> 144 self._execution_engine.run_backward((self,), (gradient,), retain_variables)
145
146 def register_hook(self, hook):
D:\Anaconda3\lib\site-packages\torch\autograd\function.py in apply(self, *args)
88
89 def apply(self, *args):
---> 90 return self._forward_cls.backward(self, *args)
91
92
D:\Anaconda3\lib\site-packages\torch\nn\_functions\linear.py in backward(ctx, grad_output)
19 @staticmethod
20 def backward(ctx, grad_output):
---> 21 input, weight, bias = ctx.saved_variables
22
23 grad_input = grad_weight = grad_bias = None
RuntimeError: Trying to backward through the graph second time, but the buffers have already been freed. Please specify retain_variables=True when calling backward for the first time. |
st100249 | The problem here is in the way you use Variable.
Basically as soon as you start using a Variable, it will create an history of all the computations you do with it to be able to get gradients.
So for elements that do not need gradients, you want to create it as late as possible. Keep in mind that creating a Variable is completely free so you can do it (and should do it) in your inner loop of training.
In your case, you should not wrap your whole dataset in a single Variable and then slice it in your training loop, but have input_tensor = onehot_data[:, :-1].type(torch.FloatTensor) and, in your training loop, net_input = Variable(input_tensor[i, :].unsqueeze(0)). And the same for the target.
The error that you saw is because of memory optimization: when you backpropagate through the graph, all intermediary buffers are freed. If you try to call backward again on the same graph (or a subset of it, in your case), it cannot run the backward pass because some of these data have been freed. In your case, when you call loss.backward(), it backpropagates all the way to the full dataset tensor, and at the next step the same part of the graph that goes from the full dataset to your sample is reused, but the buffers have already been freed. Changing the moment where you package into Variable as proposed above will solve this problem.
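A minimal sketch of that change, using the pre-0.4 Variable API from the thread: keep the dataset as plain tensors and wrap each sample as late as possible, inside the training loop.
input_tensor = onehot_data[:, :-1].type(torch.FloatTensor)  # plain tensor, no history
for i in range(input_tensor.size(0)):
    net_input = Variable(input_tensor[i, :].unsqueeze(0))  # fresh graph each step
    # ... same for the target, then forward, loss, backward as before
Each iteration now builds its own small graph, so backward never reaches into an already-freed one. |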
st100250 | If I may post on this thread: I'm having a similar issue to the original one, so I thought I'd post it here rather than creating a new thread.
My training code looks as follows:
model.train()
train_loss = []
train_accu = []
i = 0
for epoch in range(30):
    for data, target in test_loader:
        print(target.shape)
        print(data.view(batch_size, 1, 64, 64).shape)
        data, target = Variable(data), Variable(target)
        optimizer.zero_grad()
        output = model(data.view(batch_size, 1, 64, 64))
        print(output.shape)
        loss = F.nll_loss(output, target.view(batch_size))  # negative log likelihood (goes with softmax)
        loss.backward()  # calc gradients
        train_loss.append(loss.data[0])  # record the loss
        optimizer.step()  # update parameters
        prediction = output.data.max(1)[1]  # index of the highest log-probability
        accuracy = (prediction.eq(target.data).sum() / batch_size) * 100
        train_accu.append(accuracy)
        if i % 10 == 0:
            print('Epoch:', str(epoch), 'Train Step: {}\tLoss: {:.3f}\tAccuracy: {:.3f}'.format(i, loss.data[0], accuracy))
        i += 1
giving:
torch.Size([3])
torch.Size([3, 1, 64, 64])
torch.Size([12, 12])
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-334-ce8b8adb782b> in <module>()
11 output = model(data.view(batch_size,1,64,64))
12 print(output.shape)
---> 13 loss = F.nll_loss(output, target.view(batch_size)) # Negative log likelihood (goes with softmax).
14 loss.backward() # calc gradients
15 train_loss.append(loss.data[0]) # Calculating the loss
~/anaconda3/lib/python3.6/site-packages/torch/nn/functional.py in nll_loss(input, target, weight, size_average, ignore_index, reduce)
1047 weight = Variable(weight)
1048 if dim == 2:
-> 1049 return torch._C._nn.nll_loss(input, target, weight, size_average, ignore_index, reduce)
1050 elif dim == 4:
1051 return torch._C._nn.nll_loss2d(input, target, weight, size_average, ignore_index, reduce)
RuntimeError: Assertion `THIndexTensor_(size)(target, 0) == batch_size' failed. at /opt/conda/conda-bld/pytorch-cpu_1515613813020/work/torch/lib/THNN/generic/ClassNLLCriterion.c:79
The last layer of my CNN outputs 12 numbers:
class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 5, padding=2)  # 1 input channel, 32 out, filter size = 5x5, 2 block outer padding
        self.conv2 = nn.Conv2d(32, 64, 5, padding=2)  # 32 inputs, 64 out, filter size = 5x5, 2 block padding
        self.fc1 = nn.Linear(64 * 8 * 8, 1024)  # fully connected layer
        self.fc2 = nn.Linear(1024, 12)  # fully connected layer, 12 out

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)  # max pool over convolution with 2x2 pooling
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)  # max pool over convolution with 2x2 pooling
        x = x.view(-1, 64 * 8 * 8)  # tensor.view() reshapes the tensor
        x = F.relu(self.fc1(x))  # activation function after passing through fully connected layer
        # x = F.dropout(x, training=self.training)  # dropout regularisation
        x = self.fc2(x)  # pass through final fully connected layer
        output = F.log_softmax(x, dim=1)  # give results using softmax
        return output

model = Net()
model.apply(weight_init)
model = model.double()
print(model)
But for some reason, in my mind (given the above thread), my target should have size (3, 12) if it is going to match the batch size.
Does anyone have any ideas as to how to fix this problem? |
st100251 | Your .view has the wrong dimensions.
Based on your input size, it should be x = x.view(-1, 64*16*16)
or alternatively x = x.view(x.size(0), -1).
Since you are pooling twice with kernel_size=2 and stride=2, your height and width will be reduced to 64/2/2 = 16.
Therefore, you also have to change the in_features of fc1 to 64*16*16.
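A minimal sketch checking the shapes, assuming the Net above with 64x64 inputs:
import torch
import torch.nn as nn
import torch.nn.functional as F

conv1 = nn.Conv2d(1, 32, 5, padding=2)
conv2 = nn.Conv2d(32, 64, 5, padding=2)
fc1 = nn.Linear(64 * 16 * 16, 1024)     # was 64*8*8

x = torch.randn(3, 1, 64, 64)
x = F.max_pool2d(F.relu(conv1(x)), 2)   # -> [3, 32, 32, 32]
x = F.max_pool2d(F.relu(conv2(x)), 2)   # -> [3, 64, 16, 16]
x = x.view(x.size(0), -1)               # -> [3, 16384]
x = F.relu(fc1(x))                      # -> [3, 1024]
The batch dimension stays 3, so a target of size [3] now matches. |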
st100252 | Thank you for replying. Ah yes, you're quite right! Now I'm getting this error, however:
RuntimeError: size mismatch, m1: [3 x 16384], m2: [4096 x 1024] at /opt/conda/conda-bld/pytorch-cpu_1515613813020/work/torch/lib/TH/generic/THTensorMath.c:1416 |
st100253 | Sorry, I was too fast posting. I've added the note that you also have to change the in_features of fc1 to 64*16*16. |
st100254 | That did the job! Thanks so much! Admittedly I should have noticed that last point myself too |
st100255 | Hi,
I’m trying to concatenate two layers as below.
class Net1(nn.Module):
    def __init__(self, num_classes=2):
        super(Net1, self).__init__()
        self.conv1 = nn.Conv2d(in_channels=3, out_channels=10, kernel_size=3, stride=1, padding=1)
        self.relu1 = nn.ReLU()
        self.conv2 = nn.Conv2d(in_channels=10, out_channels=10, kernel_size=3, stride=1, padding=1)
        self.relu2 = nn.ReLU()
        self.pool = nn.MaxPool2d(kernel_size=2)
        self.conv3 = nn.Conv2d(in_channels=10, out_channels=10, kernel_size=3, stride=1, padding=1)
        self.relu3 = nn.ReLU()
        self.conv4 = nn.Conv2d(in_channels=10, out_channels=10, kernel_size=3, stride=1, padding=1)
        self.relu4 = nn.ReLU()
        self.fc = nn.Linear(in_features=112 * 112 * 10, out_features=num_classes)

    def forward(self, input):
        output1 = self.conv1(input)
        output2 = self.relu1(output1)
        output3 = self.conv2(output2)
        output4 = self.relu2(output3)
        output5 = self.pool(output4)
        output6 = self.conv3(output5)
        output7 = self.relu3(output6)
        output8 = self.conv4(output7)
        output9 = self.relu4(output8)
        # output = torch.cat((self.conv4(output), self.conv3(output)), 1)
        output10 = torch.cat((output9, output7), 1)
        # output10 = output9.view(-1, 112 * 112 * 10)
        output11 = output10.view(-1, 112 * 112 * 10)
        output12 = self.fc(output11)
        return output12

net1 = Net1()
print(net1)
But I got this error after adding torch.cat layer
ValueError: Expected input batch_size (8) to match target batch_size (4). |
st100256 | Your model works with an input size of [batch_size, 3, 224, 224].
Could you post the Dataset or generally how you load and process the data?
Based on the error message it seems there is a mismatch between your data and target.
PS: I’ve formatted your post. You can add code snippets using three backticks. |
st100257 | Thank you
Here is my data loading part
data_dir_train = 'cross_vali/train'
data_dir_val = 'cross_vali/val'
transform = transforms.Compose(
    [transforms.Resize(224),
     transforms.ToTensor()])
# transforms.Normalize((76.02, 34.22, 37.86), (52.76, 6.61, 28.19))])
trainset = torchvision.datasets.ImageFolder(root=data_dir_train,
                                            transform=transform)
trainloader = torch.utils.data.DataLoader(trainset, batch_size=4,
                                          shuffle=True, num_workers=2)
testset = torchvision.datasets.ImageFolder(root=data_dir_val,
                                           transform=transform)
testloader = torch.utils.data.DataLoader(testset, batch_size=4,
                                         shuffle=False, num_workers=2)
classes = ('MCIc', 'MCIs') |
st100258 | Thanks for the code. Could you print the shape of output10 before the torch.cat operation in forward? |
st100259 | Hi everyone,
I got a similar issue:
Expected input batch_size (896) to match target batch_size
My code is the following:
import torch
import torch.nn as nn
import torchvision.models as models

class EncoderCNN(nn.Module):
    def __init__(self, embed_size):
        super(EncoderCNN, self).__init__()
        resnet = models.resnet50(pretrained=True)
        for param in resnet.parameters():
            param.requires_grad_(False)
        modules = list(resnet.children())[:-1]
        self.resnet = nn.Sequential(*modules)
        self.embed = nn.Linear(resnet.fc.in_features, embed_size)

    def forward(self, images):
        features = self.resnet(images)
        features = features.view(features.size(0), -1)
        features = self.embed(features)
        return features

class DecoderRNN(nn.Module):
    def __init__(self, embed_size, hidden_size, vocab_size, num_layers=1):
        super(DecoderRNN, self).__init__()
        self.num_layers = num_layers
        self.hidden_size = hidden_size
        self.embed_size = embed_size
        self.drop_prob = 0.2
        self.vocabulary_size = vocab_size
        # define the LSTM
        self.lstm = nn.LSTM(self.embed_size, self.hidden_size, self.num_layers, batch_first=True)
        self.dropout = nn.Dropout(self.drop_prob)
        self.embed = nn.Embedding(self.vocabulary_size, self.embed_size)
        self.linear = nn.Linear(hidden_size, vocab_size)

    def forward(self, features, captions):
        # generate embeddings from the caption labels
        embeddings = self.embed(captions)
        # concatenate the image features and caption embeddings along the sequence dimension
        embeddings = torch.cat((features.unsqueeze(1), embeddings), 1)
        print("Embeddings", embeddings.shape)
        # pack in sequences to create several batches with sequence length vocabulary size
        # packed = torch.nn.utils.rnn.pack_padded_sequence(embeddings, self.vocabulary_size, batch_first=True)
        # the LSTM returns hidden states and the output of the LSTM layers
        # (a score telling how near we are to finding the right word sequence)
        hiddens, c = self.lstm(embeddings)
        self.dropout(hiddens)
        # regression that feeds into the next LSTM cell and contains the previous state
        outputs = self.linear(hiddens)
        return outputs

    def sample(self, inputs, states=None, max_len=20):
        "accepts pre-processed image tensor (inputs) and returns predicted sentence (list of tensor ids of length max_len)"
        sampled_ids = []
        inputs = inputs.unsqueeze(1)
        print("INPUT", inputs.shape)
        for i in range(max_len):
            # LSTM cell h, c
            hidden, states = self.lstm(inputs, states)
            outputs = self.linear(hidden.squeeze(1))
            # argmax probability per output of the LSTM cell
            _, predicted = outputs.max(1)
            sampled_ids.append(predicted)
            # update the input with the new output for the next LSTM cell
            # (how to tell if the index is a word-vector index?)
            inputs = self.embed(predicted)
            print("NEW_INPUT", inputs.shape)
            inputs = inputs.unsqueeze(1)
        sampled_ids = torch.stack(sampled_ids, 1)  # sampled_ids: (batch_size, max_seq_length)
        return sampled_ids
Thanks in advance,
Bruno |
st100260 | I'm trying to implement an ensemble model; since the sub-models are independent, I want to train them in parallel using torch.multiprocessing. However, I always get a Too many open files error.
Here is a minimal example that reproduces the error:
import torch
import torch.nn as nn
from torch.multiprocessing import Pool

class MyModel:
    def __init__(self):
        self.nn = nn.Sequential(
            nn.Linear(10, 10), nn.ReLU(),
            nn.Linear(10, 10), nn.ReLU(),
            nn.Linear(10, 10), nn.ReLU(),
            nn.Linear(10, 10), nn.ReLU(),
            nn.Linear(10, 10), nn.ReLU(),
            nn.Linear(10, 10), nn.ReLU(),
            nn.Linear(10, 10), nn.ReLU(),
            nn.Linear(10, 10), nn.ReLU()
        )

    def train(self):
        pass

class EnsembleModel:
    def __init__(self, K):
        self.K = K
        self.models = [MyModel() for i in range(self.K)]

    def f(self, i):
        return i

    def train(self):
        pool = Pool(processes=3)
        ret = pool.map(self.f, range(self.K))
        print(ret)

md = EnsembleModel(15)
md.train()
And this is the error message:
/home/alaya/anaconda3/lib/python3.6/multiprocessing/reduction.py:153: RuntimeWarning: received malformed or improperly-truncated ancillary data
msg, ancdata, flags, addr = sock.recvmsg(1, socket.CMSG_LEN(bytes_size))
Process ForkPoolWorker-3:
Traceback (most recent call last):
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/process.py", line 258, in _bootstrap
self.run()
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/process.py", line 93, in run
self._target(*self._args, **self._kwargs)
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/pool.py", line 108, in worker
task = get()
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/queues.py", line 337, in get
return _ForkingPickler.loads(res)
File "/home/alaya/anaconda3/lib/python3.6/site-packages/torch/multiprocessing/reductions.py", line 151, in rebuild_storage_fd
fd = df.detach()
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 58, in detach
return reduction.recv_handle(conn)
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/reduction.py", line 182, in recv_handle
return recvfds(s, 1)[0]
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/reduction.py", line 172, in recvfds
raise RuntimeError('Invalid data received')
RuntimeError: Invalid data received
Traceback (most recent call last):
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 149, in _serve
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 50, in send
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/reduction.py", line 176, in send_handle
File "/home/alaya/anaconda3/lib/python3.6/socket.py", line 460, in fromfd
OSError: [Errno 24] Too many open files
Traceback (most recent call last):
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 142, in _serve
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/connection.py", line 453, in accept
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/connection.py", line 593, in accept
File "/home/alaya/anaconda3/lib/python3.6/socket.py", line 205, in accept
OSError: [Errno 24] Too many open files
Traceback (most recent call last):
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/resource_sharer.py", line 142, in _serve
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/connection.py", line 453, in accept
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/connection.py", line 593, in accept
File "/home/alaya/anaconda3/lib/python3.6/socket.py", line 205, in accept
OSError: [Errno 24] Too many open files
Exception in thread Thread-1:
Traceback (most recent call last):
File "/home/alaya/anaconda3/lib/python3.6/threading.py", line 916, in _bootstrap_inner
self.run()
File "/home/alaya/anaconda3/lib/python3.6/threading.py", line 864, in run
self._target(*self._args, **self._kwargs)
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/pool.py", line 405, in _handle_workers
pool._maintain_pool()
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/pool.py", line 246, in _maintain_pool
self._repopulate_pool()
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/pool.py", line 239, in _repopulate_pool
w.start()
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/process.py", line 105, in start
self._popen = self._Popen(self)
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/context.py", line 277, in _Popen
return Popen(process_obj)
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/popen_fork.py", line 19, in __init__
self._launch(process_obj)
File "/home/alaya/anaconda3/lib/python3.6/multiprocessing/popen_fork.py", line 65, in _launch
parent_r, child_w = os.pipe()
OSError: [Errno 24] Too many open files |
st100261 | I’m getting your code to run fine, although I changed one thing:
pool = Pool(processes = 3)
ret = pool.map(self.f, range(self.K))
print(ret)
to
with Pool(processes=3) as pool:
    ret = pool.map(self.f, range(self.K))
    print(ret)
Don't forget to close your pools if you don't use a with statement. |
st100262 | Ah, I can't replicate the error with your code.
I had the same error when I was running too many processes that I had not closed. If this is not the case, you can try increasing the maximum amount by running ulimit -n 2048 if you are on Linux and have root privileges. Change 2048 to something that fits you. To see the current amount, run ulimit -n. |
st100263 | I am using F.log_softmax + nn.NLLLoss and after about 150 epochs both my training and validation losses are flat around nearly 0.
Normally with cross entropy I'd expect the validation curve to go back up (over-fitting) but that is not happening here.
If I understand this setup correctly, the higher the correct prediction confidence is, the closer to 0 the loss gets?
Does this mean reaching zero is ideal?
Does the fact that my model is trending close to zero mean I should stop training?
How do I explain the validation curve not going back up (away from zero)?
Data set is:
9 classes, balanced representation, ~49,000 samples
I did look before I posted, couldn’t find the answer |
st100264 | Your model is perfectly fitting the distribution in the training and validation set.
I don’t want to be pessimistic, but I think something might be wrong.
Could you check the distribution of your labels?
Could it be that your DataLoader is somehow returning the same labels over and over again? |
st100265 | I will post some code later, but in general this is what happens…
If I let this go for longer, both training and validation will stagnate: training at maybe 0.05 and validation at 0.08.
Once they hit those numbers (5 and 8) they never improve or get worse.
My concern is that the validation loss never shoots back up |
st100266 | So the losses are not approx. zero, but a bit higher.
Are you using an adaptive optimizer, e.g. Adam? |
st100267 | Yes. Adam.
I have tried several different learning rates as well. They all ultimately result in the same or similar behavior |
st100268 | It might be that Adam is reducing the per-parameter estimates so that your training stagnates and the val loss doesn't blow up. You could try a different optimizer like SGD and try again. |
st100269 | I changed the beta values in Adam and got the following behavior:
the validation loss does continue to increase past 100 epochs, albeit slowly… training continues to drop slowly.
I think the model might also be extremely sensitive to the learning rate… this is with 3e-5.
Lowering it causes stranger behavior, such as starting losses around 0.5/0.49.
I attempted a cyclical learning rate but it increases the execution time by hours. |
st100270 | To get back to the original question:
F.log_softmax + nn.NLLLoss work exactly as raw logits + nn.CrossEntropyLoss.
I think this issue is now more of a general nature.
The validation loss might increase after a while. Sometimes your model just gets stuck and both losses stay the same for a (long) while.
I like to check for code or model bugs by using a small part of my training data and trying to overfit my model on it, so that it reaches approx. 0 loss.
If that's not possible with the current code, I will look for bugs or change the model architecture.
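A minimal sketch of that sanity check, assuming model, criterion, optimizer and a dataset already exist:
import torch
from torch.utils.data import DataLoader, Subset

small = Subset(dataset, list(range(32)))  # a tiny, fixed subset
loader = DataLoader(small, batch_size=8, shuffle=True)
for epoch in range(200):
    for data, target in loader:
        optimizer.zero_grad()
        loss = criterion(model(data), target)
        loss.backward()
        optimizer.step()
    # the loss should approach 0 here; if it doesn't, suspect a bug
If the model can't memorize 32 samples, the problem is in the code or architecture rather than the data. |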
st100271 | Hi guys, I want to use a CNN as a feature extractor. When defining a neural net with nn.Sequential, for example:
self.features = nn.Sequential(OrderedDict({
    'conv_1': nn.Conv2d(1, 10, kernel_size=5),
    'conv_2': nn.Conv2d(10, 20, kernel_size=5),
    'dropout': nn.Dropout2d(),
    'linear_1': nn.Linear(320, 50),
    'linear_2': nn.Linear(50, 10)
}))
I wonder if there is any way I can get a layer's index by its name. This way, it would be easier for me to extract features. Imagine what I want in the code below:
indx = original_model.features.get_index_by_name('conv_1')
feature = original_model.features[:indx](x)
A more general question would be "how to extract features at specific layers" in a pretrained model defined with nn.Sequential. I hope I made it clear. Hope you guys can help me, thank you! |
st100272 | Solved by ptrblck in post #2
Your get_index_by_name method could implement something like this small hack:
list(dict(features.named_children()).keys()).index('conv_2')
> 1 |
st100273 | Your get_index_by_name method could implement something like this small hack:
list(dict(features.named_children()).keys()).index('conv_2')
> 1
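A minimal sketch putting the hack to use, re-creating the features Sequential from the question; slicing a Sequential returns a new Sequential in recent PyTorch versions:
import torch
import torch.nn as nn
from collections import OrderedDict

features = nn.Sequential(OrderedDict({
    'conv_1': nn.Conv2d(1, 10, kernel_size=5),
    'conv_2': nn.Conv2d(10, 20, kernel_size=5),
}))

def get_index_by_name(seq, name):
    return list(dict(seq.named_children()).keys()).index(name)

idx = get_index_by_name(features, 'conv_2')
extractor = features[:idx + 1]  # sub-network up to and including conv_2
feature = extractor(torch.randn(1, 1, 28, 28))  # hypothetical input
Note the idx + 1: slicing is exclusive at the end, so this keeps the named layer itself. |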
st100274 | I tried installing pytorch on my pc, which runs Debian. I get an error stating that it is not a supported wheel on this platform, both with and without the UCS2 build of python 2.7:
torch-0.2.0.post2-cp27-cp27m-manylinux1_x86_64.whl is not a supported wheel on this platform.
torch-0.2.0.post2-cp27-cp27mu-manylinux1_x86_64.whl is not a supported wheel on this platform.
$uname -a
Linux senior 3.16.0-4-amd64 #1 SMP Debian 3.16.7-ckt25-2+deb8u3 (2016-07-02) x86_64 GNU/Linux
$lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 8.5 (jessie)
Release: 8.5
Codename: jessie
$python --version
Python 2.7.9 |
st100275 | pip --version
That will reveal which version of python your pip is associated with, and then it’ll be easier to figure out the problem. |
st100276 | That's really weird. From what you've shown so far, one of those wheels has to work. |
st100277 | Ya, it is indeed weird. I am able to install it on my laptop. It just doesn't seem to work on my pc…
It is interesting to note that the below command worked:
pip install --user https://s3.amazonaws.com/pytorch/whl/cu75/torch-0.1.6.post22-cp27-none-linux_x86_64.whl
This is an older version though. I found this answer in one of your comments, which actually solved the problem back in January for another user. I can download this version, but if there is a fix I would obviously love to install the latest one. |
st100278 | the obvious but hacky fix is to rename the file torch-0.2.0.post2-cp27-cp27mu-manylinux1_x86_64.whl to torch-0.2.0.post2-cp27-cp27mu-none_x86_64.whl |
st100279 | Stuck at the same place: the wheel is not supported, while I’m using Linux Mint (based on Ubuntu 16.04) with Python 2.7.
[screenshot of the pip error attached] |
st100280 | I provided full instructions here:
"wheel not supported" for pip install
Torch CPU
!pip install http://download.pytorch.org/whl/cu75/torch-0.2.0.post1-cp27-cp27mu-manylinux1_x86_64.whl
!pip install torchvision
!pip install --upgrade http://download.pytorch.org/whl/cu75/torch-0.2.0.post1-cp27-cp27mu-manylinux1_x86_64.whl
!pip install --upgrade torchvision
Torch GPU
Build PyTorch from source
RUN git clone https://github.com/pytorch/pytorch.git
&& cd pytorch
&& git checkout 4eb448a051a1421de1dda9bd2ddfb34396eb7287
&& TORCH_CUDA_ARCH_LIST="3.5 5.2 6.0 6.1+PTX"… |
st100281 | Hi.
Changing the whl filename to none didn’t work.
What did work was building from source; I also had to set the environment variable NO_CUDA to true. Just passing this information on to others. |
st100282 | Hi, I am trying to implement a model whose loss function depends on a pair of training samples. Specifically, given dataset D={x_i}, the loss function will be E(f(x_i), f(x_j)), where f is the neural network model. I wonder if there is an efficient way to generate a stream of random pairs {x_i, x_j} for training?
Currently I have the two following approaches in mind:
preprocess D to get the complete pair set containing N(N-1)/2 pairs (N is the number of samples in D) and then feed them into the model. The problem is that the memory usage (even at the preprocessing stage) scales with N^2, which is not tractable.
sample an {i, j} index pair at runtime for each pair, and manually pass the indices to D (without using DataLoader) to get the data for each batch. But this introduces significant overhead and can be inefficient.
I am wondering if there exists some sort of “generator” that could handle this sampling problem efficiently. Any suggestion will be appreciated. Thanks!
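One possible middle ground, sketched below: a Dataset that draws random pairs on the fly (class and variable names are placeholders; D is assumed to be indexable and to return tensors):

import random
from torch.utils.data import Dataset, DataLoader

class RandomPairDataset(Dataset):
    def __init__(self, base, num_pairs):
        self.base = base              # the original dataset D
        self.num_pairs = num_pairs    # nominal epoch length

    def __len__(self):
        return self.num_pairs

    def __getitem__(self, _):
        i, j = random.sample(range(len(self.base)), 2)  # i != j
        return self.base[i], self.base[j]

pair_loader = DataLoader(RandomPairDataset(D, num_pairs=100000),
                         batch_size=64, num_workers=4)

Memory stays O(N) since pairs are never materialized, and the DataLoader workers hide most of the sampling overhead. |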
st100283 | I am new to PyTorch. I have a sparse tensor and a dense (full) one, and I want to measure the similarity between these two binary tensors. I tried binary_cross_entropy_with_logits, but it doesn’t accept a sparse and a dense tensor together, and I couldn’t find anything else, so I tried to convert the sparse tensor to a dense one by casting (torch.FloatTensor), but that doesn’t work. What can I do?
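For what it’s worth, a sketch of the densify-first route (assuming sparse_t is a torch sparse tensor and logits is the dense tensor; the conversion is done with .to_dense() rather than a cast):

import torch.nn.functional as F

dense_target = sparse_t.to_dense().float()   # materialize the sparse tensor
loss = F.binary_cross_entropy_with_logits(logits, dense_target)

This only works if the densified tensor fits in memory, of course. |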
st100284 | I am running code in PyTorch on the GPU, which produces a matrix that is fed to MATLAB code where the rest of the logic is implemented.
The code works fine but takes 10 to 15 minutes per epoch using two GPUs. The probable reason is the data transfer: PyTorch CUDA tensor to MATLAB variable, running the MATLAB code on the CPU, and then back from MATLAB to a PyTorch tensor.
I am curious whether there is any way to use the PyTorch tensor as-is in the MATLAB code and force MATLAB to run on the GPU with that PyTorch CUDA tensor.
Any help is highly appreciated. |
st100285 | Hi!
I’m trying to implement the model presented in this paper by Uber: https://arxiv.org/pdf/1709.01907.pdf.
Long story short, I have an LSTM used to predict the next day of a time series, after consuming N days of data.
I have one function used to train the inference part, and another to infer the next value in the series at test time. They are identical, except that the second one doesn’t backpropagate or train the model. When I’m training, the model quickly converges, and I can extract its outputs at train time; they look something like this:
(removed image as new users can only post one image)
But when I run the model at test time, the output remains the same! And it is exactly equal to the last output of the training phase!
I’ve tried everything over the last few days, and even rewrote the entire inference function, to no avail. I finally discovered that just by activating the optimizer step again, the output starts to change. But the moment the test is done without optimization and grads, the output freezes no matter the input, even with random vectors as input!
I’m really desperate; I would be very grateful even for some possible direction to tackle this problem from. Here is the code of the training function and the forecasting function (test time). Both of them use the inference part of the model!
FORECASTING FUNCTION (always same output)
def ForecastSequence1x12(encoder, forecaster, window_size, dev_pairs, num_stochastic):
    with torch.no_grad():
        # number of stochastic predictions MC dropout
        B = num_stochastic
        #encoder.eval()
        total_loss = 0
        outputs = []
        real_values = []
        hiddens = []
        for iter in range(1, len(dev_pairs)):
            list_predictions = []
            input_tensor = dev_pairs[iter - 1][0]
            target_tensor = dev_pairs[iter - 1][1]
            encoder_hidden1 = encoder.initHidden()
            _, (ht, ct) = encoder(
                target_tensor[:window_size], encoder_hidden1, use_dropout=False)
            hidden_and_input = torch.cat((ht[1].squeeze(),
                                          ct[1].squeeze(),
                                          input_tensor[window_size]
                                          ))
            forecaster_output = forecaster(hidden_and_input, use_dropout=False)
            outputs += [forecaster_output.cpu().numpy()]
            real_values += [target_tensor[window_size].cpu().numpy().squeeze()]
            total_loss += (forecaster_output.cpu().numpy() - target_tensor[window_size].cpu().numpy().squeeze()) ** 2
        print(total_loss / len(dev_pairs))
        return outputs, real_values |
TRAINING FUNCTION
def TrainForecast(input_tensor, target_tensor, encoder, forecaster,
                  encoder_optimizer, forecaster_optimizer, criterion,
                  window_size):
    encoder_optimizer.zero_grad()
    forecaster_optimizer.zero_grad()
    input_length = input_tensor.size(0)
    target_length = target_tensor.size(0)
    loss = 0
    #print(torch.mean(target_tensor[:window_size]))
    encoder_hidden = encoder.initHidden()
    _, encoder_hidden = encoder(
        target_tensor[:window_size], encoder_hidden, use_dropout=False)
    # concatenate hidden state and input_tensor (exogenous variables to the time series)
    hidden_and_input = torch.cat((encoder_hidden[0][1].squeeze(),
                                  encoder_hidden[1][1].squeeze(),
                                  input_tensor[window_size]))
    #print(torch.mean(hidden_and_input))
    #print("forecaster_input", hidden_and_input)
    forecaster_output = forecaster(hidden_and_input, use_dropout=False)
    # after all timesteps have been processed by the encoder, we check the error only against the last real target
    loss = criterion(forecaster_output.squeeze(), target_tensor[window_size].squeeze())
    #print(forecaster_output, target_tensor[days_window])
    loss.backward()
    encoder_optimizer.step()
    forecaster_optimizer.step()
    return (loss.item() / target_length), forecaster_output.detach().cpu().numpy().squeeze() |
st100286 | I just skimmed through your code, and stumbled over these lines:
list_predictions += [forecaster_output.cpu().numpy()]  # + target_tensor[0].numpy()]
# pass list of lists with lists of B predictions
outputs += [list_predictions[0]]
Wouldn’t this just add the first prediction to outputs, while the new ones are appended at the end?
This wouldn’t explain why your output changes, when the optimizer is called, so I probably miss something. |
st100287 | Yeah, you are right. When the model is functional I hope to use Monte Carlo Dropout, so I would need multiple computations of the same prediction. But for now I’m just appending the first prediction to test. I will clean up the code in my post |
st100288 | I’m not sure if that was the issue or not.
Do you get different predictions now? |
st100289 | Same thing… This wasn’t the problem. I’ve updated the code on my post without that part to make it less confusing. |
st100290 | Did you try to use the same data from training while testing, as a sanity check? |
st100291 | Yes! This is all done with the training data. I haven’t touched the dev data yet |
st100292 | Hi, does anyone know what batch_size for the validation set means in the following code:
import numpy as np
from torch.utils.data import DataLoader
from torch.utils.data.sampler import SubsetRandomSampler

indices = list(range(len(train_data)))
np.random.shuffle(indices)
split = int(np.floor(valid_size * len(train_data)))
train_idx, valid_idx = indices[split:], indices[:split]

# note: the loaders wrap the same dataset the indices were drawn from
train_loader = DataLoader(train_data, batch_size=50,
                          sampler=SubsetRandomSampler(train_idx))
valid_loader = DataLoader(train_data, batch_size=50,
                          sampler=SubsetRandomSampler(valid_idx))
Does it mean that only 50 samples from the validation set are used for validation, or that the complete split is used but 50 samples are evaluated at a time?
Thanks! |
st100293 | Solved by albanD in post #2
Hi,
It means that the data will be drawn in batches of 50. As you usually can’t put the whole validation dataset through your net at once, you do it in minibatches, similar to what you do for training. |
st100294 | Hi,
It means that the data will be drawn in batches of 50. As you usually can’t put the whole validation dataset through your net at once, you do it in minibatches, similar to what you do for training.
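A minimal sketch of such a validation loop, accumulating the loss over the 50-sample batches (model and criterion are assumed from the training code, valid_idx from the question above):

import torch

model.eval()
total_loss = 0.0
with torch.no_grad():
    for data, target in valid_loader:
        output = model(data)
        total_loss += criterion(output, target).item() * data.size(0)
print(total_loss / len(valid_idx))  # average loss over the whole split

So every sample in the split is evaluated; the 50 just controls how many go through the net at once. |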
st100295 | Hi! I’ve noticed that here (in the Chainer GAN lib) they use 1008 output classes, while
in the implementation provided by the PyTorch team (used here), 1000 output classes are used. Messing around with comparing Inception scores, I found that the numbers do not match. I want to make a comparative study between the models in their repo and mine (in PyTorch). What would you suggest? For now the only way I see is to somehow adapt the Chainer model, but I’d like to avoid that. |
st100296 | Hello
How important is it for the CNN that the data is perfectly “grid-like”?
I know CNNs are great for images because the pixels in an image are placed on a grid with a constant distance to each other. This grid can be 1-dimensional as well, as for a time series with a constant update frequency.
My data is a time series and should thus work with CNNs as well. Unfortunately, the data is not gathered at a constant interval, and is thus not perfectly on a grid.
It might look something like this: [6,6,6,6,20,20,20,10,10], where each number is the time since the last sample. So first there are 6 seconds between samples, then 20 seconds, etc.
Do I need to interpolate between my samples so that everything is updated at an interval of 6 seconds for my CNN to work? Or will it work just fine with the changing interval?
I can’t seem to find much information on the subject. In the CNN chapter of deeplearningbook.org they mention that it is important that the data is “grid-like”. However, they don’t comment on whether it’s impossible to use CNNs if the data has irregular update frequencies, just that it’s important.
Any insight or links to any information regarding this would be much appreciated.
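If resampling does turn out to be necessary, a quick sketch of linear interpolation onto a uniform 6-second grid (times/values are placeholders for your timestamps and measurements):

import numpy as np

gaps = [6, 6, 6, 6, 20, 20, 20, 10, 10]          # seconds between samples
times = np.concatenate(([0], np.cumsum(gaps)))    # irregular timestamps
values = np.random.randn(len(times))              # your measurements go here

uniform_t = np.arange(times[0], times[-1], 6)     # regular 6-second grid
uniform_v = np.interp(uniform_t, times, values)   # linearly interpolated series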
regards |
st100297 | Let me explain the objective first. Let’s say I have 1000 images, each with an associated quality score (in the range 0-10). Now I am trying to perform image quality assessment using a CNN with regression (in PyTorch). I have divided the images into equal-size patches and created a CNN network in order to perform the regression. Following is the code:
import torch.nn as nn
import torch.nn.functional as F

class MultiLabelNN(nn.Module):
    def __init__(self):
        super(MultiLabelNN, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, 5)
        self.pool = nn.MaxPool2d(2, 2)
        self.conv2 = nn.Conv2d(32, 64, 5)
        self.fc1 = nn.Linear(3200, 1024)
        self.fc2 = nn.Linear(1024, 512)
        self.fc3 = nn.Linear(512, 1)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.pool(x)
        x = self.conv2(x)
        x = F.relu(x)
        x = x.view(-1, 3200)
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        return x
While running this network I am getting the following error:
input and target shapes do not match: input [400 x 1], target [200 x 1]
The target shape is [200 x 1] because I have used a batch size of 200. I found that if I change self.fc1 = nn.Linear(3200, 1024) and x = x.view(-1, 3200) from 3200 to 6400, my code runs without any error.
Similarly, it will throw the error input and target shapes do not match: input [100 x 1], target [200 x 1] if I put 12800 instead of 6400.
Now my doubt is that I am not able to understand the reason behind this. If I am feeding 200 images into my network, why is the input shape affected when I change these parameters moving from the convolutional layer to the fully connected layer? I hope I have stated my doubt clearly; if anybody has any questions, please ask. Any help is appreciated. Thanks in advance. |
st100298 | Could you print the shape of x just before the .view call?
I think 3200 is the wrong number here and thus your batch size will increase.
You are pushing “everything left” into the batch dimension using view(-1, 3200). So if x has 6400 features, the batch dimension will be doubled.
You could use x = x.view(x.size(0), -1) instead, which will give you a size mismatch error in the next linear layer.
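To make that concrete, a self-contained illustration (the activation shape [200, 64, 10, 10] is an assumption that happens to match the reported numbers, e.g. for 32x32 input patches):

import torch

x = torch.randn(200, 64, 10, 10)  # batch of 200, 64 channels, 10x10 maps

flat = x.view(x.size(0), -1)      # batch dim stays fixed
print(flat.shape)                 # torch.Size([200, 6400])

wrong = x.view(-1, 3200)          # leftover features spill into the batch dim
print(wrong.shape)                # torch.Size([400, 3200])

That [400, ...] is exactly the mismatched input size from the error message. |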
st100299 | I have a model (nn.Module) class, and inside this class I create a random tensor using torch.randn(). When I call model.to(device), where device is cuda, the random tensor doesn’t get moved to cuda. Is this behavior correct? How should I do it without passing the device to the model?
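A minimal sketch of the situation and the usual fix (MyModel is a hypothetical stand-in): plain tensor attributes are not moved by .to(device), but tensors registered via register_buffer are.

import torch
import torch.nn as nn

class MyModel(nn.Module):
    def __init__(self):
        super(MyModel, self).__init__()
        self.plain = torch.randn(10)                    # stays on CPU after .to()
        self.register_buffer('noise', torch.randn(10))  # moved by .to()

    def forward(self, x):
        return x + self.noise

model = MyModel().to('cuda')
print(model.plain.device)  # cpu
print(model.noise.device)  # cuda:0

Registered buffers are also saved in the state_dict, which is usually what you want for fixed tensors.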
Thanks |