st100100 | I encountered a strange issue; a simplified version of my code is below:
import torch
import torch.nn as nn
import math
class MyLinear(nn.Module):
    def __init__(self, nin, nout):
        super(MyLinear, self).__init__()
        self.nout = nout
        self.nin = nin
        self.weight = nn.Parameter(torch.randn(self.nout, self.nin))
        self.reset_parameters()

    def reset_parameters(self):
        stdv = 1. / math.sqrt(self.weight.size(1))
        self.weight.data.uniform_(-stdv, stdv)

    def forward(self, x):
        # my_regularization = torch.abs(self.weight).mean().reshape(1)
        my_regularization = torch.abs(self.weight).mean()
        return torch.nn.functional.linear(x, self.weight), my_regularization
model = MyLinear(10, 1).cuda()
model = nn.DataParallel(model)
criterion = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, weight_decay=0.01, momentum=0.1)
for i in range(100):
    data = torch.randn(100, 10).cuda()
    target = torch.randn(100, 1).cuda()
    output, my_regularization = model(data)
    print(output.shape, my_regularization.shape)
    loss = criterion(output, target)
    loss = loss + my_regularization
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
I’ve implemented a special version of the Linear layer with a customized regularization term.
My Linear layer returns both the output and the regularization term, which is optimized as part of the loss function.
When I employ nn.DataParallel to perform multi-GPU training, the program gives an error:
RuntimeError: dimension specified as 0 but tensor has no dimensions
The error indicates that my_regularization term has no axis.
But when I reshape it from a scalar to a vector with .reshape(1), another error occurs:
RuntimeError: grad can be implicitly created only for scalar outputs
Because my_regularization strangely has shape=[2], which I think should be [1].
I have 2 GPUs on board and my_regularization may be the concatenation of both.
But why doesn’t the shape of output change from [100, 1] to [200, 1] as well?
In my opinion they are both outputs of the MyLinear layer, so what causes their different behaviours?
Could you folks give me any hints about solving this issue? |
st100101 | I have the same error. Did you ever resolve it? To me, it seems like each GPU produces its own scalar output. |
st100102 | Make sure the model outputs are tensors, NOT scalars.
If you do need to output a scalar, reshape it with output.reshape([1]). |
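A minimal sketch of that workaround, applied to the code from the first post (the final .mean() over the gathered per-GPU values is an added assumption about how they should be reduced):
def forward(self, x):
    # return a 1-element tensor instead of a 0-dim scalar so DataParallel can gather it
    my_regularization = torch.abs(self.weight).mean().reshape(1)
    return torch.nn.functional.linear(x, self.weight), my_regularization

# in the training loop: DataParallel gathers one value per GPU, so reduce them before adding
output, my_regularization = model(data)
loss = criterion(output, target) + my_regularization.mean()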
st100103 | Hi,
I’m using PyTorch installed from source, and I got the error RuntimeError: cuda runtime error (9) : invalid configuration argument at /data/users/mabing/pytorch/aten/src/ATen/native/cuda/EmbeddingBag.cu:257 when running loss.backward().
And when I replace all cuda() with cpu(), it works perfectly.
Here is the test code; there may be some bug in the EmbeddingBag GPU code.
import torch.optim as optim
import torch
import torch.nn as nn
import numpy as np
from scipy.special import expit
import os
import time
class SkipGramModel(nn.Module):
    def __init__(self, component_size, word_size, dim):
        super(SkipGramModel, self).__init__()
        self.emb_size = dim
        self.component_size = component_size
        self.word_size = word_size
        self.atten_layers = nn.Embedding(word_size, 1)
        self.u_embeddings = nn.EmbeddingBag(component_size, dim)
        self.word_embeddings = nn.Embedding(word_size, dim, sparse=True)
        self.v_embeddings = nn.Embedding(word_size, dim, sparse=True)
        self.m = nn.Sigmoid()
        self.init_emb()

    def init_emb(self):
        initrange = 0.5 / self.emb_size
        self.word_embeddings.weight.data.uniform_(-initrange, initrange)
        self.u_embeddings.weight.data.uniform_(-initrange, initrange)
        self.v_embeddings.weight.data.uniform_(-0, 0)
        atten = torch.zeros([self.word_size, 5])
        atten[:, 0] += torch.log(torch.FloatTensor([4]))
        self.atten_layers.weight.data = atten

    def forward(self, word_in, component_in, word_out, offset):
        char_in = torch.cuda.LongTensor(component_in[0])
        redical_in = torch.cuda.LongTensor(component_in[1])
        com1_in = torch.cuda.LongTensor(component_in[2])
        com2_in = torch.cuda.LongTensor(component_in[3])
        offset1 = torch.cuda.LongTensor(offset[0])
        offset2 = torch.cuda.LongTensor(offset[1])
        offset3 = torch.cuda.LongTensor(offset[2])
        offset4 = torch.cuda.LongTensor(offset[3])
        attention = torch.softmax(self.atten_layers(word_in), dim=-1).unsqueeze(1)
        emb_uword = self.word_embeddings(word_in)
        emb_char = self.u_embeddings(char_in, offset1)
        emb_redical = self.u_embeddings(redical_in, offset2)
        emb_com1 = self.u_embeddings(com1_in, offset3)
        emb_com2 = self.u_embeddings(com2_in, offset4)
        emb_all = torch.stack((emb_uword, emb_char, emb_redical, emb_com1, emb_com2), 1)
        emb_vword = self.v_embeddings(word_out)
        emb_mixin = torch.bmm(attention, emb_all).squeeze(1)
        score = torch.mul(emb_mixin, emb_vword)
        score = torch.sum(score, dim=-1)
        score = self.m(score)
        return score
if __name__ == '__main__':
    model = SkipGramModel(364, 180, 100).cuda()
    optimizer = optim.SGD(model.parameters(), lr=0.025)
    Lossfunc = nn.BCELoss(reduction='sum')
    for _ in range(100):
        word_in = torch.cuda.LongTensor([2]*128)
        word_out = torch.cuda.LongTensor([2]*128)
        label = torch.cuda.FloatTensor([1]*128)
        component_in = [[3,5],[2,4,5],[2,3,4],[]]
        offset = [[0]*127+[1],[0]*127+[1],[0]*128,[0]*128]
        outs = model.forward(word_in, component_in, word_out, offset)
        loss = Lossfunc(outs, label)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step() |
st100104 | I found that this error is caused by an empty input tensor to the EmbeddingBag layers.
When I change component_in = [[3,5],[2,4,5],[2,3,4],[]] to component_in = [[3,5],[2,4,5],[2,3,4],[2]], it works.
But why doesn’t PyTorch with cuda() (cpu() runs correctly) support empty input tensors for EmbeddingBag layers? |
st100105 | I was trying to access the docs just now but the webpage either appears blank or looks like plain HTML. Does anyone else have problems opening the docs? https://pytorch.org/docs/stable/index.html |
st100106 | Hi, I wonder if there’s been a PyTorch implementation of,
Tunable Efficient Unitary Neural Networks (EUNN)
It’s something that definitely seems to be a solid piece of work!
@smth this seems like something FAIR must have in house already? You, Yann LeCun and Martin Arjovsky have been working on this for quite a while if I remember correctly? |
st100107 | There isn’t a PyTorch implementation of this publicly available as far as I know. |
st100108 | Thanks!
It’s in Tensorflow by one of the authors, Li Jing,
https://github.com/jingli9111/EUNN-tensorflow 29
Compared to LSTM, at least this is mathematically interpretable !
Multi-layer bi-directional LSTM works great, but you can’t do any theory on it? |
st100109 | Late to the party, but I will leave this here for anyone who bumps into this conversation.
The last few days I have been working on a pytorch implementation which can be found here:
GitHub: flaport/torch_eunn - A PyTorch implementation of an efficient unitary neural network (https://arxiv.org/abs/1612.05231)
The speed could probably be increased when PyTorch finally supports complex tensors |
st100110 | The PyTorch version 0.3.0 I am using gives me an error with the following line.
loss = torch.zeros(1).to(pred_traj_gt)
AttributeError: 'torch.FloatTensor' object has no attribute 'to'
What should I replace the code with? This is someone else’s code that I am trying to run in order to understand it. My GPU only allows me to use 0.3.0, so I cannot upgrade PyTorch. What should I replace the code with to keep the same functionality? |
st100111 | I cannot test it because I don’t have a 0.3 env atm, but you should have a .type() method, which you should be able to call like .type(pred_traj_gt.dtype) or something similar. |
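As an untested sketch of such a 0.3-compatible replacement (type_as predates the .to() API; if pred_traj_gt is a Variable, its .data tensor is used here):
loss = torch.zeros(1).type_as(pred_traj_gt.data)  # matches the tensor type (and GPU placement) of pred_traj_gt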
st100112 | I am adding a native function and having trouble using other utility methods within ATen. For example, see this simple native function.
This is the quick test I’m using:
import torch
s = torch.sparse.DoubleTensor(
    torch.LongTensor([[0,2,0], [0,2,1]]),
    torch.FloatTensor([[1,2],[3,4],[5,6]]),
    torch.Size([3,3,2]))
s.my_op()
Calling self._sparseDims() on line 153 works just fine as expected. This function is part of the public API and is dispatched to _sparseDims_sparse here 1, which calls _get_sparse_impl. And _get_sparse_impl is defined in this header, which is included in the file I’m defining my function in.
So the question is this: what magic am I missing to be able to use the same helper function in the same way in my native op? |
st100113 | For reference, this is the native function I added:
aten/src/ATen/native/TensorShape.cpp
Tensor my_op(const Tensor& self) {
  if (self.is_sparse()) {
    printf("%ld\n", self._sparseDims()); // prints "2"
    printf("%lld\n", _get_sparse_impl(self)->sparseDims()); // prints "140495002593472" on CPU; segfault on CUDA
  }
}
native_functions.yaml
func: my_op(Tensor self) -> Tensor
variants: method |
st100114 | Hi there, I’m probably missing something simple, but I can’t figure it out.
The goal is to test the MNIST example with a custom image dataset:
from torchvision.datasets import ImageFolder
from torchvision.transforms import ToTensor
data = ImageFolder(root='PytorchTestImgDir', transform=ToTensor())
print(data.classes)
from torch.utils.data import DataLoader
train_loader = DataLoader(data, batch_size=1)
but the result is a:
RuntimeError: Need input of dimension 4 and input.size[1] == 1 but got input to be of shape: [1 x 3 x 138 x 138] at /Users/soumith/anaconda/conda-bld/pytorch-0.1.10_1488750409207/work/torch/lib/THNN/generic/SpatialConvolutionMM.c:47 |
st100115 | Looks like the MNIST example’s model expects one color channel (MNIST is black and white) but the images you’re providing are being loaded with three (RGB) channels. |
st100116 | Hi,
I would like to get some general ideas on exporting an ONNX model when the model accepts both an input sequence and output labels as arguments. How would you set up the Variables to traverse the model and export it to ONNX?
Thanks. |
st100117 | I ran the following code, got an error.
Ryohei_Waitforit_Iwa:
a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
torch.where(a[:, 0] == 1)
TypeError: where() missing 2 required positional argument: “input”, “other”
NumPy allows us to use the where function with fewer than 2 arguments, but PyTorch does not.
What I want to do is to select the rows corresponding to the value I specify.
I would really appreciate it if you could tell me how to use this function. |
st100118 | From looking at numpy’s doc 6 when only a single argument is given, it is equivalent to condition.nonzero(). So just do (a[:, 0] == 1).nonzero() ? |
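For the row-selection use case from the question, a small added sketch (a is the 2x3 tensor from the original post):
a = torch.Tensor([[1, 2, 3], [4, 5, 6]])
idx = (a[:, 0] == 1).nonzero()   # indices of rows whose first column equals 1, shape [k, 1]
rows = a[idx.squeeze(1)]         # selects those rows -> tensor([[1., 2., 3.]])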
st100119 | Hello.
I think there is a bug in in-place Bernoulli sampling. I put the code that checks for it here. The code samples using both the in-place and the non-in-place mode.
import torch
import numpy
print "----BERNOULLI----"
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a=torch.zeros((10,))
print a.bernoulli_().numpy()
a=torch.zeros((10,))
print a.bernoulli_().numpy()
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a=torch.zeros((10,))
print a.bernoulli_().numpy()
a=torch.zeros((10,))
print a.bernoulli_().numpy()
print "--------------------------"
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a=torch.zeros((10,))
print torch.bernoulli(a).numpy()
print torch.bernoulli(a).numpy()
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
print torch.bernoulli(a).numpy()
print torch.bernoulli(a).numpy()
print "----NORMAL----"
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a=torch.zeros((10,))
print a.normal_().numpy()
a=torch.zeros((10,))
print a.normal_().numpy()
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a=torch.zeros((10,))
print a.normal_().numpy()
a=torch.zeros((10,))
print a.normal_().numpy()
print "--------------------------"
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a=torch.zeros((10,))
print torch.normal(a).numpy()
print torch.normal(a).numpy()
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
print torch.normal(a).numpy()
print torch.normal(a).numpy() |
st100120 | I think there is not sufficient documentation available for these APIs. According to the code here, the probability is taken as 0.5 in case no parameter is provided for p.
If you change the code as below, it seems to give the same functionality as the non-in-place operator.
import torch
import numpy
torch.manual_seed(seed=1)
torch.cuda.manual_seed(seed=1)
a=torch.zeros((10,))
print(a.bernoulli_(a).numpy())
Note the parameter passed to bernoulli_(). |
st100121 | Yes, I agree with you. I think the documentation should be clearer: if we follow the torch.bernoulli() docs, it seems we fill vector “a” with probabilities taken from that vector, or at least that is what I understood and that is how torch.normal() works. |
st100122 | Agreed, both the in-place and non-in-place versions need arguments for the probabilities, which is not clear in the docs. |
st100123 | import torch
import numpy as np
a = np.zeros((3,3))
b = torch.from_numpy(a).type(torch.float)
AttributeError Traceback (most recent call last)
in ()
3
4 a = np.zeros((3,3))
----> 5 b = torch.from_numpy(a).type(torch.float)
AttributeError: module ‘torch’ has no attribute ‘float’ |
st100124 | I used conda install pytorch=0.1.12 cuda75 -c pytorch to install, following https://pytorch.org/previous-versions/. Do you mean that torch.float and torch.FloatTensor are the same? |
st100125 | For most purposes yes they are the same. torch.float did not exist in 0.1 though so you will need to upgrade pytorch to be able to use it. |
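A small sketch of an equivalent that should already work on 0.1.x (the .float() shortcut predates the torch.float dtype object):
import numpy as np
import torch

a = np.zeros((3, 3))
b = torch.from_numpy(a).float()   # same result as .type(torch.FloatTensor)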
st100126 | I’m training a Transformer language model with a vocabulary size of 5000 using a single M60 GPU (with about 7.5 GB of actually usable memory).
The number of tokens per batch is about 8000, and the hidden dimension to the softmax layer is 512. In other words, the input to nn.Linear(256, 5000) is of size [256, 32, 256]. So, if I understand correctly, a fully-connected layer theoretically consumes 5000x8000x512x4=81.92GB of GPU memory for a forward pass (4 is for float32). But the GPU performed the forward and backward passes without any problem, and it says the GPU memory usage is less than 7GB in total.
What’s causing this? |
st100127 | Solved by samarth-robo in post #2
Memory for network parameters:
(256*5000 + 5000) * 4 * 2 = 10 Mbytes, where the factor of 2 is because the network has 1 tensor for weights and 1 tensor for gradients, and the additional 5000 is for biases.
Memory for data:
8192 * 512 * 4 * 2 = 32 Mbytes
So by those rough calculations, the memor… |
st100128 | Memory for network parameters:
(256*5000 + 5000) * 4 * 2 = 10 Mbytes, where the factor of 2 is because the network has 1 tensor for weights and 1 tensor for gradients, and the additional 5000 is for biases.
Memory for data:
8192 * 512 * 4 * 2 = 32 Mbytes
So by those rough calculations, the memory consumption for the softmax layer is roughly 42 Mbytes. |
st100129 | Thank you very much. Apparently, from your calculation, I was including an excessive factor (5000) in the “Memory for data” part. |
st100130 | Hi, what I understood from your answer is that the number of parameters (weights and biases) are stored twice in pytorch as per your Memory for network parameters. However, I didn’t quite get the Memory for data part. Shouldn’t the total calculation for a generic network in Pytorch be something like this so that it takes care of both the output features/activations at each layer of the network and the network parameters, each of which is stored twice?
total_gpu_usage = 2 x batch_size x (input_data_size + \
sum(feature_size_at_each_network_layer)) + 2 x parameter_size |
st100131 | What is the difference between
a) torch.from_numpy(a).type(torch.FloatTensor)
b) torch.from_numpy(a).type(torch.float) ?
When I installed PyTorch via the command conda install pytorch torchvision -c pytorch, (b) works. When I installed PyTorch via the command conda install pytorch=0.1.12 cuda75 -c pytorch on another PC, (b) does not work but (a) works. |
st100132 | Solved by albanD in post #2
Hi,
torch.float has been added recently, it was not in old releases like 0.1.xx, this is why it does not work. |
st100133 | Hi,
torch.float has been added recently, it was not in old releases like 0.1.xx, this is why it does not work. |
st100134 | The documentation for nn.CrossEntropyLoss states
The input is expected to contain scores for each class.
input has to be a 2D Tensor of size (minibatch, C).
This criterion expects a class index (0 to C-1) as the target for each value of a 1D tensor of size minibatch
However the following code appears to work:
loss = nn.CrossEntropyLoss()
input = torch.randn(15, 3, 10)
input = Variable(input, requires_grad=True)
target = torch.LongTensor(15,10).random_(3)
target = Variable(target)
output = loss(input, target)
So my input is a 3D tensor and my targets is a 2D tensor.
Am I right in thinking that CrossEntropyLoss is interpreting my input as minibatch, N_CLASSES, SEQ_LEN and my targets as minibatch, SEQ_LEN?
The reason I am trying to do this is that I am doing multiclass classification, each element of my minibatch is a sequence of 10 elements which can be classified into one of 3 classes. |
st100135 | It seems you are right.
I tested it with this small example:
loss = nn.CrossEntropyLoss(reduce=False)
input = torch.randn(2, 3, 4)
input = Variable(input, requires_grad=True)
target = torch.LongTensor(2,4).random_(3)
target = Variable(target)
output = loss(input, target)
loss1 = F.cross_entropy(input[0:1, :, 0], target[0, 0])
loss2 = F.cross_entropy(input[0:1, :, 1], target[0, 1])
loss1 and loss2 give the first two elements of output, so apparently it’s working. |
st100136 | Hi Vellamike,
I still don’t get why you would want to do that.
CrossEntropy simply compares the scores that your model outputs against a one-hot encoded vector where the 1 is in the index corresponding to the true label.
If your input are sequences of length 10, then you need to build a model that accepts 10 inputs and apply a tranformation into 3 outputs, which will be your feature vector or scores for that classes. Then you can apply softmax to normalize them.
Now is when you use CELoss to compute the difference between the output of your model and the true labels.
I hope it helps.
Pablo |
st100137 | @PabloRR100 Sorry for not answering earlier - I just saw your reply.
The reason I want to do that is that is I am doing a sequence-to-sequence network. My labels are sequences themselves - I have one label per sample in my sequence. So each sequence does not fall into one of 3 classes, each element of the sequence falls into one of three classes. So for a sequence of length 10 I have a rank two tensor (dim 3x10) - you can think of this as 10 one hot encoded vectors of length 3. |
st100138 | When the mini-batch size is 1, it’s often the case that building the model, calling outputs.backward() and optimizer.step() themselves are more time consuming than the actual gradient computation. Do you have any suggestions? I know the coming JIT support can potentially resolve the model building issue, but the other two steps are still significant… |
st100139 | Hi,
The jit will help both for model building and backward pass.
Unfortunately I don’t know of any way to speed up the optimizer.step() further. |
st100140 | Thanks! If backward() is also supported, then I think the doc 1 has put this wrong: It does not say it supports torch Variable type. |
st100141 | Hi,
Tensors and Variables have been merged a while ago now. So it supports Tensors, both the ones that requires_grad and the ones that don’t. |
st100142 | I originally use caffe, and now I have convert model trained by caffe to pytorch. And in caffe I use lmdb package the training images. I also read these lmdb in pytorch. But I do not know how to read the images from lmdb in batchsize. Now I only read one by one.
Could anybody give me some suggestion? |
st100143 | Hi,
I have some questions about to() methods of PyTorch when using device as the arguments
I have a class abc
class abc(nn.Module):
    def __init__(self):
        super(abc, self).__init__()
        self.linear2 = nn.Linear(10, 20)
        self.linear1 = nn.Linear(20, 30)
        self.a_parmeters = [self.a]

    def forward(self, inp):
        h = self.linear1(inp)
        return h
Then an object
net = abc()
two devices are available
gdevice = torch.device('cuda')
cdevice = torch.device('cpu')
And I also have two tensors
x=torch.randn(3,4)
gx = torch.randn(3,4,device=gdevice)
Now, x is on cpu device and gx is on gpu device
Q1. If I assign x to the CPU device (note that x is already on the CPU), e.g. y = x.to(cdevice), are x and y the same tensor? I mean, are x and y the same tensor with the same memory, only the name being different (like a reference)? If so, does it mean we only add another name for x without allocating extra memory?
Q2. Similar to Q1, if gy = gx.to(gdevice), does it mean we only add another name for gx without allocating extra memory for gy?
Q3. One strange thing is, if I send net to the GPU by gnet = net.to(gdevice), net will also be on the GPU device:
In [98]: net = abc()
In [99]: net.linear1.weight.device
Out[99]: device(type='cpu')
In [100]: gnet = net.to(gdevice)
In [101]: gnet.linear1.weight.device
Out[101]: device(type='cuda', index=1)
In [102]: net.linear1.weight.device
Out[102]: device(type='cuda', index=1)
However, if I use to for tensor, the original tensor will not changed to the new device:
In [107]: x = torch.randn(3,4)
In [108]: x.device
Out[108]: device(type='cpu')
In [109]: gx = x.to(gdevice)
In [110]: gx.device
Out[110]: device(type='cuda', index=1)
In [111]: x.device
Out[111]: device(type='cpu')
Can anyone explain the difference between the tensor and the model object?
Q4. How can I make PyTorch code transparent between a CPU-only computer and a GPU-capable machine? My idea is to use a device variable:
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
And use
net = Model().to(device)
where Model() is defined on cpu by default.
What I am not sure is whether this will allocate another copy of memory for net if the Model is defined on CPU and device is also CPU? Or some extra computation if the device is the same?
Is there some better way? I found this solution on the forum and the internet.
Thanks very much!
Yin |
st100144 | Q1 + Q2: if you don’t call .clone() or manually make a deepcopy, pytorch tries to use the same storage if possible. So the answer to both questions: usually the same storage will be used.
Q3: for instances of torch.nn.Module the changes are made in the self variable (python uses something similar to call by reference) and you wouldn’t have to reassign it at all. After this operation the self variable is returned to ensure a proper API. Since net and gnet are references to the same internal variable, changing one of them will also change the other one.
Q4:
If the model is already on the device you want to push it to, the .to() operation becomes a no-op (just like changing to the same dtype or calling .cuda() or .cpu() when it is already there).
So yes: usually the method you suggested is the way to go and usually CPU and GPU should cover nearly the same operations (if you don’t use very experimental ones) if you use only torch functions. |
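A small added example illustrating the no-op behaviour for Q1/Q2:
x = torch.randn(3, 4)
y = x.to(torch.device('cpu'))        # x already lives on the CPU
print(y is x)                        # True: .to() returns self when nothing changes
print(y.data_ptr() == x.data_ptr())  # True: same underlying storage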
st100145 | Pretty much the question in the title.
Someone asked this a year ago, but I think they did not receive a satisfying answer. None of the official pytorch examples use clean code.
For example, the pytorch DataLoader uses a batchsize parameter, but if someone writes their transformations in their dataset class, then that batchsize parameter is no longer adhered to, because the dataloader would then be generating a batch of size batchsize*however_many_transformations_are_applied_to_a_single_sample
Certainly this must have been thought of, so can someone please point me in the direction of a tutorial or example that addresses this discrepancy?
thanks! |
st100146 | Could you point to the examples where you have the feeling the code is not clean?
Usually random rotations, distortions etc. are used per-sample, so that the batch size stays the same.
There are a few exceptions, e.g. FiveCrop 6 which return five crops for a single image.
You can just use the torchvision.transforms 30 on a single image and return it.
I’m not sure, what the Keras generator does differently, so could you explain your use case a bit, e.g. what kind of transformation you want to use and which give you problems using them? |
st100147 | emcenrue:
in their dataset class, then that batchsize parameter is no longer adhered to, because the dataloader would then be generating a batch of size batchsize*however_many_transformations_are_applied_to_a_single_sample
I don’t think this is true. |
st100148 | Are you proposing that the dataset always should produce a single sample x, y for each call to getitem?
If so, how does one augment the dataset so that it incorporates random rotations/crops/shifts like here:
https://keras.io/preprocessing/image/ 30
?
The only solution I can see is that it will randomly select a sample and then randomly select a transformation, and then produce a single sample |
st100149 | I mentioned that I don’t think any of the examples are clean.
Also, if the random rotations/distortions/etc. are used per-sample, does that mean that the original sample could potentially never be used for training? In keras, the augmentation produces additional samples. Is this not the case for pytorch? In other words, is there any way to train on the original sample, as well as whatever transformations I want to apply to the data? For context, I don’t want to concatenate a transformed/modified dataset to the original dataset prior to training |
st100150 | The usual approach is to just implement the code to load and process one single sample, yes.
That makes it quite easy to write your own code as you don’t have to take care of the batching.
The DataLoader will take care of it even using multiprocessing.
If you want to apply multiple transformations on your data, you could just compose them:
data_transform = transforms.Compose([
    transforms.RandomSizedCrop(224),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225])
])
dataset = MyDataset(image_paths, transforms=data_transform)
The transformations won’t be randomly selected, but applied in the order you’ve created them.
If you want to pick a transformation randomly, you can use RandomChoice 8.
Otherwise the transformation will be applied in order as you pass them (or apply them in your Dataset).
If you would like to rotate your images before flipping them (for whatever reason), just change the order of your transforms.
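The MyDataset used above is not defined in this thread; a minimal sketch of what it could look like (image_paths is an assumed list of file paths, and labels are omitted for brevity):
from PIL import Image
from torch.utils.data import Dataset

class MyDataset(Dataset):
    def __init__(self, image_paths, transforms=None):
        self.image_paths = image_paths
        self.transforms = transforms

    def __getitem__(self, index):
        # load and transform a single sample; the DataLoader takes care of batching
        img = Image.open(self.image_paths[index]).convert('RGB')
        if self.transforms is not None:
            img = self.transforms(img)
        return img

    def __len__(self):
        return len(self.image_paths)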
emcenrue:
does that mean that the original sample could potentially never be used for training? In keras, the augmentation produces additional samples.
I think you are also wrong on this point.
Generate batches of tensor image data with real-time data augmentation.
This does not sound as if the original samples are created before the augmented ones.
As I’m not that familiar with Keras, feel free to correct me, but using this code I cannot get the original sample from the DataGenerator:
data_dir = './dummy_image/'
image = Image.open(data_dir + 'class0/dummy_image.jpg')
im_arr = np.array(image)
datagen = ImageDataGenerator(
    rescale=None,
    shear_range=0.2,
    zoom_range=0.2,
    horizontal_flip=True)
train_generator = datagen.flow_from_directory(
    data_dir,
    target_size=im_arr.shape[:-1],
    batch_size=1,
    class_mode='binary')
x_1, _ = train_generator.next()
f, axarr = plt.subplots(1, 2)
axarr[0].imshow(x_1[0].astype(np.uint8))
axarr[1].imshow(im_arr)
plt.show()
for idx, (x, y) in enumerate(train_generator):
    x = x.astype(np.uint8).squeeze()
    print('Iter {}, Abs error {}, x.min {}, x.max{}, im.min {}, im.max {}'.format(
        idx, np.mean(np.abs(x-im_arr)), x.min(), x.max(), im_arr.min(), im_arr.max()))
    if np.allclose(x, im_arr):
        break
plt.imshow(np.abs(x-im_arr))
plt.show()
Note that I’ve created two folders (class0, class1) with the same single image inside both of them. |
st100151 | I want to create a mask tensor given a list of lengths. This would mean that there should be k ones and all other zeros for each row in the tensor.
eg:
input :
[2, 3, 5, 1]
output
[1 1 0 0 0
1 1 1 0 0
1 1 1 1 1
1 0 0 0 0]
Is the approach below the most efficient way, or is there something better?
seq_lens = torch.tensor([2,3,5,1])
max_len = torch.max(seq_lens)
mask_tensor = torch.Tensor()
for length in seq_lens:
    new_row = torch.cat((torch.ones(length), torch.zeros(max_len-length))).unsqueeze(0)
    mask_tensor = torch.cat((mask_tensor, new_row), 0)
print(mask_tensor) |
st100152 | Solved by justusschock in post #2
You could use binary masking to achieve this:
seq_lens = torch.tensor([2,3,5,1]).unsqueeze(-1)
max_len = torch.max(seq_lens)
# create tensor of suitable shape and same number of dimensions
range_tensor = torch.arange(max_len).unsqueeze(0)
range_tensor = range_tensor.expand(seq_lens.size(0), range_… |
st100153 | You could use binary masking to achieve this:
seq_lens = torch.tensor([2,3,5,1]).unsqueeze(-1)
max_len = torch.max(seq_lens)
# create tensor of suitable shape and same number of dimensions
range_tensor = torch.arange(max_len).unsqueeze(0)
range_tensor = range_tensor.expand(seq_lens.size(0), range_tensor.size(1))
# until this step, we only created auxiliary tensors (you may already have from previous steps)
# the real mask tensor is created with binary masking:
mask_tensor = (range_tensor < seq_lens)  # `<` keeps the positions below each length, matching the requested mask |
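As a quick added check, the boolean mask can be cast to the 0/1 float matrix shown in the question:
print(mask_tensor.float())
# tensor([[1., 1., 0., 0., 0.],
#         [1., 1., 1., 0., 0.],
#         [1., 1., 1., 1., 1.],
#         [1., 0., 0., 0., 0.]])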
st100154 | Hi,
TLDR: Is there a flag or some configuration option to make FULL_CAFFE2=1 python3 setup.py install link to a custom BLAS library for optimizations, instead of using Eigen?
For background, we have built a novel data-parallel accelerator, and have compiled an optimized BLAS library targeting this architecture. We would like to link PyTorch+Caffe2 with this library, such that as many NN routines as possible fall back to calling our optimized BLAS routines.
As I understand it, the default CPU-only Pytorch build leverages Eigen for generating optimized BLAS kernels. Is there some way to tear out Eigen, and replace those kernel calls with my own BLAS routines? In Caffe1 this is simple, by just changing the value of BLAS_DIR in the Makefile. In Pytorch+Caffe2 I have looked at the setup.py script, but I do not see a build flag which might be relevant.
Thanks |
st100155 | Is there a good reason for this? I would like to use general hashable objects as keys in my ParameterDict, just as I would in a normal Python dict.
Currently, the Pytorch documentation lies by claiming “ParameterDict can be indexed like a regular Python dictionary”, since the indexing must be done only with strings. |
st100156 | There is an issue 11 about a similar topic on github (ModuleDict instead of ParameterDict).
If you like, you could describe your use case there. |
st100157 | Thanks, I’m also just going with the str(int) solution for now, it’s probably not that much of a performance-loss. Still a bit ugly is all. |
st100158 | Sure, I get it, and I think it’s a valid discussion, since there seem to be a few users noting this behavior. |
st100159 | The main reason is that we need a string key for the registered submodule, for purposes like saving the state dict. Many things are hashable, but not all of them hash to the same value when you run the same code in a new process. |
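A small sketch of the str(int) workaround mentioned above (the names are illustrative):
import torch
import torch.nn as nn

pdict = nn.ParameterDict()
for i in range(3):
    pdict[str(i)] = nn.Parameter(torch.randn(4))  # stringify the integer key before registering

p = pdict[str(1)]  # look-ups go through the same str() conversion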
st100160 | I have been making some checks on the softmax, log softmax and negative log likelihood with PyTorch, and I have seen there are some inconsistencies. As an example, suppose a logit output for the CIFAR-100 dataset in which one of the classes has a very high logit compared with the rest. For this, the softmax function outputs probability 1 for that class and 0 for the rest, and for that reason we should expect a cross-entropy error of 0 (in case we predict the true label) and 1 when not (as the cross-entropy computes log(softmax[class])).
However, I have realized that if I perform a log_softmax operation from the nn module (where I should get a 0 where the softmax has 1, and infinity, or a really high value, elsewhere, as I expect we avoid computing the logarithm of 0), I get an inconsistency. In this case the log softmax outputs a 0 for the class with high probability (as expected) but returns different, very negative numbers for the rest. This is inconsistent for two reasons:
-first: If one class has probability 1 and the rest 0, we should expect that class to have a log_softmax of 0 and the rest to have an equal log probability.
-second: If we assume that the output of nn.CrossEntropy is rounded to 1 (but we really have 0.999999 for that class and values like 0.000000001 or 0.0000000000009 for the rest), we could not have a 0 in the log softmax output (we should expect a value near zero). I now put some of the outputs:
LOGIT SPACE:
[-151881.58 -53958.38 382600.28 -208273.06 -682387.7
313643.06 -174599.31 314737.03 -47761.547 210986.7
-121455.92 65831.29 253933.14 107649.18 -179261.78
-9338.262 -226704.14 -197389.72 -88550.125 -225601.8
12020.757 305235.8 31988.535 -133836.75 -124994.27
124390.14 67518.836 -231378.08 311258. 92127.34
255807.5 531698. -64797.055 -234956.02 145733.86
383663.34 157211.12 410751.75 -307850.53 119320.98
-494586.7 -71108.56 -217024.64 -343667.8 182377.83
-196660.45 378547.53 -226750.02 229103.94 -76420.19
89305.65 800864.4 284610.66 -144088.16 -356096.2
87200.52 -347407.84 -244253.73 -133480.6 219508.03
-145519.03 62401.516 -79842.984 -94347.93 -371417.62
412408.22 -26637.191 120584.336 -247938.69 -58618.914
15230.674 176264.03 -91443.67 150178.55 516807.47
-144580.42 101580.055 302416.16 279529.4 -202979.7
200805.12 -81993.945 72215.734 -25153.984 -8138.0186
339307.25 -78513.84 403537. -385725.25 319416.94
-292361.7 23827.395 -386195.25 126718.26 169128.44
777514.5 473938.72 126203.87 99491.91 -239480.5 ]
OUTPUT FROM nn.SOFTMAX
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.
0. 0. 0. 0.]
OUTPUT OF LOG SOFTMAX
[[ -952745.94 -854822.75 -418264.1 -1009137.44 -1483252.
-487221.3 -975463.7 -486127.34 -848625.94 -589877.7
-922320.3 -735033.06 -546931.25 -693215.2 -980126.1
-810202.6 -1027568.5 -998254.1 -889414.5 -1026466.2
-788843.6 -495628.56 -768875.8 -934701.1 -925858.6
-676474.25 -733345.56 -1032242.44 -489606.38 -708737.
-545056.9 -269166.38 -865661.44 -1035820.4 -655130.5
-417201.03 -643653.25 -390112.62 -1108714.9 -681543.4
-1295451. -871972.94 -1017889. -1144532.2 -618486.56
-997524.8 -422316.84 -1027614.4 -571760.44 -877284.56
-711558.75 0. -516253.72 -944952.5 -1156960.5
-713663.9 -1148272.2 -1045118.1 -934345. -581356.4
-946383.4 -738462.9 -880707.4 -895212.3 -1172282.
-388456.16 -827501.56 -680280.06 -1048803. -859483.3
-785633.7 -624600.4 -892308.06 -650685.8 -284056.9
-945444.8 -699284.3 -498448.22 -521334.97 -1003844.06
-600059.25 -882858.3 -728648.6 -826018.4 -809002.4
-461557.12 -879378.25 -397327.38 -1186589.6 -481447.44
-1093226. -777037. -1187059.6 -674146.1 -631735.94
-23349.875 -326925.66 -674660.5 -701372.5 -1040344.9 ]]
As we can see the output of log softmax assigns a 0 and that is inconsistent because if probability is 0 we should have 0 for the rest and thus have the same value for the rest of the log softmax (and that is what nn.Softmax outputs). |
st100161 | I think that’s the whole point of why we use log softmax instead of softmax, i.e., numerical stability.
If we recall the softmax formula, It involves exponential powers. When we have large numbers (as in the array you mentioned), due to limited numerical precision of our machine, the softmax just kills the precision of numbers.
When there is exponentiation involved, log comes to the rescue to avoid the numbers blowing up. Also, In our case, softmax is mostly used in conjunction with CrossEntropyLoss function which needs log likelihood (log probability). By keeping these two reasons in mind, the researchers had come up with a clever trick to directly take log to avoid numerical precision errors.
A glimpse of softmax-cross-entropy derivation:
DeepNotes (28 May 17): Classification and Loss Evaluation - Softmax and Cross Entropy Loss
Lets dig a little deep into how we convert the output of our CNN into probability - Softmax; and the loss measure to guide our optimization - Cross Entropy. |
st100162 | Hello, thanks for your reply.
That is not the point of my question. For example, for computing the softmax, people use a trick for numerical stability and you can get accurate softmax post-activations without using the log. For example, a CUDA kernel that implements this trick is:
//Softmax->implemented for not saturating
__global__ void Softmax(float* E, float* N, float* auxE, long int sample_dim, long int n_vals)
{
    float C_value = 0;
    int thread_id_x = threadIdx.x + blockIdx.x * blockDim.x;
    float maxCoef = E[thread_id_x * sample_dim];
    float actualCoef = 0;
    if (thread_id_x < n_vals)
    {
        ///REALLY HIGH PROBABILITY OF BRANCH DIVERGENCE.
        //Description: All of the threads that lie under one condition execute first (stalling the others) and then next. Assuming one clock cycle per operation we would need double time to execute one warp.
        //Warping divergence: study reduction options for getting the maximum
        #pragma omp parallel for
        for (int cA = 1; cA < sample_dim; cA++)
            if (E[thread_id_x * sample_dim + cA] > maxCoef)
                maxCoef = E[thread_id_x * sample_dim + cA];
        //No warping divergence as all threads execute the same
        #pragma omp parallel for
        for (int cA = 0; cA < sample_dim; cA++)
        {
            actualCoef = expf(E[thread_id_x * sample_dim + cA] - maxCoef);
            auxE[thread_id_x * sample_dim + cA] = actualCoef;
            C_value += actualCoef;
        }
        #pragma omp parallel for
        for (int cA = 0; cA < sample_dim; cA++)
            N[thread_id_x * sample_dim + cA] = auxE[thread_id_x * sample_dim + cA] / C_value;
    }
}
And it does not use the log at all. My observation is different. |
st100163 | The max trick that you have mentioned (in the C code) helps when the logit values are moderately high (see the max trick here). But the example numbers that you have provided in your question are so large that even the ‘max trick’ will fail in this case (due to the exponential of large negative numbers, for example e^(-151881.58-800864.4)).
import torch, numpy as np
x = torch.tensor([-151881.58, -53958.38, 382600.28, -208273.06, -682387.7,
313643.06, -174599.31, 314737.03, -47761.547, 210986.7,
-121455.92, 65831.29, 253933.14, 107649.18, -179261.78,
-9338.262, -226704.14, -197389.72, -88550.125, -225601.8,
12020.757, 305235.8, 31988.535, -133836.75, -124994.27,
124390.14, 67518.836, -231378.08, 311258., 92127.34,
255807.5, 531698., -64797.055, -234956.02, 145733.86,
383663.34, 157211.12, 410751.75, -307850.53, 119320.98,
-494586.7, -71108.56, -217024.64, -343667.8, 182377.83,
-196660.45, 378547.53, -226750.02, 229103.94, -76420.19,
89305.65, 800864.4, 284610.66, -144088.16, -356096.2,
87200.52, -347407.84, -244253.73, -133480.6, 219508.03,
-145519.03, 62401.516, -79842.984, -94347.93, -371417.62,
412408.22, -26637.191, 120584.336, -247938.69, -58618.914,
15230.674, 176264.03, -91443.67, 150178.55, 516807.47,
-144580.42, 101580.055, 302416.16, 279529.4, -202979.7,
200805.12, -81993.945, 72215.734, -25153.984, -8138.0186,
339307.25, -78513.84, 403537., -385725.25, 319416.94,
-292361.7, 23827.395, -386195.25, 126718.26, 169128.44,
777514.5, 473938.72, 126203.87, 99491.91, -239480.5])
# normal softmax
x.softmax(dim=0)
# softmax with max trick
(x-torch.max(x)).softmax(dim=0)
Also, this trick is already implemented in Pytorch (for example, here 24). Regardless of this, softmax for the (large) numbers in your example is impossible to be computed even in CPU (with double precision). |
st100164 | I get the runtime error “DynamicCUDAInterface::get_device called before CUDA library was loaded” upon trying to call
torch.nn.LSTM(…)
This seems to only happen when I try to do something on a machine without CUDA. However I have the cpu-version of Pytorch installed so I’m not sure why I’m getting this error. |
st100165 | For some reason this error happens when you try to pass an array instead of a scalar for (hidden_size = ). I changed that and it was fixed.
I think the error message is odd. |
st100166 | I am trying to check the count of element-wise equality between two tensors. I have narrowed my issue down to the following short example. The last line results in an “Illegal instruction” message and crashes out of Python.
import torch
torch.manual_seed(1)
x = torch.randint(0, 5, (1000, ))
x.eq(x).sum()
I am using Python 3.6.4 in iPython 6.5.0 with torch 0.4.1 on Windows 10. |
st100167 | I can’t repro this on Linux but I will open an issue on GitHub for you: https://github.com/pytorch/pytorch/issues/10483 17 |
st100168 | This sounds like the CPU capability dispatch code might not be working properly on Windows. Do you know what model CPU you have? |
st100169 | Thanks all.
I am using a VMware virtual machine - according to the system information within the virtual machine, I have an Intel Xeon CPU E5-2680. |
st100170 | I’m curious if the following works (in a new iPython process):
import os
os.environ['ATEN_DISABLE_AVX2'] = '1'
import torch
torch.manual_seed(1)
x = torch.randint(0, 5, (1000, ))
x.eq(x).sum() |
st100171 | Still got the same error.
By the way, it doesn’t seem to matter what x is. I used a random number generator, but you could replace x with x = torch.ones(5) or x = torch.ones(5, 5) and still get the same error. |
st100172 | I think the issue is that the sum() call is running a kernel that uses AVX2 instructions, but the CPU doesn’t support AVX2 instructions (only AVX).
There are two likely causes:
The CPU capability detection code isn’t working on Windows (or maybe the VM?) and incorrectly thinks the CPU supports AVX2 instructions
The library linking behaves differently on Windows and is causing the AVX2 kernel to be run when the AVX kernel is called. |
st100173 | I have the same crash problem in caffe2.dll while calling tensor.sum().
My environment is win7 + python 3.6 + pytorch 0.4.1.
CPU is Intel Pentium which does not support AVX or AVX2 instruction set. |
st100174 | We are compiling caffe2.dll with AVX and AVX2 instruction set. So if your CPU doesn’t support it, you may have to build it yourself. |
st100175 | I have to give multiple image inputs to the following code. I have my input images in a folder. How can I feed them as input one by one and save the output in every single iteration?
This is my code :
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib inline
import argparse
import os
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
import numpy as np
from models import *
import torch
import torch.optim
from skimage.measure import compare_psnr
from models.downsampler import Downsampler
from utils.sr_utils import *
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark =True
dtype = torch.cuda.FloatTensor
imsize = -1
factor = 4 # 8
enforse_div32 = 'CROP' # we usually need the dimensions to be divisible by a power of two (32 in this case)
PLOT = True
# To produce images from the paper we took *_GT.png images from LapSRN viewer for corresponding factor,
# e.g. x4/zebra_GT.png for factor=4, and x8/zebra_GT.png for factor=8
path_to_image = '/home/smitha/Documents/Falcon.png'
imgs = load_LR_HR_imgs_sr(path_to_image , imsize, factor, enforse_div32)
imgs['bicubic_np'], imgs['sharp_np'], imgs['nearest_np'] = get_baselines(imgs['LR_pil'], imgs['HR_pil'])
if PLOT:
    plot_image_grid([imgs['HR_np'], imgs['bicubic_np'], imgs['sharp_np'], imgs['nearest_np']], 4, 12);
    print('PSNR bicubic: %.4f   PSNR nearest: %.4f' % (
        compare_psnr(imgs['HR_np'], imgs['bicubic_np']),
        compare_psnr(imgs['HR_np'], imgs['nearest_np'])))
input_depth = 32
INPUT = 'noise'
pad = 'reflection'
OPT_OVER = 'net'
KERNEL_TYPE='lanczos2'
LR = 0.01
tv_weight = 0.0
OPTIMIZER = 'adam'
if factor == 4:
    num_iter = 2000
    reg_noise_std = 0.03
elif factor == 8:
    num_iter = 4000
    reg_noise_std = 0.05
else:
    assert False, 'We did not experiment with other factors'
net_input = get_noise(input_depth, INPUT, (imgs['HR_pil'].size[1], imgs['HR_pil'].size[0])).type(dtype).detach()
NET_TYPE = 'skip' # UNet, ResNet
net = get_net(input_depth, 'skip', pad,
              skip_n33d=128,
              skip_n33u=128,
              skip_n11=4,
              num_scales=5,
              upsample_mode='bilinear').type(dtype)
# Losses
mse = torch.nn.MSELoss().type(dtype)
img_LR_var = np_to_torch(imgs['LR_np']).type(dtype)
downsampler = Downsampler(n_planes=3, factor=factor, kernel_type=KERNEL_TYPE, phase=0.5, preserve_size=True).type(dtype)
def closure():
    global i, net_input
    if reg_noise_std > 0:
        net_input = net_input_saved + (noise.normal_() * reg_noise_std)
    out_HR = net(net_input)
    out_LR = downsampler(out_HR)
    total_loss = mse(out_LR, img_LR_var)
    if tv_weight > 0:
        total_loss += tv_weight * tv_loss(out_HR)
    total_loss.backward()
    # Log
    psnr_LR = compare_psnr(imgs['LR_np'], torch_to_np(out_LR))
    psnr_HR = compare_psnr(imgs['HR_np'], torch_to_np(out_HR))
    print('Iteration %05d    PSNR_LR %.3f   PSNR_HR %.3f' % (i, psnr_LR, psnr_HR), '\r', end='')
    # History
    psnr_history.append([psnr_LR, psnr_HR])
    if PLOT and i % 100 == 0:
        out_HR_np = torch_to_np(out_HR)
        plot_image_grid([imgs['HR_np'], imgs['bicubic_np'], np.clip(out_HR_np, 0, 1)], factor=13, nrow=3)
    i += 1
    return total_loss
psnr_history = []
net_input_saved = net_input.detach().clone()
noise = net_input.detach().clone()
i = 0
p = get_params(OPT_OVER, net, net_input)
optimize(OPTIMIZER, p, closure, LR, num_iter)
out_HR_np = np.clip(torch_to_np(net(net_input)), 0, 1)
result_deep_prior = put_in_center(out_HR_np, imgs['orig_np'].shape[1:])
# For the paper we acually took `_bicubic.png` files from LapSRN viewer and used `result_deep_prior` as our result
plot_image_grid([imgs['HR_np'],
                 imgs['bicubic_np'],
                 out_HR_np], factor=4, nrow=1);
And for saving the images I am using the following code.
tensors_to_plot1 = torch.from_numpy(imgs['bicubic_np'])
tensors_to_plot2 = torch.from_numpy(imgs['HR_np'])
tensors_to_plot3 = torch.from_numpy(out_HR_np)
torchvision.utils.save_image(tensors_to_plot1, 'bicubic9.tif')
torchvision.utils.save_image(tensors_to_plot2, 'HR9.tif')
torchvision.utils.save_image(tensors_to_plot3, 'out9.tif') |
st100176 | It is really difficult to explain my situation, but I will try to do my best :)
I have a (128, 1) tensor which includes 128 rows, and each row has a value of 0 or 1.
And I have another tensor of shape (128, 2). Using the first tensor as an index, I want to choose each row’s value and transform the second tensor into a new tensor of shape (128, 1).
how can I achieve this?? |
st100177 | I think gather would work for you:
x = torch.randn(128, 2)
index = torch.empty(128, 1, dtype=torch.long).random_(2)
x.gather(1, index) |
st100178 | Hello,
I witnessed a strange behavior recently using F.mse_loss.
Here’s the test I ran:
import torch
import torch.nn as nn
import torch.nn.functional as F
layer = nn.Linear(1,3)
x = torch.rand(1,1)
label = torch.rand(1,3)
out = layer(x)
print('Input: {}\nLabel: {}\nResult: {}'.format(x, label, out))
loss_1 = F.mse_loss(out, label)
loss_2 = F.mse_loss(label, out)
print('Loss1: {}\nLoss2: {}'.format(loss_1, loss_2))
Output:
Input: tensor([[0.6389]])
Label: tensor([[0.9091, 0.5892, 0.8812]])
Result: tensor([[ 0.2329, -0.2419, -0.5444]], grad_fn=<ThAddmmBackward>)
Loss1: 1.060153603553772
Loss2: 3.1804609298706055
Am I missing something here ?
Thanks ! |
st100179 | Hi,
I can’t reproduce that; I get the exact same values for both on my machine.
Which version of PyTorch are you using?
How did you install PyTorch?
I think I remember issues where mm operations were not behaving properly for some wrongly installed/incompatible blas libraries. |
st100180 | How can the MFCC features extracted from a speech signal be used to perform word/sentence boundary detection with pytorch?
Also can Connectionist Temporal classification cost be used to achieve the same?? |
st100181 | Every time I try to look up some API or usage in the PyTorch documentation on the web, it is really slow and demotivating. Is there any way to get a PDF so that I can find what I want easily on my local computer? |
st100182 | Easiest way -
open the PyTorch documentation using Chrome, hit Ctrl+P, and it will open a print page for you. Save it. |
st100183 | I read this in tensor.view() function documentation.
Could you take this example?
I tried but I got error
z
Out[12]:
tensor([[ 0.9739, 0.6249],
[ 1.6599, -1.1855],
[ 1.4894, -1.7739],
[-0.8980, 1.5969],
[-0.4555, 0.7884],
[-0.3798, -0.3718]])
z = x.view(1, 2)
Traceback (most recent call last):
File “D:\temp\Python35\lib\site-packages\IPython\core\interactiveshell.py”, line 2961, in run_code
exec(code_obj, self.user_global_ns, self.user_ns)
File “”, line 1, in
z = x.view(1, 2)
RuntimeError: invalid argument 2: size ‘[1 x 2]’ is invalid for input with 12 elements at …\aten\src\TH\THStorage.cpp:84 |
st100184 | Hi,
What is x is that example?
Anyway view is like reshape in numpy (with additional considerations I am not familiar with) but if you call:
x.view(shape)
then there should be as many elements in x as in a tensor with size() == shape (to go from a 3D tensor of size C, H, W to a 2D tensor of size() == shape, you need shape[0] x shape[1] == C*H*W). |
st100185 | x is just a tensor. I understood it the same way you described torch.Tensor.view().
my question is from here 1.
“The returned tensor shares the same data and must have the same number of elements, but may have a different size” |
st100186 | As @el_samou_samou explained, the number of elements stays the same, while the size may differ.
Here is a small example:
x = torch.randn(2, 2, 2, 2)
print(x.size())
x = x.view(-1, 2, 1)
print(x.size())
x = x.view(-1)
print(x.size())
While the underlying data of x stays the same, its size changes after each view call.
Nonetheless the number of elements is the same.
A call like x.view(2) won’t work, as we have 16 elements. |
st100187 | Now I understand! I had understood it as if the total size could change. Thanks for the example. |
st100188 | Hello everyone,
I’m running into a problem where my memory consumption on the dashboard looks weird…
At runtime, each process consumes about 8 GB of memory when training on a single GPU on a single machine, but that consumption increases from 8 GB to 32 GB when I use multiple machines.
However, the consumption decreases when I use multiple machines without the DistributedSampler.
I don’t know why this makes such a big difference. What could cause it, and how can I fix it?
My code is the following:
corpus_dataset = CorpusDataset(h5py_path, self.word2Vec, self.args.maxL_input, self.args.maxL_output)
train_sampler = None
if self.args.distributed:
    dist.init_process_group(backend=self.args.distBackend, init_method=self.args.distUrl,
                            world_size=self.args.worldSize, rank=self.args.rank)
    train_sampler = distUtils.DistributedSampler(corpus_dataset, self.args.worldSize, self.args.rank)
custom_loader = Data.DataLoader(
    dataset=corpus_dataset,
    batch_size=self.args.batchSize,
    shuffle=(train_sampler is None),
    drop_last=(train_sampler is not None),
    num_workers=1,
    collate_fn=collate_fn,
    sampler=train_sampler
)
for epoch in range(self.args.numEpochs):
    for posts, p_lens, responses, r_lens, labels in custom_loader:
        self.optimizer.zero_grad()
        score = self.dual_encoder(posts, p_lens, responses, r_lens)
        loss = self.loss_fc(score, labels)
        loss.backward()
        if self.args.distributed:
            self.average_gradients(self.dual_encoder)
        self.optimizer.step()
        pass |
st100189 | I found something. I rewrote line 41 of the DistributedSampler class:
indices = list(torch.randperm(len(self.dataset), generator=g))
as follow:
indices = torch.randperm(len(self.dataset), generator=g).numpy().tolist()
It works for me and the memory consumption is maintained at a certain level. |
st100190 | stack trace
self.lstm = nn.LSTM(input_size = n_features, hidden_size=hidden_size, batch_first=True)
File “/pytorch4/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 409, in __init__
super(LSTM, self).__init__(‘LSTM’, *args, **kwargs)
File “/pytorch4/lib/python3.6/site-packages/torch/nn/modules/rnn.py”, line 52, in __init__
w_ih = Parameter(torch.Tensor(gate_size, layer_input_size))
RuntimeError: CUDA error (10): invalid device ordinal
I’m confused as to how CUDA is associated even though I have not issued any CUDA commands. I am merely calling an LSTM init?
Here is some cuda output from my system
import torch
torch.cuda.current_device()
0
torch.cuda.device(0)
<torch.cuda.device object at 0x2afdfbe226a0>
torch.cuda.device_count()
1
torch.cuda.get_device_name(0)
‘Tesla K40m’ |
st100191 | Hi all,
I am trying to train vgg13 (a pretrained model) to classify images. I set my classifier as follows:
classifier = nn.Sequential(OrderedDict([
    ('fc1', nn.Linear(25088, 5000)),
    ('relu', nn.ReLU()),
    ('dropout', nn.Dropout(0.2)),
    ('fc2', nn.Linear(5000, 102)),
    ('output', nn.LogSoftmax(dim=1))
]))
when I start training this model
Epoch: 1/10… Training Loss: 0.0608 Test Loss:1.786 Test Accurracy:0.564
Epoch: 1/10… Training Loss: 0.0408 Test Loss:1.032 Test Accurracy:0.730
As you can see the Training Loss is too low and I am not able to reason why. Any clue? |
st100192 | Could you explain a bit more, what you are experiencing?
Is your training slow using a CNN?
How did you time the training? |
st100193 | It’s that the time for training a CNN model on multiple GPUs is roughly the same as training on a single GPU, but for LSTMs there is a difference. Is that normal? Thanks! |
st100194 | The time for each iteration or epoch?
In the former case this would be a perfectly linear speedup.
In the latter case, the bottleneck might be your data loading, e.g. loading from a HDD instead of a SSD.
Are you using the same DataLoader for the CNN and LSTM run? |
st100195 | I checked the time when I called something like for i in data_loader and that is pretty fast. The majority of time was spent at the step result = model(data) and optimizer.step() so I am not sure what happened. It does not seem to be a data loader issue.
I track time for 50 steps so I think it is close to your later case. |
st100196 | So the 50 steps using multiple GPUs take the same time as 50 steps using a single GPU, e.g. 1 minute?
Assuming you’ve scaled up your batch size for DataParallel this would be perfectly fine, as your wall time for an epoch will be divided by your number of GPUs now. |
st100197 | yes, roughly.
Should I scale the batch size up? I am wondering whether a too-large batch size leads to bad performance (say 128 -> 1024 with 8 GPUs). |
st100198 | Your data will be split across the devices by chunking in the batch dimension.
If your single model worked good for a batch size of e.g. 128, you could use a batch size of 128*4 for 4 GPUs.
Each model will get a batch of 128 samples, so that the performance should not change that much. |
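A minimal sketch of that scaling (dataset and model are assumed to be defined elsewhere; DataParallel splits the global batch across the visible GPUs):
import torch
import torch.nn as nn
from torch.utils.data import DataLoader

num_gpus = torch.cuda.device_count()
loader = DataLoader(dataset, batch_size=128 * num_gpus, shuffle=True)
model = nn.DataParallel(model).cuda()   # each replica sees a chunk of 128 samples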
st100199 | Okay. Sorry I am still a little bit confused here. You said that each model will get 128 samples, however, at the backward step, how will the training work? Will that be something like taking the sum from each GPU? |