id | text
---|---
st100300 | Solved by ptrblck in post #2
By default only Parameters will be recognized and pushed to the corresponding device.
Try to wrap your random tensor into torch.nn.Parameter(torch.randn(1), requires_grad=False) and it should work.
Alternatively you could also store it in an nn.ParameterList. |
st100301 | By default only Parameters will be recognized and pushed to the corresponding device.
Try to wrap your random tensor into torch.nn.Parameter(torch.randn(1), requires_grad=False) and it should work.
Alternatively you could also store it in an nn.ParameterList.
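A minimal sketch of both routes (MyModule, noise, and offset are hypothetical names; register_buffer is another common option for non-trainable state):
import torch
import torch.nn as nn

class MyModule(nn.Module):
    def __init__(self):
        super(MyModule, self).__init__()
        # wrapped as a Parameter, so .cuda()/.to(device) moves it with the module
        self.noise = nn.Parameter(torch.randn(1), requires_grad=False)
        # alternative: register it as a buffer; buffers are moved (and saved) too
        self.register_buffer('offset', torch.randn(1))

model = MyModule().cuda()
print(model.noise.device, model.offset.device)  # both report cuda:0 |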
st100302 | import torch
import time
from torchvision.models import vgg16
size = 512
num = 1000
net = vgg16().features.cuda()
print(net)
x = torch.zeros((1, 3, size, size)).cuda()
cost = 0
for i in range(num):
    t0 = time.time()
    y = net(x)
    t1 = time.time()
    cost += t1 - t0
cost = cost / num * 1000
print("input size is {}, test {} times, average spend time {:.2f}ms".format(size, num, cost))
My GPU is a GTX 1060.
When I set num=10, I got 1.58 ms.
When I set num=1000, I got 33.64 ms.
How should I understand this? |
st100303 | As CUDA calls are asynchronous, you would have to synchronize before starting and stopping the timer:
torch.cuda.synchronize()
t0 = time.time()
y = net(x)
torch.cuda.synchronize()
t1 = time.time()
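Applied to the loop above, a sketch of the full corrected timing (the warm-up pass is an assumption, added because the first forward call includes one-time CUDA setup cost):
torch.cuda.synchronize()
y = net(x)  # warm-up pass, excluded from timing
cost = 0
for i in range(num):
    torch.cuda.synchronize()
    t0 = time.time()
    y = net(x)
    torch.cuda.synchronize()  # wait for all kernels before reading the clock
    t1 = time.time()
    cost += t1 - t0
print("average forward time: {:.2f}ms".format(cost / num * 1000))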
Could you add it and try it again? |
st100304 | When I use these functions on Cuda tensors I have 0% GPU usage. Are math functions computed on the CPU regardless of the fact that tensors are on the GPU? |
st100305 | I believe some parts run on the GPU and some parts run on the CPU. Are you really getting 0% GPU usage? |
st100306 | Yes, I’m monitoring it with nvidia-smi --loop=1 so it is not very accurate. I get about 7% in the first second, then it goes to zero. |
st100307 | A = torch.potrs(B, torch.potrf(C))
Now, I perform the above calculation in PyTorch, and both B and C are on the GPU. Does PyTorch perform this on the GPU or the CPU? If it is performed on the GPU, why does this command take up a lot of CPU in my tests?
Thank you for your attention. |
st100308 | I was trying to use 3 GPUs for training, however the training time changed only negligibly. When training on one GPU I set the batch size to 200; when training with 3 I set it to 1000.
The GPU consumption is distributed across the 3 GPUs, however the training time for one epoch is still the same.
What are the possible causes? |
st100309 | You could time the execution speed of data loading, model execution etc. to see what the real bottleneck is. Are you sure it's the model and not disk I/O? |
st100310 | If you are using dataloader for your training loop, you can measure dataloading time simply like below.
loader_time, st = 0, time.time()
for i, data in enumerate(loader):
    loader_time += time.time() - st
    # sth for training
    # ...
    st = time.time()
# end of the loop
Then you can measure the proportion of data loading time within the whole training run.
Or, even better, do proper profiling as described in "Optimizing PyTorch training code" by Ben Levy and Jacob Gildenblat (SagivTech).
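For a quick built-in alternative, a sketch using torch.autograd.profiler (model and data stand in for your own network and batch):
import torch

with torch.autograd.profiler.profile(use_cuda=True) as prof:
    output = model(data)
print(prof)  # per-op CPU/CUDA timings; prof.key_averages() gives an aggregated view |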
st100311 | I would like to scan a dataset with a model and collect the output (say, 10 class probabilities for instance). After scanning, I want to know which output corresponds to which data point. The best way I can think of is to make my __getitem__ (under the DatasetMaker class) return (index, data, label) (not just (data, label)). This way, it doesn't matter if my DataLoader shuffles or not (right?). Then, I'll collect model outputs in a dataframe.
Is there a better way to scan and trace datasets? |
st100312 | miladiouss:
This way, it doesn’t matter if my datasetLoader shuffles or not (right?).
Right. The index argument given to the __getitem__ method is not affected by the shuffle option of the DataLoader.
For example, if the shuffled order is [2, 0, 1], then __getitem__(2), __getitem__(0), __getitem__(1) will be called in that order.
If you turn off the shuffle option, it's guaranteed that the order of the outputs matches the order of the inputs.
So you can do something like this.
cumidx, results = 0, {}
for i, data in enumerate(loader):
    outputs = model(data)
    for j in range(outputs.size(0)):
        results[j + cumidx] = outputs[j]
    cumidx += outputs.size(0)
which doesn’t require Dataset class to return index of corresponding data. |
st100313 | Greetings,
I am trying to make my own model with PyTorch.
It is based on LSTM and convolutions.
It is incomplete but works well.
However, my model is getting slower with every forward pass.
Additionally, I apply .backward(retain_graph=True) according to the following instruction.
“Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.”
class DLSTMCell(nn.Module):
    def __init__(self, input_size, hidden_size, x_kernel_size, h_kernel_size, residual_length_, stride=1):
        super(DLSTMCell, self).__init__()
        pad_x = math.floor(x_kernel_size/2)
        pad_h = math.floor(h_kernel_size/2)
        self.hidden_size = hidden_size
        self.stride = stride
        self.residual_length = residual_length_
        self.output_size = int(hidden_size * residual_length_)
        # input gate
        self.conv_i_x = nn.Parameter(torch.FloatTensor(input_size, self.hidden_size)).cuda()
        self.conv_i_x_bias = nn.Parameter(torch.FloatTensor(self.hidden_size)).cuda()
        self.batchnorm_i_x = nn.BatchNorm1d(hidden_size)
        self.conv_i_h = nn.Conv1d(self.output_size, hidden_size, h_kernel_size, stride=1, padding=pad_h)
        self.batchnorm_i_h = nn.BatchNorm1d(hidden_size)
        # forget gate
        self.conv_f_x = nn.Parameter(torch.FloatTensor(input_size, self.hidden_size)).cuda()
        self.conv_f_x_bias = nn.Parameter(torch.FloatTensor(self.hidden_size)).cuda()
        self.batchnorm_f_x = nn.BatchNorm1d(hidden_size)
        self.conv_f_h = nn.Conv1d(self.output_size, hidden_size, h_kernel_size, stride=1, padding=pad_h)
        self.batchnorm_f_h = nn.BatchNorm1d(hidden_size)
        # cell gate
        self.conv_c_x = nn.Parameter(torch.FloatTensor(input_size, self.hidden_size)).cuda()
        self.conv_c_x_bias = nn.Parameter(torch.FloatTensor(self.hidden_size)).cuda()
        self.batchnorm_c_x = nn.BatchNorm1d(hidden_size)
        self.conv_c_h = nn.Conv1d(self.output_size, hidden_size, h_kernel_size, stride=1, padding=pad_h)
        self.batchnorm_c_h = nn.BatchNorm1d(hidden_size)
        # output gate
        self.conv_o_x = nn.Parameter(torch.FloatTensor(input_size, self.hidden_size)).cuda()
        self.conv_o_x_bias = nn.Parameter(torch.FloatTensor(self.hidden_size)).cuda()
        self.batchnorm_o_x = nn.BatchNorm1d(hidden_size)
        self.conv_o_h = nn.Conv1d(self.output_size, hidden_size, h_kernel_size, stride=1, padding=pad_h)
        self.batchnorm_o_h = nn.BatchNorm1d(hidden_size)
        self.last_cell = None

    def reset_state(self):
        self.last_cell = None

    def reset_h_list(self):
        self.register_buffer('h_list', torch.zeros(8, self.residual_length, self.hidden_size))
        self.h_list = self.h_list.cuda()

    def forward(self, x):
        batch_size = x.size(0)
        # first sequence
        if self.last_cell is None:
            self.last_cell = Variable(torch.zeros(batch_size, self.hidden_size))
            self.conv_i_x_bias = self.conv_i_x_bias.unsqueeze(0).expand(batch_size, *self.conv_i_x_bias.size())
            self.conv_f_x_bias = self.conv_f_x_bias.unsqueeze(0).expand(batch_size, *self.conv_f_x_bias.size())
            self.conv_c_x_bias = self.conv_c_x_bias.unsqueeze(0).expand(batch_size, *self.conv_c_x_bias.size())
            self.conv_o_x_bias = self.conv_o_x_bias.unsqueeze(0).expand(batch_size, *self.conv_o_x_bias.size())
        self.last_cell = self.last_cell.cuda()
        self.h_list = self.h_list.cuda()
        h = self.h_list[:, -self.residual_length:, :].contiguous().view(batch_size, self.output_size, -1)
        # input gate
        input_h = self.batchnorm_i_h(self.conv_i_h(h))
        input_x = self.batchnorm_i_x(torch.mm(x, self.conv_i_x) + self.conv_i_x_bias)
        input_h = torch.squeeze(input_h)
        input_gate = F.sigmoid(input_x + input_h)
        # forget gate
        forget_x = self.batchnorm_f_x(torch.mm(x, self.conv_f_x) + self.conv_f_x_bias)
        forget_h = self.batchnorm_f_h(self.conv_f_h(h))
        forget_h = torch.squeeze(forget_h)
        forget_gate = F.sigmoid(forget_x + forget_h)
        # cell gate
        cell_x = self.batchnorm_c_x(torch.mm(x, self.conv_c_x) + self.conv_c_x_bias)
        cell_h = self.batchnorm_c_h(self.conv_c_h(h))
        cell_h = torch.squeeze(cell_h)
        cell_intermediate = F.tanh(cell_x + cell_h)  # g
        cell_gate = (forget_gate * self.last_cell) + (input_gate * cell_intermediate)
        # output gate
        output_x = self.batchnorm_o_x(torch.mm(x, self.conv_o_x) + self.conv_o_x_bias)
        output_h = self.batchnorm_o_h(self.conv_o_h(h))
        output_h = torch.squeeze(output_h)
        output_gate = F.sigmoid(output_x + output_h)
        next_h = output_gate * F.tanh(cell_gate)
        self.last_cell = cell_gate
        next_h = torch.unsqueeze(next_h, dim=1)
        self.h_list = torch.cat((self.h_list, next_h), dim=1)
        #print("self.h_list shape: {}".format(self.h_list.shape))
        return next_h

class LSTM(nn.Module):
    """A module that runs multiple steps of LSTM."""

    def __init__(self, cell_class, input_size, hidden_size, x_kernel_size, h_kernel_size, num_layers=1, residual_length=5,
                 use_bias=True, batch_first=True, dropout=0, **kwargs):
        super(LSTM, self).__init__()
        self.cell_class = cell_class
        self.input_size = input_size
        self.hidden_size = hidden_size
        self.x_kernel_size = x_kernel_size
        self.h_kernel_size = h_kernel_size
        self.num_layers = num_layers
        self.use_bias = use_bias
        self.batch_first = batch_first
        self.dropout = dropout
        self.residual_length = residual_length
        self.output_size = hidden_size * residual_length
        for layer in range(num_layers):
            layer_input_size = input_size if layer == 0 else hidden_size
            cell = cell_class(input_size=layer_input_size, hidden_size=hidden_size, x_kernel_size=x_kernel_size, h_kernel_size=h_kernel_size, residual_length_=residual_length, **kwargs)
            setattr(self, 'cell_{}'.format(layer), cell)
        self.dropout_layer = nn.Dropout(dropout)
        self.reset_parameters()

    def get_cell(self, layer):
        return getattr(self, 'cell_{}'.format(layer))

    def reset_parameters(self):
        for layer in range(self.num_layers):
            cell = self.get_cell(layer)

    @staticmethod
    def _forward_rnn(cell, input_, energy_, length):  # energy_: the amount of information of each sequence (attention)
        assert input_.size(0) == energy_.size(0), "Sequence length of input and energy must be the same"
        max_time = input_.size(0)
        output = []
        for time in range(max_time):
            h_next = cell(x=input_[time])
            output.append(h_next)
        output = torch.stack(output, 0)
        #print("output shape: {}".format(output.shape))
        return output

    def forward(self, input_, energy_, length=None, hx=None):
        if self.batch_first:
            input_ = input_.transpose(0, 1)
            energy_ = energy_.transpose(0, 1)
        max_time, batch_size, _ = input_.size()
        if length is None:
            length = Variable(torch.LongTensor([max_time] * batch_size))
            if input_.is_cuda:
                device = input_.get_device()
                length = length.cuda(device)
        layer_output = None
        for layer in range(self.num_layers):
            cell = self.get_cell(layer)
            cell.reset_h_list()
            layer_output = LSTM._forward_rnn(cell=cell, input_=input_, energy_=energy_, length=length)
        layer_output = layer_output[-self.residual_length:, :, :].transpose(0, 1)
        layer_output = layer_output.contiguous().view(batch_size, -1)
        return layer_output |
st100314 | I'll change my question.
Why is the model trying to backward through the graph more than two times?
Thank you |
st100315 | Hi everyone,
I am trying to use the spectral_norm function for GAN regularization. I am on PyTorch 0.4.0, so I just copy-pasted the source code you can find here. I am calling the function spectral_norm on transposed 2d convolutions as well as 2d convolutions, but I get the following error while training (when I call loss.backward()):
RuntimeError: Tensor: invalid storage offset at /pytorch/aten/src/THC/generic/THCTensor.c:759
By looking at previous answers I changed (line 29):
weight_mat = weight_mat.reshape(height, -1)
to:
weight_mat = weight_mat.view(height, -1)
and then get the error:
RuntimeError: invalid argument 2: View size is not compatible with input tensor's size and stride (at least one dimension spans across two contiguous subspaces). Call .contiguous() before .view(). at /pytorch/aten/src/THC/generic/THCTensor.c:276
So as suggested I now changed line 29 to:
weight_mat = weight_mat.contiguous().view(height, -1)
And now things work. I just wanted to be sure my function still performs the spectral normalization correctly, and if so, whether the original line 29 might be a mistake. An explanation on that point would be great.
Thank you |
st100316 | reshape is designed to replace contiguous+view, so the bug is not on that line. If you could give us a gdb trace or a small repro example for the original error, that would be great. |
st100317 | Hi Simon,
Yes, sure, sorry, I should have done that earlier. It's just a GAN generator from 16x16 input noise to a 256x256 output image; I just train it with L1Loss to show you the issue (sorry if the code is messy, I am a beginner). There you go:
import torch
from torch import optim
import torch.nn as nn
from tqdm import tqdm
from torch.utils.data import DataLoader
from torch.utils import data
import numpy as np

class Generator(nn.Module):
    def __init__(self, image_size=64, z_size=16, conv_dim=64):
        super().__init__()
        self.n_up = int(np.log2(image_size/z_size))
        curr_channel = 1
        out_channels = conv_dim
        for i in range(self.n_up):
            self.__dict__["_modules"]["upconv"+str(i+1)] = nn.Sequential(
                spectral_norm(
                    nn.ConvTranspose2d(curr_channel,
                                       out_channels,
                                       4,
                                       2,
                                       1)
                ),
                nn.BatchNorm2d(out_channels),
                nn.ReLU(inplace=False),
            )
            curr_channel = out_channels
            out_channels = curr_channel//2
            if i >= (self.n_up-3):
                self.__dict__["_modules"]["conv"+str(i+1)] = nn.Sequential(
                    nn.Conv2d(curr_channel, 1, 1, 1, 0),
                    nn.Tanh(),
                )

    def forward(self, z):
        res = []
        for i in range(self.n_up):
            z = self.__dict__["_modules"]["upconv"+str(i+1)](z)
            if i == (self.n_up-1):
                res.append(self.__dict__["_modules"]["conv"+str(i+1)](z))
        return res[-1]

class DatasetTest(data.Dataset):
    def __init__(self, data, target):
        self.data = data
        self.target = target

    def __len__(self):
        return self.data.size()[0]

    def __getitem__(self, index):
        return {"data": self.data[index], "target": self.target[index]}

inp = torch.rand(100, 1, 16, 16)
tar = torch.rand(100, 1, 256, 256)
data = DatasetTest(inp, tar)
loader = DataLoader(dataset=data, batch_size=10, shuffle=True)
gen = Generator(image_size=256, z_size=16, conv_dim=64).cuda()
optimizer = optim.Adam(filter(lambda p: p.requires_grad, gen.parameters()), lr=0.0004)
gen = nn.DataParallel(gen).cuda()
crit = nn.L1Loss()
for batch in tqdm(loader):
    inp = batch["data"].cuda()
    tar = batch["target"].cuda()
    out = gen(inp)
    loss = crit(out, tar.detach())
    loss.backward()
    optimizer.step()
As I said earlier, the spectral normalization is just a copy-paste of the available version of the source code without any change, so if you are on 0.4.1 please add:
from torch.nn.utils.spectral_norm import *
at the beginning of this code.
After investigation, it turns out that if I don't use data parallelization, everything works. But if I do, then I get the following error:
File "brain_anomaly_detection/models/spectral_norm.py", line 185, in <module>
loss.backward()
File "/home/joutars/anaconda2/envs/py36/lib/python3.6/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/joutars/anaconda2/envs/py36/lib/python3.6/site-packages/torch/autograd/__init__.py", line 89, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Tensor: invalid storage offset at /pytorch/aten/src/THC/generic/THCTensor.c:759
Any idea?
Thanks |
st100318 | Looks like nonzero on a 2d tensor will return the coordinates of the non-zero elements? Is there a function which can take these coordinates, and a list of values, and recreate the original tensor? |
st100319 | Solved by ptrblck in post #2
You could use the following code:
x = torch.empty(5, 5).random_(3)
idx = x.nonzero()
y = torch.zeros(5, 5)
y[idx[:,0], idx[:,1]] = x[idx[:, 0], idx[:, 1]]
print((x==y).all())
I'm not sure if this issue was already solved, as x[x.nonzero()] seems not to be supported at the moment. |
st100320 | You could use the following code:
x = torch.empty(5, 5).random_(3)
idx = x.nonzero()
y = torch.zeros(5, 5)
y[idx[:,0], idx[:,1]] = x[idx[:, 0], idx[:, 1]]
print((x==y).all())
I'm not sure if this issue was already solved, as x[x.nonzero()] seems not to be supported at the moment. |
st100321 | When we're building new models we often write a lot of tests iteratively, but I rarely see this formalized.
My current tests are built by inheriting from tests.common, by separately installing a Python package into my environment that only contains whatever is in common. (I made a feature request (issue #5045) that didn't receive any love yet.)
setup.py:
import sys
from setuptools import setup

setup(
    name="torchtestcommon",  # what you want to call the archive/egg
    version='0.4.0a0',
    packages=["torchtest"],  # top-level python modules you can import
    dependency_links=[],
    install_requires=[],
    package_data={},
    author="Pytorch contributors",
    author_email="",
    description="Copy-paste of pytorch/test/common.py into ./torchtest/",
)
After cd torchtestcommon; pip install -e . I can write stuff like
from torchtest.common import TestCase
testing = TestCase()
testing.assertAlmostEqual(torch.ones(1),torch.ones(1))
Or
from thisfancyneuralnetworkname import MyModel
from torchtest.common import TestCase
import torch
input_ex = torch.ones([100,100])
output_ex = torch.ones([100,100])
class TestMyModel(TestCase):
    def test_forward(self):
        model = MyModel()
        y = model(input_ex)
        self.assertEqual(y.size(), output_ex.size())
        self.assertAlmostEqual(y, output_ex, places=3)
    .....
And then run pytest .
How do you do it? Or any link to best practices/example repositories? |
st100322 | Just an update on this, there’s now torch.testing which you can use similarly to numpy.testing;
In [8]: x = torch.ones(1)
In [9]: y = x+1
In [10]: torch.testing.assert_allclose(x,y)
---------------------------------------------------------------------------
AssertionError Traceback (most recent call last)
<ipython-input-10-3b698e9eddbf> in <module>()
----> 1 torch.testing.assert_allclose(x,y)
/usr/local/anaconda3/envs/pytorch/lib/python3.6/site-packages/torch/testing/__init__.py in assert_allclose(actual, expected, rtol, atol, equal_nan)
55 raise AssertionError(msg.format(
56 rtol, atol, list(index), actual[index].item(), expected[index].item(),
---> 57 count - 1, 100 * count / actual.numel()))
58
59
AssertionError: Not within tolerance rtol=0.0001 atol=1e-05 at input[0] (1.0 vs. 2.0) and 0 other locations (100.00%) |
st100323 | Can I set the step size like step_size=[30, 100, 500] to specify at which epochs to change the lr? |
st100324 | Solved by ptrblck in post #2
For multiple milestones you should use lr_scheduler.MultiStepLR.
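A minimal sketch (model is a placeholder for your network; with gamma=0.1 the lr is divided by 10 at epochs 30, 100 and 500):
import torch
from torch.optim import lr_scheduler

optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = lr_scheduler.MultiStepLR(optimizer, milestones=[30, 100, 500], gamma=0.1)
for epoch in range(600):
    scheduler.step()
    # ... train for one epoch ... |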
st100325 | I am a person unfamiliar with the PyTorch and Caffe2 frameworks. I want to serve a PyTorch model the way tf-serving does. After some searching, I found many articles mentioning that a PyTorch model can be converted to a Caffe2 model with the ONNX tool provided by Facebook. I tried converting my PyTorch model to a Caffe2 model with ONNX, but I don't know how to start the service. I have read the official documentation of Caffe2 and PyTorch, and Google turns up a lot of related content, but I didn't find what I wanted to know, such as how to start end-to-end Caffe2 serving. Does anyone know how to start Caffe2 serving like tf-serving? |
st100326 | Here is a good explanation of how to serve a Caffe2 model on AWS Lambda.
If that’s not what you are looking for, could you explain your current use case a bit more?
Do you want to deploy your model locally or in the web/cloud? |
st100327 | I did a quick search for a neural network analysis tool for pytorch, but did not find much.
To start off, I want to compare the importance of the input variables for the test result. Later on I might want to do more, but this is what I want to start with.
Is there a function, feature, or tool in PyTorch that can help me find the importance of the input variables of the NN?
Or is it better to use PCA, a boosted tree algorithm, etc. beforehand to find the importance of variables?
Thank you! |
st100328 | It's not that easy to answer this question, as there are many techniques to visualize the importance of input regions and weights.
This Distill publication is an awesome explanation of some of the concepts. |
st100329 | Hi all,
I have a multi-layered LSTM, and I expected faster training if I used a packed sequence instead of tensors padded to the longest sequence length. However, a comparison shows the padded input is slightly faster than the packed one. In my understanding, the packed input runs fewer loop iterations in the layers, doesn't it?
Thanks! |
st100330 | Do you have the code to run the experiments? I would be quite interested, since I too am using the packed sequence thing. |
st100331 | Here is a test code:
import time
import numpy as np
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_sequence, pad_sequence
device = torch.device("cuda")
batch_size = 32
input_size = 100
hidden_size = 512
seq_len_range = (50, 200)
epoch = 10
rnn = nn.LSTM(input_size, hidden_size, num_layers=4, bias=True)
rnn.to(device)
inputs = list()
for i in range(epoch):
    seq = [torch.rand((np.random.randint(*seq_len_range), input_size)) for i in range(batch_size)]
    seq = sorted(seq, key=lambda x: x.size(0), reverse=True)
    inputs.append(seq)
# for padded input
start = time.time()
for i in inputs:
    x = pad_sequence(i).to(device)
    y = rnn(x)
end = time.time()
print(f"elapsed time for padded input: {end - start} secs")
# for packed input
start = time.time()
for i in inputs:
    x = pack_sequence(i).to(device)
    y = rnn(x)
end = time.time()
print(f"elapsed time for packed input: {end - start} secs")
The result was:
elapsed time for padded input: 0.6328160762786865 secs
elapsed time for packed input: 0.6410703659057617 secs
Interestingly, in cpu mode, the packed input is faster than the padded one:
elapsed time for padded input: 27.869688272476196 secs
elapsed time for packed input: 20.38231635093689 secs |
st100332 | On well-formed inputs (without NaNs), a linear transformation is returning NaN:
vec_tensor = torch.from_numpy(vec)
# check if input is nan
if np.isnan(np.sum(vec_tensor.cpu().numpy())):
    print("some values from input are nan")
x = vec_tensor[:, 0:ZONE_SIZE*2]
x = x.view(-1, ZONE_SIZE * 2)
x_mean = torch.mean(x, dim=1).view(-1, 1)
x_std = torch.std(x, dim=1).view(-1, 1)
x = (x - x_mean) / x_std
o = net.linear1(x)
print(o)
And the result is:
tensor([[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
...,
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.]])
A repo with the faulty network params and vector is here: https://github.com/ssainz/pytorch_bug
Versions are:
backports-abc (0.5)
backports.functools-lru-cache (1.5)
backports.shutil-get-terminal-size (1.0.0)
bleach (2.1.3)
certifi (2018.4.16)
chardet (3.0.4)
configparser (3.5.0)
cycler (0.10.0)
decorator (4.3.0)
entrypoints (0.2.3)
enum34 (1.1.6)
functools32 (3.2.3.post2)
future (0.16.0)
futures (3.2.0)
gym (0.10.5)
html5lib (1.0.1)
idna (2.6)
ipykernel (4.8.2)
ipython (5.7.0)
ipython-genutils (0.2.0)
ipywidgets (7.2.1)
Jinja2 (2.10)
jsonschema (2.6.0)
jupyter (1.0.0)
jupyter-client (5.2.3)
jupyter-console (5.2.0)
jupyter-core (4.4.0)
kiwisolver (1.0.1)
MarkupSafe (1.0)
matplotlib (2.2.2)
mistune (0.8.3)
nbconvert (5.3.1)
nbformat (4.4.0)
networkx (1.11)
notebook (5.5.0)
numpy (1.14.4)
pandocfilters (1.4.2)
pathlib2 (2.3.2)
pexpect (4.6.0)
pickleshare (0.7.4)
Pillow (5.1.0)
pip (9.0.1)
prompt-toolkit (1.0.15)
ptyprocess (0.5.2)
pyglet (1.3.2)
Pygments (2.2.0)
pyparsing (2.2.0)
python-dateutil (2.7.3)
pytz (2018.4)
pyzmq (17.0.0)
qtconsole (4.3.1)
requests (2.18.4)
scandir (1.7)
scipy (1.1.0)
Send2Trash (1.5.0)
setuptools (28.8.0)
simplegeneric (0.8.1)
singledispatch (3.4.0.3)
six (1.11.0)
subprocess32 (3.5.2)
terminado (0.8.1)
testpath (0.3.1)
torch (0.4.0)
torchvision (0.2.1)
tornado (5.0.2)
traitlets (4.3.2)
urllib3 (1.22)
wcwidth (0.1.7)
webencodings (0.5.1)
wheel (0.29.0)
widgetsnbextension (3.2.1)
I've been able to reproduce this on 3 different machines, with CPU and GPU. |
st100333 | Solved by albanD in post #4
Could you print(x) and print(net.linear1.weight) and print(net.linear1.bias) ? |
st100334 | Hi,
Are you sure x_std is not 0 in your case? Could you print x just before giving it to the linear layer? |
st100335 | Hi,
Good idea, I did not think of that. After printing it out, I see no zeros.
Here is the output of x_std:
tensor([[ 92.6681],
[ 404.6292],
[ 404.8976],
[ 250.7227],
[ 250.4256],
[ 404.3423],
[ 405.0550],
[ 251.0206],
[ 92.5387],
[ 93.0143],
[ 250.8486],
[ 250.7405],
[ 251.2732],
[ 251.1312],
[ 405.1576],
[ 251.5628],
[ 405.3494],
[ 251.1480],
[ 251.1116],
[ 250.7486],
[ 92.4530],
[ 404.8680],
[ 405.2231],
[ 404.7466],
[ 251.1800],
[ 251.3008],
[ 404.9897],
[ 92.7674],
[ 251.1480],
[ 250.7218],
[ 250.3925],
[ 251.2240],
[ 250.2169],
[ 404.4504],
[ 250.0995],
[ 251.2035],
[ 92.9458],
[ 249.7354],
[ 404.9981],
[ 404.6703],
[ 251.3260],
[ 405.0110],
[ 250.1668],
[ 404.6394],
[ 250.3278],
[ 251.0508],
[ 251.0729],
[ 250.6330],
[ 405.1168],
[ 250.6799],
[ 92.4707],
[ 251.1223],
[ 250.9282],
[ 92.1929],
[ 251.3193],
[ 92.0439],
[ 251.4816],
[ 250.2316],
[ 251.3165],
[ 405.0492],
[ 250.7802],
[ 250.8962],
[ 404.7462],
[ 404.8201],
[ 251.0861],
[ 251.0475],
[ 250.3490],
[ 404.3512],
[ 251.4652],
[ 250.8202],
[ 404.1475],
[ 250.6063],
[ 405.0731],
[ 250.9353],
[ 250.2173],
[ 250.8657],
[ 251.2978],
[ 249.7068],
[ 250.8334],
[ 250.7502],
[ 249.7354],
[ 250.4344],
[ 251.0381],
[ 250.4152],
[ 250.4865],
[ 404.9912],
[ 250.6725],
[ 250.4263],
[ 251.4652],
[ 92.8861],
[ 250.4647],
[ 250.7429],
[ 250.6835],
[ 250.9700],
[ 404.8431],
[ 250.6500],
[ 250.9970],
[ 250.8521],
[ 405.1579],
[ 404.6438],
[ 250.6725],
[ 250.1192],
[ 250.4342],
[ 404.7541],
[ 404.9801],
[ 404.9246],
[ 251.1625],
[ 93.1926],
[ 404.9897],
[ 404.6542],
[ 405.0998],
[ 250.4555],
[ 250.5661],
[ 250.4902],
[ 251.4263],
[ 250.8609],
[ 251.2598],
[ 404.7595],
[ 405.3346],
[ 249.7911],
[ 250.8093],
[ 250.1427],
[ 92.8163],
[ 250.6440],
[ 250.3875],
[ 250.8757],
[ 251.1032],
[ 404.8384]])
tensor([[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
...,
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.]]) |
st100336 | Actually, the code below still gets NaN, even when there is no division by anything:
vec_tensor = torch.from_numpy(vec)
# check if input is nan
if np.isnan(np.sum(vec_tensor.cpu().numpy())):
    print("some values from input are nan")
x = vec_tensor[:, 0:ZONE_SIZE*2]
x = x.view(-1, ZONE_SIZE * 2)
#x_mean = torch.mean(x, dim=1).view(-1,1)
#x_std = torch.std(x, dim=1).view(-1,1)
#print x_std
#x = (x - x_mean) / x_std
o = net.linear1(x)
print(o) |
st100337 | Ah, I see, the weights and bias are all NaN. Never mind then. Thanks for checking this. I will go back and check why the network got into this state:
tensor([[ 247.8170, 197.7785, 71.3301, ..., 6.6686,
3.2949, 5.3569],
[ 1083.3394, 876.2274, 329.9731, ..., 4.0454,
4.5783, 2.2610],
[ 1083.3394, 877.2274, 331.9731, ..., 4.0422,
1.6040, 3.9885],
...,
[ 673.0513, 545.6301, 208.7934, ..., 4.0422,
1.6040, 3.9885],
[ 673.0513, 545.6301, 210.7934, ..., 2.9903,
3.0112, 2.0683],
[ 1083.3394, 879.2274, 331.9731, ..., 5.1492,
3.8545, 3.5510]])
Parameter containing:
tensor([[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
...,
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.],
[nan., nan., nan., ..., nan., nan., nan.]])
Parameter containing:
tensor([nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.,
nan., nan., nan., nan., nan., nan., nan., nan., nan., nan.]) |
st100338 | OK, my problem was that my input was not so well-formed after all. I had an extremely large float, e.g. -7.4e+168. This caused the linear layer to overflow and return NaNs. |
st100339 | I have a fairly optimized CNN-BLSTM-CRF tagger here, with the CRF defined here.
On pytorch 0.4.0 using cuda 9.0 and cudnn 7102 I can run a single epoch of the conll 2003 NER task in 21.41 +/- 0.28
When the only thing I change is the version of pytorch to 0.4.1 (the current conda install) a single epoch now takes 27.99 +/- 0.26
Has anyone else noticed anything like this? |
st100340 | I can’t make an issue, I filled out the template but the “submit new issue” button is greyed out.
Edit: I forgot to add a title; once I did, I was able to post it. |
st100341 | If I need to copy a variable that was created by an operation rather than by the user, and
let the copy have independent memory, how can I do that?
Thank you! |
st100342 | Solved by albanD in post #2
Hi,
You can use the .clone() function directly on the Variable to create a copy. |
st100343 | Hi,
You can use the .clone() function directly on the Variable to create a copy. |
st100344 | indeed,but I am not sure if the gradient can get through correctly during backpropagation in that way:pensive: |
st100345 | @albanD I came across a behavior that I don’t really understand, whereby a cloned variable ends up having no gradient. Could you clarify what is going on here? This is the example:
import torch
from torch.autograd import Variable

def basic_fun(x):
    return 3*(x*x)

def get_grad(x):
    A = basic_fun(x)
    A.backward()
    return x.grad

x = Variable(torch.FloatTensor([1]), requires_grad=True)
xx = x.clone()
# this works fine
print(get_grad(x))
# this doesn't
print(get_grad(xx)) |
st100346 | The clone operation corresponds to making a copy of the Tensor contained in this variable.
That means that xx will be a new Variable with its history linked to it.
When you perform the backward pass, the gradients will only be accumulated in the Variables that you created (we call them leaf Variables) and for which you set requires_grad=True:
import torch
from torch.autograd import Variable

def basic_fun(x):
    return 3*(x*x)

def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad

x = Variable(torch.FloatTensor([1]), requires_grad=True)
xx = x.clone()
# Grad wrt x will work
print(x.creator is None)  # is it a leaf? Yes
print(get_grad(x, x))
print(get_grad(xx, x))
# Grad wrt xx won't work
print(xx.creator is None)  # is it a leaf? No
print(get_grad(xx, xx))
print(get_grad(x, xx)) |
st100347 | Thanks a lot for you answer. So if I understand correctly a Variable needs to have a creator which is None to be considered a leaf, and only in that case the gradient will be accumulated in it.
So then if I want to initialize a new Variable using values from another (effectively making a copy) I would go for xx = Variable(x.data, requires_grad=True), or is there a different option? |
st100348 | It seems that by doing this xx and x still share the same memory.
Try xx = Variable(x.data.clone(), requires_grad=True). |
st100349 | @percqdeng no it won’t, calling .clone() will create a new storage with new memory and copy the content of the original into this new memory. |
st100350 | alan_ayu:
let the copy have an independent memory
That's right. Assuming that the goal is to "let the copy have an independent memory" (by alan_ayu) and "initialize a new Variable using values from another" (by pietromarchesi), we should use x.data.clone().
Otherwise, xx appears to be just a reference to the value in x. |
st100351 | Let's say you want to do the ResNet bypass trick.
At some point you want to create a copy of x and add it to a later result.
How would you do it?
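For what it's worth, a sketch of the usual pattern (BypassBlock is a hypothetical module): in the standard residual layout no explicit copy is needed, since autograd handles a tensor that feeds two branches; clone() is only required if a branch modifies x in-place:
import torch
import torch.nn as nn
import torch.nn.functional as F

class BypassBlock(nn.Module):
    def __init__(self, channels):
        super(BypassBlock, self).__init__()
        self.conv1 = nn.Conv2d(channels, channels, 3, padding=1)
        self.conv2 = nn.Conv2d(channels, channels, 3, padding=1)

    def forward(self, x):
        identity = x  # a plain reference; autograd tracks x through both paths
        out = self.conv2(F.relu(self.conv1(x)))
        return out + identity  # the bypass / skip connection |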
st100352 | Hey guys.
I am in a situation where I want to access the values in a 1D tensor by means of an integer index. So, perhaps something like this:
def basic_fun(x_cloned):
    res = []
    for i in range(len(x)):
        res.append(x_cloned[i] * x_cloned[i])
    print(res)
    return Variable(torch.FloatTensor(res))

def get_grad(inp, grad_var):
    A = basic_fun(inp)
    A.backward()
    return grad_var.grad

x = Variable(torch.FloatTensor([1, 2, 3, 4, 5]), requires_grad=True)
x_cloned = x.clone()
print(get_grad(x_cloned, x))
I am getting the following error message:
[tensor(1., grad_fn=<ThMulBackward>), tensor(4., grad_fn=<ThMulBackward>), tensor(9., grad_fn=<ThMulBackward>), tensor(16., grad_fn=<ThMulBackward>), tensor(25., grad_fn=<ThMulBackward>)]
Traceback (most recent call last):
File "/home/mhy/projects/pytorch-optim/predict.py", line 74, in <module>
print(get_grad(x_cloned, x))
File "/home/mhy/projects/pytorch-optim/predict.py", line 68, in get_grad
A.backward()
File "/home/mhy/.local/lib/python3.5/site-packages/torch/tensor.py", line 93, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "/home/mhy/.local/lib/python3.5/site-packages/torch/autograd/__init__.py", line 90, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn
And I don't understand how using the cloned version of a variable is supposed to keep that variable in the gradient computation. The variable itself is effectively not used in the computation of A, so when you call A.backward(), it should not be part of that operation.
I appreciate your help! |
st100353 | And if I change my basic_fun function to return torch.cat(res), I get the error:
RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated
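A sketch of a possible fix: the original error comes from Variable(torch.FloatTensor(res)) creating a fresh leaf with no history, so nothing in A required grad; building the result with torch.stack instead keeps the graph intact (stack, unlike cat, accepts the 0-dim element products):
def basic_fun(x_cloned):
    res = [x_cloned[i] * x_cloned[i] for i in range(x_cloned.size(0))]
    return torch.stack(res).sum()  # reduce to a scalar so backward() needs no argument |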
st100354 | How can I find out which particular layer or module (ASPP, Vortex Pooling) consumes the most time and operations in a model architecture? |
st100355 | I think you could add some code for timing in your forward function. For example,
def forward(self, x):
    start = time.time()
    out = self.aspp(x)
    print(time.time() - start)
I am new to PyTorch, so I'm not sure if there is another, better way to do it. |
st100356 | You could use the PyTorch bottleneck utility to do so.
Note: if you want to do timing like this, you need to call
torch.cuda.synchronize()
before calling time.time(), due to the asynchronous behavior of CUDA operations.
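Combining both suggestions, a sketch of the corrected per-module timing (self.aspp as in the snippet above):
def forward(self, x):
    torch.cuda.synchronize()
    start = time.time()
    out = self.aspp(x)
    torch.cuda.synchronize()  # wait for the kernels before reading the clock
    print('aspp took {:.3f} ms'.format((time.time() - start) * 1000))
    return out |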
st100357 | torch.clamp(input, min, max, out=None)
Clamp all elements in input into the range [min, max] and return a resulting tensor.
Can clamp() assign out-of-range values (value < min or value > max) to zero, instead of assigning them to min and max respectively? |
st100358 | Possible solution:
zero_tensor = torch.zeros(input.size())
out = torch.where(input > min, input, zero_tensor)
out = torch.where(out < max, out, zero_tensor)
where() is only available in version 0.4 or higher. |
st100359 | You could also do something like
tensor[tensor!=torch.clamp(tensor, min, max)] = 0 |
st100360 | Hi,
I am simply trying to train ResNet-50 on the ImageNet dataset with the code given in the PyTorch examples. No matter what I have tried, the training accuracy remains very low over numerous epochs. I have tried with no change to the parameters, and with lower learning rates, both on single and multiple GPUs. Any idea what needs to be done to get the expected results? Looking at training graphs on the net, it seems that, as in other trainings, the accuracy should rise quickly in the first epochs.
I do get these errors:
/usr/local/lib/python2.7/dist-packages/PIL/TiffImagePlugin.py:742: UserWarning: Corrupt EXIF data. Expecting to read 4 bytes but only got 0.
warnings.warn(str(msg))
but from what I read it seems these shouldn't affect the training.
Thanks |
st100361 | Hello,
Is it possible to get the groups' running stats from the GroupNorm module? If not, why?
Thanks |
st100362 | How can I clip my tensor to a given range? Is there any function like theano.tensor.clip in Theano? |
st100363 | Have a look at torch.clamp: http://pytorch.org/docs/master/torch.html?#torch.clamp |
st100364 | Both versions are available:
torch.clamp: not in-place
Tensor.clamp_: the in-place version of clamp()
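A quick sketch of both forms:
import torch

t = torch.tensor([-2.0, 0.5, 3.0])
print(torch.clamp(t, min=-1, max=1))  # tensor([-1.0000,  0.5000,  1.0000])
t.clamp_(min=-1, max=1)               # in-place counterpart
print(t) |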
st100365 | I wanted to write a custom loss function by subclassing the _Loss superclass. At first I didn’t pay much attention to the underscore until the following import failed:
from torch.nn.modules import _Loss
After that I noticed that _Loss is not included in the imports within the __init__.py.
While this can easily be circumvented by importing it directly
from torch.nn.modules.loss import _Loss
I was left wondering what the intentions might be.
Shall custom loss functions not subclass _Loss but rather Module directly? If not, why protect and 'hide' it? |
st100366 | Hi everybody,
I want to predict different images using my trained network.
For some reason this code works only with one image; if I want to use various other images it doesn't work.
I got the error:
RuntimeError: invalid argument 2: size '[-1 x 400]' is invalid for input with 880 elements at /opt/conda/conda-bld/pytorch_1532579805626/work/aten/src/TH/THStorage.cpp:80
So why does this work with only one picture?
Here is the full code:
import torch
from pytorch import Net
import torchvision.transforms as transforms
from PIL import Image
import numpy as np
transform = transforms.Compose([
    transforms.Resize(32),
    transforms.ToTensor(),
    transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))
])
image = Image.open("plane.png").convert('RGB')
x = transform(image)
x = x.unsqueeze(0)
pix = np.array(x) #convert image to numpy array
print((image))
image.show()
net = Net()
net.eval()
img = torch.from_numpy(pix)
net = torch.load('pytorch_Network2.h5')
idx_to_class = {
    0: 'airplane',
    1: 'automobile',
    2: 'bird',
    3: 'cat',
    4: 'deer',
    5: 'dog',
    6: 'frog',
    7: 'horse',
    8: 'ship',
    9: 'truck'
}
output = net(img)
pred = torch.argmax(output, 1)
for p in pred:
    cls = idx_to_class[p.item()]
    print(cls) |
st100367 | Solved by IK_KLX in post #4
The solution is to resize the image to a fixed size. In my situation it is
image = image.resize((860, 368)) |
st100368 | The solution is to resize the image to a fixed size. In my situation it is
image = image.resize((860, 368)) |
st100369 | Hi
I am trying to use a tensor which stores outputs of each LSTM operation.
For example,
def __init__(self):
    self.register_buffer('h_list', torch.FloatTensor(10, 5, 20).detach())
    self.h_list = self.h_list.cuda()
    self.net = net

def reset_h_list(self):
    self.register_buffer('h_list', torch.FloatTensor(10, 5, 20).detach())
    self.h_list = self.h_list.cuda()

def forward(self, input):
    h = self.h_list[:, -5, :].contiguous().view(10, 100, -1)
    next_h = self.net(input)
    self.h_list = torch.cat((self.h_list, next_h), dim=1)
    return next_h
h_list should be re-initialized to size (10, 5, 20) at the end of the LSTM run.
Additionally, it is not trainable data at all, since it just stores the outputs to stack the features of every LSTMCell iteration.
However, the memory usage keeps growing when I run this code.
If you want additional explanation, please ask me.
Thank you |
st100370 | Suppose my loss is computed with a variable which was the output of my model, i.e.
loss = A (will constantly change in the subsequent iterations depending on the gradient) + B (was an output of the model, but is fixed after that)
How do I call loss.backward()?
I keep getting
RuntimeError: Trying to backward through the graph a second time, but the buffers have already been freed. Specify retain_graph=True when calling backward the first time.
The problem seems to be that I can’t let B be fixed, since it was the output of the model and it somehow thought it needs some buffers of B.
I do not want to retain any buffer (there shouldn’t be any reason I need to, should there? B is supposed to be fixed throughout, just passing through the model once), I just want a fresh iteration of backpropagation with A being updated based on the gradient and B fixed. |
st100371 | As far as I understand, you should at least retain the graph of B, since PyTorch saves intermediate tensors in that graph, not in the components of the nn module.
If your loss is really constructed in the form A + B, you can call A.backward() and B.backward(retain_graph=True) separately. |
st100372 | Ermm, I can't possibly call A.backward(), it is not a scalar. To be exact, my loss calculation is as follows:
distance_target = torch.sum((feature_adversarial - feature_target.clone()) ** 2)
distance_identity = torch.sum((feature_adversarial - feature_identity.clone()) ** 2)
loss = distance_target + 2*distance_identity
loss.backward()
The features are vectors of size (1, 512).
feature_target and feature_identity are outputs of my model as well, but they are fixed thereafter.
The only part that changes is feature_adversarial.
Are you suggesting that I instantiate two separate instances of the same model? |
st100373 | So your code has a form like this?
feature_target = model(input1)
feature_identity = model(input2)
for i, data in enumerate(loader):
    feature_adversarial = model(data)
    distance_target = torch.sum((feature_adversarial - feature_target.clone()) ** 2)
    distance_identity = torch.sum((feature_adversarial - feature_identity.clone()) ** 2)
    loss = distance_target + 2*distance_identity
    loss.backward()
In that case you should call loss.backward(retain_graph=True) instead of loss.backward() to retain the saved intermediate tensors that are necessary to compute gradients. Otherwise you would lose the information needed for the gradient computation.
If that's not the case, please tell me in detail what you mean by "but they are fixed thereafter". Does it mean you want feature_target and feature_identity not to affect the model update? |
st100374 | Yup, just like the code you posted. So feature_target and feature_identity are fixed in the loop.
Just to understand more about retain_graph: why should I retain it? Which information from the (i-1)-th iteration does the i-th iteration need? As far as I know, there should be none; feature_target and feature_identity should be treated as constants. What am I missing here?
feature_target = model(input1)
feature_identity = model(input2)
feature_adversarial = model(data)
for i in range(100):
    distance_target = torch.sum((feature_adversarial - feature_target.clone()) ** 2)
    distance_identity = torch.sum((feature_adversarial - feature_identity.clone()) ** 2)
    loss = distance_target + 2*distance_identity
    loss.backward()
    feature_adversarial = update(feature_adversarial)
    model(feature_adversarial)
My code is something more like this. |
st100375 | Basically the retain_graph option retains the intermediate tensors created during construction of the computation graph. Without it, the backward function destroys the whole connected computation graph.
See this Stack Overflow question: "What does the parameter retain_graph mean in the Variable's backward() method?"
The picture in that question could be helpful.
In this case, what we want to retain is the intermediate tensors created from
feature_target = model(input1)
feature_identity = model(input2)
part, which will be needed to compute gradients later.
If you call loss.backward() without the retain_graph option, you will lose information not only about feature_adversarial of the i-th iteration, but also about feature_target and feature_identity, since the loss computation graph contains their computation graphs as well.
Of course, the information for feature_adversarial would also be retained if you use loss.backward(retain_graph=True), which is wasteful.
But in my personal opinion, PyTorch will probably destroy the computation graph of feature_adversarial from the i-th iteration when the new feature_adversarial is constructed in the (i+1)-th iteration. |
st100376 | I don't quite agree with you that we need any intermediate tensors created from
feature_target = model(input1)
feature_identity = model(input2)
The only tensors required for the loss computation are feature_target and feature_identity, not their intermediate tensors.
I will use retain_graph for now, but it doesn't feel right to me. Hopefully someone from the dev team can help explain the best way to do this.
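One hedged sketch, for what it's worth: if feature_target and feature_identity really are constants, detaching them once outside the loop frees their graphs entirely, so retain_graph is no longer needed as long as the adversarial features are recomputed each iteration (update is the poster's hypothetical step from the code above):
feature_target = model(input1).detach()    # detached: treated as constants,
feature_identity = model(input2).detach()  # their graphs are freed immediately
adv = data.clone()
for i in range(100):
    feature_adversarial = model(adv)       # fresh graph every iteration
    loss = torch.sum((feature_adversarial - feature_target) ** 2) \
           + 2 * torch.sum((feature_adversarial - feature_identity) ** 2)
    loss.backward()                        # no retain_graph needed
    adv = update(adv)                      # hypothetical update using the gradients |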
st100377 | I read the discussion regarding num_workers,
but I'm still confused.
Is a larger num_workers better (as long as you don't get a memory error)?
Can a larger num_workers lead to better or worse results? Does it even make any difference? |
st100378 | isalirezag:
Is a larger num_workers better (as long as you don't get a memory error)?
No, a really large number of workers increases communication overhead and can even slow down data loading.
isalirezag:
Can a larger num_workers lead to better or worse results? Does it even make any difference?
It doesn't affect the results.
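A quick sketch of the knob in question (dataset is a placeholder; a handful of workers per GPU is a common starting point to tune from):
from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)  # affects loading speed only, not results |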
st100379 | I've encountered a problem when installing PyTorch from source (the 'master' branch) in Anaconda, following the instructions. I'm interested in installing from source because I am attempting to make PyTorch support my old GPU.
Can anyone explain why the 'NVTOOLEXT_HOME' env variable needs to be set? Is anything potentially missing in my installation steps?
Thanks,
...
(pytorchtest) C:\myproject\python>call "C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Auxiliary\Build\vcvarsall.bat" x64 -vcvars_ver=14.11
**********************************************************************
** Visual Studio 2017 Developer Command Prompt v15.8.3
** Copyright (c) 2017 Microsoft Corporation
**********************************************************************
[ERROR:vcvars.bat] Toolset directory for version '14.11' was not found.
[ERROR:VsDevCmd.bat] *** VsDevCmd.bat encountered errors. Environment may be incomplete and/or incorrect. ***
[ERROR:VsDevCmd.bat] In an uninitialized command prompt, please 'set VSCMD_DEBUG=[value]' and then re-run
[ERROR:VsDevCmd.bat] vsdevcmd.bat [args] for additional details.
[ERROR:VsDevCmd.bat] Where [value] is:
[ERROR:VsDevCmd.bat] 1 : basic debug logging
[ERROR:VsDevCmd.bat] 2 : detailed debug logging
[ERROR:VsDevCmd.bat] 3 : trace level logging. Redirection of output to a file when using this level is recommended.
[ERROR:VsDevCmd.bat] Example: set VSCMD_DEBUG=3
[ERROR:VsDevCmd.bat] vsdevcmd.bat > vsdevcmd.trace.txt 2>&1
Building wheel torch-0.5.0a0+70d93f4
Traceback (most recent call last):
File "C:\myproject\python\pytorch\setup.py", line 919, in <module>
nvtoolext_lib_path = NVTOOLEXT_HOME + '/lib/x64/'
TypeError: unsupported operand type(s) for +: 'NoneType' and 'str' |
st100380 | There is a problem when installing CUDA 9.2 with VS integration. I had to deselect VS integration to finish the CUDA installation. It turns out that NVIDIA Nsight is looking for 'devenv.com' in a different path. So it is a problem with the Microsoft Visual Studio Community 2017 installation; reinstalling does not work. After applying the workaround [1], NVTX can be installed successfully.
However, when attempting to install from source again, it complains:
c:\program files\nvidia gpu computing toolkit\cuda\v9.2\include\crt/host_config.h(133): fatal error C1189: #error: -- unsupported Microsoft Visual Studio version! Only the versions 2012, 2013, 2015 and 2017 are supported! [C:\oak-project\python\pytorch\build\caffe2\caffe2_gpu.vcxproj]
I only have VS 2017 installed and set ‘-vcvars_ver=14.15’ (by checking the actual directory in C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\VC\Tools\MSVC)
[1] https://devtalk.nvidia.com/default/topic/1016018/nsight-visual-studio-edition/nsight-installation-problem-windows-10-setup-wizard-ended-prematurely-because-of-an-error/ 10 |
st100381 | Build can continue by manually changing line 131 to #if _MSC_VER < 1600 || _MSC_VER > 1915
see also: https://stackoverflow.com/questions/47645436/cuda-9-unsupported-error-with-vs-2017 18 |
st100382 | Now I'm stuck compiling caffe2. It complains:
C:\Program Files (x86)\Microsoft Visual Studio\2017\Community\Common7\IDE\VC\VCTargets\Microsoft.CppCommon.targets(209,5): error MSB6006: "cmd.exe" exited with code 1. [C:\python\pytorch\build\caffe2\caffe2_gpu.vcxproj] |
st100383 | Would you please give me the full log? This error message is not the root cause of the build failure. |
st100384 | I’m afraid that you are using the wrong version of VS. I have not tested VS 14.15 before but actually you can try to install 14.11. That’s the version we used in the build. |
st100385 | I am trying to remove the structure in images (say a 28x28 MNIST digit image) while keeping the distribution of each pixel the same. To achieve this I need to independently permute each pixel along the batch dimension. I could use torch.randperm to shuffle the indices for each pixel or numpy.random.permutation to do the permutation directly. However both of these functions only operate on the first dimension of the tensor which means I would need to run them in a 28x28 for loop.
Is there a more computationally efficient way to do this? |
st100386 | If you are permuting along the batch dimension each sample will have pixel information of some other samples from the batch. Is that your intention or do you rather want to permute the pixels in each sample in a defined manner?
In the latter case, you can just use this code sample:
dataset = datasets.MNIST(root='./data',
download=False,
transform=transforms.ToTensor())
shuffle_idx = torch.randperm(28*28)
data = [dataset[i][0] for i in range(10)]
target = torch.stack([dataset[i][1] for i in range(10)], dim=0)
# Show first sample
plt.figure()
plt.imshow(data[0][0])
# Permute the pixels
data = torch.stack([x.view(-1)[shuffle_idx].view(1, 28, 28) for x in data], dim=0)
# Show after permutation
plt.figure()
plt.imshow(data[0][0]) |
st100387 | Thank you for your reply. However correct me if I’m wrong but this code will permute all the pixels within the same image. This means that pixels in the corner of the image which are typically always dark could be swapped for pixels in the middle which have much more variation. What I am trying to achieve is for instance to randomly replace the top left corner pixel of image 1 with the top left corner pixel of another image within the same batch and so on for all pixels so as to preserve the distribution of each pixel but remove the dependencies between pixels of the same image.
The best implementation I could come up with is below but I am wondering if there is a more computationally efficient solution which avoids iterating over every pixel.
from torchvision import datasets, transforms
import matplotlib.pyplot as plt
import torch
dataset = datasets.MNIST(root='./data',
download=True,
transform=transforms.ToTensor())
n = 10
data = torch.stack([dataset[i][0] for i in range(n)], dim=0)
target = torch.stack([dataset[i][1] for i in range(n)], dim=0)
# Show first sample
plt.figure()
plt.imshow(data[0][0])
# Permute the pixels
data_pixelshuffled = torch.stack([x[torch.randperm(n)] for x in data.view(n,-1).t()],
dim=0).t().view(-1,28,28)
# Show after permutation
plt.figure()
plt.imshow(data_pixelshuffled[0])
plt.show() |
st100388 | The code looks alright.
You might pre-compute the indices and use scatter, which might be a bit faster:
idx = torch.stack([torch.randperm(n) for _ in range(28*28)]).t().long()
data_shuffled = torch.zeros(10, 784).scatter_(0, idx, data.view(10, -1)).view(10, 28, 28) |
st100389 | I have my PyTorch model and I need to make an android app using it for a demo. I was thinking of somehow converting it into a tensorflow model by manually copying weights using a dictionary but is there any easier way? Thanks in advance |
st100390 | Yeah, you’ll have to use something like TensorFlow or Caffe2. We’re working on ways to improve exporting models, but for now you’ll have to:
Get the weights via model.state_dict()
Convert the Tensors to NumPy arrays
Copy the weights into your TensorFlow or Caffe2 model |
st100391 | I’ve been having lots and lots of trouble trying to convert my model to tensorflow. Does torch-android still have support? I’m unable to build it for some reason. |
st100392 | Maybe it would be better to have the model isolated on the backend of your application, since you don't have the ability to run it directly on devices? |
st100393 | @colesbury: Have there been any recent developments on this topic since you posted your last answer (Aug’17)? Or is the process still roughly the same? Thank you so much. |
st100394 | I am going to build a computer with 4 GPUs. But, as far as I know, we cannot utilize all the 4 GPUs with x16 PCI lanes. Thus, I am considering two options as follows:
Two machines: each has 1 CPU and 2 GPUs, so that they can operate at x16 lanes. Then, connecting the two machines by using the “Distributed” package.
One machine: it has 1 CPU and 4 GPUs. Thus, they operate at x8 lanes. Then, use “Dataparallel” to utilize all the 4 GPUs.
I currently have no machines to test the above two settings.
If anyone has experience, please share it.
Thanks! |
st100395 | What happens if loss.item() is not used when logging the loss values?
I was using AverageMeter (from the ImageNet tutorial) to store losses and forgot to use loss.item(). I believed this should have increased my GPU utilization and caused an error.
But instead something strange happened: it started increasing RAM utilization and literally froze my system. Why did this happen (and not what I expected)? |
st100396 | Solved by tom in post #2
Tensors come with metadata (information about stride, size, type, …) in CPU memory and you probably kept a lot of these around, possibly also some graphs to be used in gradient calculations. When you only have a scalar value, the CPU memory usage probably exceeds the GPU memory use by quite a margin… |
st100397 | Tensors come with metadata (information about stride, size, type, …) in CPU memory and you probably kept a lot of these around, possibly also some graphs to be used in gradient calculations. When you only have a scalar value, the CPU memory usage probably exceeds the GPU memory use by quite a margin, so if you have a small machine with a sizeable GPU…
Best regards
Thomas |
st100398 | Is there any API/MPI I can use to see the real-time profile when I load a model to GPU for training? |
st100399 | I generally use torch.cuda.memory_allocated to check how much memory is taken up by my tensors (models, inputs, etc.).
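For example, a minimal sketch:
import torch

x = torch.randn(1024, 1024, device='cuda')
print(torch.cuda.memory_allocated())  # bytes currently held by allocated tensors |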