instruction | input | output | source
---|---|---|---|
What is the impact of `pos_weight` argument in `BCEWithLogitsLoss`? | According to the pytorch doc of nn.BCEWithLogitsLoss, pos_weight is an optional argument that takes the weight of positive examples. I don't fully understand the statement "pos_weight > 1 increases recall and pos_weight < 1 increases precision" on that page. How do you understand this statement?
| The binary cross-entropy with logits loss (nn.BCEWithLogitsLoss, equivalent to F.binary_cross_entropy_with_logits) is a sigmoid layer (nn.Sigmoid) followed by a binary cross-entropy loss (nn.BCELoss). The general case assumes you are in a multi-label classification task, i.e. a single input can be labeled with multiple classes. One common sub-case is having a single class: the binary classification task. Define q as your tensor of predicted logits and p as the ground-truth tensor in [0,1] corresponding to the true probabilities for each class.
The explicit formulation for the binary cross-entropy would be:
z = torch.sigmoid(q)
loss = -(w_p*p*torch.log(z) + (1-p)*torch.log(1-z))
introducing the w_p, the weight associated with the true label for each class. Read this post for more details on the weighting scheme used by the BCELoss.
For a given class:
precision = TP / (TP + FP)
recall = TP / (TP + FN)
Then if w_p > 1, it increases the weight on the positive classification (classifying as true). This will tend to increase false positives (FP), thus decreasing the precision. Similarly, if w_p < 1, we are decreasing the weight on the positive class, which will tend to increase false negatives (FN), which decreases recall.
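To make this concrete, here is a minimal sketch (the logit and label values are arbitrary):
import torch
import torch.nn as nn

q = torch.tensor([0.8, -1.2, 0.3])   # raw logits
p = torch.tensor([1.0, 0.0, 1.0])    # ground-truth labels
# pos_weight > 1 penalizes missed positives more heavily, pushing recall up
criterion = nn.BCEWithLogitsLoss(pos_weight=torch.tensor(3.0))
print(criterion(q, p))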
| https://stackoverflow.com/questions/71051342/ |
Is it possible to get dataset file information at the time of testing a model? | My dataset code is like the one below; here, X_test is a list[list] and y_test is a list[Path]
The first.py file
self.test_dataset = LongDataset(
X_path=X_test,
y_path=y_test,
transform=val_transforms,
)
The rest is as usual (the dataloader)
def test_dataloader(self):
return DataLoader(self.test_dataset, batch_size=1, num_workers=8)
In the second.py file
The DataModule
data_module = DataModuleLong(batch_size=3,)
The Trainer
trainer = Trainer(gpus=1)
trainer.test(
model=model,
ckpt_path=ckpt_path,
datamodule=data_module,
)
The test_step() in the third.py file
def test_step(self, batch, batch_idx: int):
inputs, targets = batch
logits = self(inputs)
...
...
...
Now, is it possible to print (in test_step()) the (inputs, targets) filename (or the full path) I am sending from test_dataset as (X_path, y_path)?
| Essentially, what you want to do is get the index of each batch element in the batch returned by the dataloader object; from there it is trivial to index the dataset to get the desired data elements (in this case, file paths).
Now the short answer is that there is no directly implemented way to return this data using the dataloader. However, there are a few workarounds:
Pass your own BatchSampler or Sampler object to the DataLoader constructor. Unfortunately, there's not a simple way to query the Sampler for the current batch, because it relies on generators (yielding the next sample consumes it and loads the next one). This is the same reason why you can't directly access the batch indices of the DataLoader. So to use this method, you'd have to pass a sampler wherein you know a priori which indices will be returned on the i-th query to the sampler. Not an ideal solution.
Create a custom dataset object - this is actually extremely easy to do: simply inherit from torch.utils.data.Dataset and implement the __init__, __len__ and __getitem__ methods. The __getitem__ method takes an index (let's say idx) as input and returns the element of the dataset at that index. You can essentially copy the code for the existing LongDataset line for line, but simply append idx to the values returned from the __getitem__ method. I would demonstrate, but you don't indicate where the LongDataset code comes from.
def __getitem__(self,idx):
... #load files, preprocess, etc.
return data, idx
Now the dataloader will automatically batch the idx values along with the data, so you can simply replace the existing line with:
inputs, targets, indices = batch
data_paths = [self.test_dataset.file_paths[idx] for idx in indices]
The second solution is by far preferable, as it is more transparent and easier to understand.
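For completeness, a minimal self-contained sketch of that second approach (PathDataset and its placeholder loading logic are hypothetical stand-ins for LongDataset):
import torch
from torch.utils.data import Dataset, DataLoader

class PathDataset(Dataset):
    def __init__(self, file_paths):
        self.file_paths = file_paths          # list of paths, as in the question

    def __len__(self):
        return len(self.file_paths)

    def __getitem__(self, idx):
        data = torch.randn(3)                 # placeholder for the real file loading
        target = torch.tensor(0)              # placeholder label
        return data, target, idx              # idx rides along with the batch

dataset = PathDataset(["a.nii", "b.nii", "c.nii"])
for data, target, indices in DataLoader(dataset, batch_size=2):
    paths = [dataset.file_paths[i] for i in indices]
    print(paths)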
| https://stackoverflow.com/questions/71070249/ |
std::vector, how does it store values with libtorch tensors? | When I was collecting trainable parameters as std::vector<torch::Tensor>, I realized that it is type-cast to torch::autograd::VariableList.
With this structure, how does the vector access its element? Does it store the value's memory space even without explicitly having to call them by pointer or reference?
So I've tested with some simple codes like this.
With regular int data type:
int a = 10;
std::vector<int> b;
b.push_back(a);
b[0] += 10;
cout << b << endl;
cout << a << endl;
As expected, this produces 20 for b (only one element), and 10 for a (original int data)
However, for the torch::Tensor with the same style of codes:
torch::Tensor t = torch::ones({ 1 });
std::vector<torch::Tensor> tv;
tv.push_back(t);
tv[0] += 10;
cout << t << endl;
cout << tv << endl;
Just like with the int vector, I thought tv would produce 11 (a one-element vector), and t would stay 1 (shape 1).
However, the results for both tv and t are updated to 11.
Although the operation is done on the vector, the original tensor value is also updated. Why does this happen?
My guess is torch::autograd::Variable list stores its element by memory address...?
Also, when you do,
torch::Tensor t = torch::ones({ 1 });
std::vector<torch::Tensor> tv;
tv.push_back(t);
tv[0] = tv[0] + 10;
cout << t << endl;
cout << tv << endl;
Only tv value is updated to 11 and original tensor t is the same 1.
I mean this makes collecting trainable parameters and passing them to the optimizer much easier, but I am really not sure about how this happens.
Could you please kindly explain to me why these cases are all different and how the vector stores the elements in each case?
Thank you very much for your help in advance!
| This is not strange behaviour of std::vector, it is strange behaviour of torch::Tensor. The following should also exhibit it.
int a = 10;
int b = a;
b += 10;
std::cout << b << std::endl;
std::cout << a << std::endl;
torch::Tensor c = torch::ones({ 1 });
torch::Tensor d = c;
d += 10;
std::cout << d << std::endl;
std::cout << c << std::endl;
torch::Tensor e = torch::ones({ 1 });
torch::Tensor f = e;
f = f + 10;
std::cout << f << std::endl;
std::cout << e << std::endl;
A std::vector<T> allocates some space, and constructs T instances in that space. The particular constructor used depends on how you insert. In your case push_back uses the copy constructor of the type (it would use the move constructor if given an rvalue). The catch is that torch::Tensor has shallow-copy semantics: its copy constructor copies a handle to the same underlying storage, not the data itself. So an in-place operation such as += on the copy mutates the shared storage and is visible through t, whereas f = f + 10 builds a brand-new tensor and rebinds f to it, leaving e untouched.
| https://stackoverflow.com/questions/71082120/ |
How do I solve error: Tensor object has no attribute 'fold' | I have a method that divides an image into patches and changes the colour of a specified patch. I tried merging the patches together after the manipulation and I got the error: AttributeError: 'Tensor' object has no attribute 'fold'
def perturb_patch(img, patch_idx, patch_size, stride):
img = img.unsqueeze(0)
patches = img.unfold(2, patch_size, stride).unfold(3, patch_size, stride)
patches = patches.reshape(1, 3, -1, patch_size, patch_size)
patches = patches.squeeze(0).permute(1, 0, 2, 3)
patches[patch_idx][0, :, :] = 0.09803922
patches[patch_idx][1,:, :] = 0.21333333
patches[patch_idx][2,:, :] = 0.61176471
merged_patches = patches.fold(img.shape[-2:], kernel_size=16, stride=16, padding=0)
return merged_patches
When I tried returning patches instead of merged_patches with new_img = perturb_patch(img, 6, 16, 16), I could visualize the patches and the manipulated patch was noticeable. How do I merge these patches together to form the original image of size (3, 224, 224)?
| So, I was able to find an alternative way to merge the patches together with pytorch's view method here.
updated code:
def perturb_patch(img, patch_idx, patch_size, stride):
img = img.unsqueeze(0)
patches = img.unfold(2, patch_size, stride).unfold(3, patch_size, stride)
patches = patches.reshape(1, 3, -1, patch_size, patch_size)
patches = patches.squeeze(0).permute(1, 0, 2, 3)
patches[patch_idx][0, :, :] = 0.09803922
patches[patch_idx][1,:, :] = 0.21333333
patches[patch_idx][2,:, :] = 0.61176471
unsqueezed_patch = patches.unsqueeze(0)
grid_size = (14, 14)
batch_size, num_patches, c, height, width = unsqueezed_patch.size()
image = unsqueezed_patch.view(batch_size, grid_size[0], grid_size[1], c, height, width)
output_height = grid_size[0] * height
output_width = grid_size[1] * width
image = image.permute(0, 3, 1, 4, 2, 5).contiguous()
image = image.view(batch_size, c, output_height, output_width)
return image
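As a side note on the original error: fold is not a method on Tensor, but the function form torch.nn.functional.fold can also reassemble the patches, provided they are first flattened to the (N, C*k*k, L) layout it expects. A minimal sketch with the shapes from the question:
import torch
import torch.nn.functional as F

# shapes from the question: 224x224 image, 16x16 patches, stride 16 -> 196 patches
patches = torch.randn(1, 3, 196, 16, 16)            # (batch, channels, num_patches, h, w)
flat = patches.permute(0, 1, 3, 4, 2).reshape(1, 3 * 16 * 16, 196)
merged = F.fold(flat, output_size=(224, 224), kernel_size=16, stride=16)
print(merged.shape)                                  # torch.Size([1, 3, 224, 224])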
| https://stackoverflow.com/questions/71085164/ |
Unknown category '2' encountered. Set `add_nan=True` to allow unknown categories pytorch_forecasting | error: "Unknown category '2' encountered. Set add_nan=True to allow unknown categories" while creating time series dataset in pytorch forecasting.
training = TimeSeriesDataSet(
train,
time_idx="index",
target=dni,
group_ids=["Solar Zenith Angle", "Relative Humidity", "Dew Point", "Temperature", "Precipitable Water", "Wind Speed"],
min_encoder_length=max_encoder_length // 2, # keep encoder length long (as it is in the validation set)
max_encoder_length=max_encoder_length,
min_prediction_length=1,
max_prediction_length=max_prediction_length,
static_reals=["Wind Direction"],
time_varying_known_reals=["index", "Solar Zenith Angle", "Relative Humidity", "Dew Point", "Temperature", "Precipitable Water"],
# time_varying_unknown_categoricals=[],
time_varying_unknown_reals=[dhi, dni, ghi],
categorical_encoders={data.columns[2]: NaNLabelEncoder(add_nan=True)},
target_normalizer=GroupNormalizer(
groups=["Solar Zenith Angle", "Relative Humidity", "Dew Point", "Temperature", "Precipitable Water", "Wind Speed"], transformation="softplus"
), # use softplus and normalize by group
add_relative_time_idx=True,
add_target_scales=True,
add_encoder_length=True,
)
| Try adding pytorch_forecasting.data.encoders.NaNLabelEncoder(add_nan=True), as in this example:
max_prediction_length = 1
max_encoder_length = 27
training = TimeSeriesDataSet(
sales_train,
time_idx='dayofyear',
target="QTT",
group_ids=['S100','I100','C100','C101'],
min_encoder_length=0,
max_encoder_length=max_encoder_length,
min_prediction_length=1,
max_prediction_length=max_prediction_length,
static_categoricals=[],
static_reals=['S100','I100','C100','C101'],
time_varying_known_categoricals=[],
time_varying_known_reals=['DATE'],
time_varying_unknown_categoricals=[],
time_varying_unknown_reals=['DATE'],
categorical_encoders={
'S100': pytorch_forecasting.data.encoders.NaNLabelEncoder(add_nan=True),
'I100':pytorch_forecasting.data.encoders.NaNLabelEncoder(add_nan=True),
'C100':pytorch_forecasting.data.encoders.NaNLabelEncoder(add_nan=True),
'C101':pytorch_forecasting.data.encoders.NaNLabelEncoder(add_nan=True)
},
add_relative_time_idx=True,
add_target_scales=True,
add_encoder_length=True,
allow_missing_timesteps=True
)
print ('Executado')
| https://stackoverflow.com/questions/71098518/ |
Input 1D array with float datatype in C++ | I would like to input row = [0.160625, 0.967468297, 3.520480583, 0.862454481, -0.341933766] as an entry, which is float type, and pass it to the forward module. I was translating Python to C++,
and I got a syntax error. Support needed. Thanks!
// run not okay
// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
row = [0.190625, 0.957468297, 4.520480583, 0.962454481, -0.241933766]
inputs.push_back(torch::tensor(row));
// Execute the model and turn its output into a tensor.
at::Tensor output = module.forward(inputs).toTensor();
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/4) << '\n';
I would like to use row as instance and get the output.
When I use dummy values such as torch::ones({1, 5}), the app runs ok.
However, when I pass the real values as row - the float array - the app is aborted.
// run ok for this case
// Create a vector of inputs.
std::vector<torch::jit::IValue> inputs;
inputs.push_back(torch::ones({ 1, 5}));
// Execute the model and turn its output into a tensor.
at::Tensor output = module.forward(inputs).toTensor();
std::cout << output.slice(/*dim=*/1, /*start=*/0, /*end=*/4) << '\n';
| Did you try replacing [] with {} as mentioned before?
float row[] = { 0.190625, 0.957468297, 4.520480583, 0.962454481, -0.241933766 };
| https://stackoverflow.com/questions/71103057/ |
stylegan3 stylegan2-ada tensor mismatch error for every 256 or 512 flickr related model | Is anyone getting the same tensor size mismatch when trying to finetune on ffhq, ffhqu or celebahq models with stylegan3 (and with --cfg=stylegan2)?
With afhqv2 and metfaces I had no problems at 512 and 1024 sizes.
Error:
...
File "/home/ubuntu/stylegan3/training/training_loop.py", line 162, in training_loop
misc.copy_params_and_buffers(resume_data[name], module, require_all=False)
File "/home/ubuntu/stylegan3/torch_utils/misc.py", line 162, in copy_params_and_buffers
tensor.copy_(src_tensors[name].detach()).requires_grad_(tensor.requires_grad)
RuntimeError: The size of tensor a (512) must match the size of tensor b (256) at non-singleton dimension 0
example command:
python "train.py" --outdir=training-runs --cfg=stylegan3-r --data="datasets/256.zip" --gpus=1 --batch=16 --batch-gpu=16 --gamma=6.6 --mirror=1 --kimg=2 --snap=5 --resume=https://api.ngc.nvidia.com/v2/models/nvidia/research/stylegan3/versions/1/files/stylegan3-r-ffhqu-256x256.pkl
I've verified I was passing 256 images built with the tool:
python dataset_tool.py --source="img/" --dest="datasets/256.zip" --resolution='256x256'
Note: I was able to finetune only with this version stylegan2-ffhq-512x512.pkl
| If you are trying to do transfer learning on "stylegan3-r-ffhqu-256x256.pkl", you should add
--cbase=16384
in your python "train.py" ... command line.
| https://stackoverflow.com/questions/71103106/ |
Intermediate layer outputs pytorch | I have Alexnet neural network:
class AlexNet(nn.Module):
def __init__(self, num_classes=100):
super(AlexNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(256 * 6 * 6, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes),
)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), 256 * 6 * 6)
x = self.classifier(x)
return x
I am trying to get the outputs of the intermediate layers (for example the penultimate layer) with a backward hook, but I couldn't get it to work.
| According to this answer, you have to split your model into different parts and create methods to access those parts, such as:
class AlexNet(nn.Module):
def __init__(self, num_classes=100):
super(AlexNet, self).__init__()
self.features = nn.Sequential(
nn.Conv2d(3, 64, kernel_size=11, stride=4, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(64, 192, kernel_size=5, padding=2),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
nn.Conv2d(192, 384, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(384, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.Conv2d(256, 256, kernel_size=3, padding=1),
nn.ReLU(inplace=True),
nn.MaxPool2d(kernel_size=3, stride=2),
)
self.classifier = nn.Sequential(
nn.Dropout(),
nn.Linear(256 * 6 * 6, 4096),
nn.ReLU(inplace=True),
nn.Dropout(),
nn.Linear(4096, 4096),
nn.ReLU(inplace=True),
nn.Linear(4096, num_classes),
)
def getFeatures(self,x):
x = self.features(x)
return x.view(x.size(0), 256 * 6 * 6)
def forward(self, x):
x = self.features(x)
x = x.view(x.size(0), 256 * 6 * 6)
x = self.classifier(x)
return x
This approach is quite common and you can find plenty of examples.
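Alternatively, the hook route from the question works without modifying the class at all - for intermediate activations you want a forward hook rather than a backward hook (backward hooks capture gradients). A minimal sketch, assuming the AlexNet class defined above:
import torch

model = AlexNet()                      # the class defined above
activations = {}

def save_activation(module, inputs, output):
    activations["features"] = output.detach()

handle = model.features.register_forward_hook(save_activation)
_ = model(torch.randn(1, 3, 224, 224))
print(activations["features"].shape)   # torch.Size([1, 256, 6, 6])
handle.remove()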
| https://stackoverflow.com/questions/71110235/ |
Pytorch Geometric Datasets | I need your help. I have two sets of graph-structured data, one from Open Graph Benchmark (OGB) and another created with torch_geometric.data.Dataset from my own data. The data looks like:
Data(edge_index=[2, 88], edge_attr=[88, 3], x=[39, 9], y=[1, 1]) #OGB
Data(x=[23, 9], edge_index=[2, 48], edge_attr=[48, 2], y=[1]) #PyG
I am trying to use a framework developed using OGB functions; this doesn't work with data created using PyG. For example, the first part of the framework loads and splits the dataset into train, val and test:
# Set the random seed
random.seed(random_seed)
np.random.seed(random_seed)
# Create data loaders
split_idx = dataset.get_idx_split() # train/val/test split
loader_dict = {}
for phase in split_idx:
batch_size = 32
loader_dict[phase] = DataLoader(dataset[split_idx[phase]], batch_size=batch_size, shuffle=False)
When I run this code with a native OGB dataset I have no problems; when I use the PyG data it returns the error:
AttributeError
This is strange because they are both PyTorch objects; the only difference is that the OGB dataset is an InMemoryDataset and the PyG one is a 'Larger' dataset (https://pytorch-geometric.readthedocs.io/en/latest/notes/create_dataset.html). Is there any way to fix this without having to change the source code?
Thanks!
| If you want to use the same code, you need to implement get_idx_split for your own dataset.
You can find the desired return structure in the OGB GitHub, e.g. here:
def get_idx_split(self):
< ... do something to retrieve train/test/validation set>
return {'train': train_idx, 'valid': valid_idx, 'test': test_idx}
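For instance, inside your Dataset class, a hypothetical random 80/10/10 split by index could look like this (the ratios and the use of randperm are placeholders for whatever split your data actually needs):
import torch

def get_idx_split(self):
    n = len(self)
    perm = torch.randperm(n)
    train_end, valid_end = int(0.8 * n), int(0.9 * n)
    return {"train": perm[:train_end],
            "valid": perm[train_end:valid_end],
            "test": perm[valid_end:]}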
| https://stackoverflow.com/questions/71123148/ |
Extracting weights from SGD algorithm | So I am implementing SGD for a binary classification problem. There are 2 classes of points and I want to plot the decision boundary but I'm not sure how to extract the weights from the code to plot it.
Here is the code:
def train_model(train_dl, model):
# define the optimization
criterion = nn.BCELoss(reduction='none')
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# enumerate epochs
for epoch in range(10):
# enumerate mini batches
for i, (inputs, targets) in enumerate(train_dl):
# clear the gradients
optimizer.zero_grad()
# compute the model output
yhat = model(inputs)
# calculate loss
loss = criterion(yhat, targets)
# credit assignment
loss.backward()
# update model weights
optimizer.step()
Any help would be much appreciated! Thanks.
| You do not extract this information from the SGD optimizer; this information is part of your model.
What you can do, at test time, is generate a grid of points, compute their predictions using the trained model, and then plot the grid points, coloring each according to the prediction.
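A minimal sketch of that idea; the two-layer model below is a stand-in for your trained network, which should take 2-D points and output a probability:
import numpy as np
import torch
import torch.nn as nn
import matplotlib.pyplot as plt

model = nn.Sequential(nn.Linear(2, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())  # stand-in for the trained net

xx, yy = np.meshgrid(np.linspace(-3, 3, 200), np.linspace(-3, 3, 200))
grid = torch.tensor(np.c_[xx.ravel(), yy.ravel()], dtype=torch.float32)
with torch.no_grad():
    zz = model(grid).reshape(xx.shape).numpy()
plt.contourf(xx, yy, zz > 0.5, alpha=0.3)   # the 0.5 contour is the decision boundary
plt.show()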
| https://stackoverflow.com/questions/71128146/ |
How to correct when Accuracy equals F1 in Torch Lightning for binary classification? | I understand that with multi-class classification, F1 (micro) is the same as accuracy. I aim to test a binary classification in Torch Lightning but always get identical F1 and accuracy.
To get more detail, I shared my code at GIST, where I used the MUTAG dataset. Below are some important parts I would like to bring up for discussion
The function where I compute Accuracy and F1 (line #28-40)
def evaluate(self, batch, stage=None):
y_hat = self(batch.x, batch.edge_index, batch.batch)
loss = self.criterion(y_hat, batch.y)
preds = torch.argmax(y_hat.softmax(dim=1), dim=1)
acc = accuracy(preds, batch.y)
f1_score = f1(preds, batch.y)
if stage:
self.log(f"{stage}_loss", loss, on_step=True, on_epoch=True, logger=True)
self.log(f"{stage}_acc", acc, on_step=True, on_epoch=True, logger=True)
self.log(f"{stage}_f1", f1_score, on_step=True, on_epoch=True, logger=True)
return loss
To inspect, I put a checkpoint at line #35, and got acc=0.5, f1_score=0.5, while prediction and label respectively are
preds = tensor([1, 1, 1, 0, 1, 1, 1, 1, 0, 0])
batch.y = tensor([1, 0, 1, 1, 0, 1, 0, 1, 1, 0])
Using these values, I run a notebook to double-check with scikit-learn
from sklearn.metrics import f1_score
y_hat = [1, 1, 1, 0, 1, 1, 1, 1, 0, 0]
y = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
f1_score(y_hat, y, average='binary') # got 0.6153846153846153
accuracy_score(y_hat, y) # 0.5
I obtained a different result compared to the evaluation code. Besides, I verified again with torch; interestingly, I got the correct result
from torchmetrics.functional import accuracy, f1
import torch
f1(torch.Tensor(y_hat), torch.LongTensor(y)) # tensor(0.6154)
accuracy(torch.Tensor(pred), torch.LongTensor(true)) # tensor(0.5000)
I guess torch-lightning somehow treats my calculation as a multiclass task. My question is how to correct its behavior?
| You can pass multiclass=False in case your dataset is binary.
This will give you the result which matches the Sklearn F1 score output, where average="binary" (the default) is passed.
Setting multiclass=False treats the inputs as binary - which is the same as converting the predictions to float beforehand.
Sklearn results
from sklearn.metrics import f1_score, accuracy_score
y_hat = [1, 1, 1, 0, 1, 1, 1, 1, 0, 0]
y = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
print("Binary f1: ", f1_score(y, y_hat, average="binary")) # default
print("Micro f1:", f1_score(y, y_hat, average="micro")) # this is same as accuracy
print("accuracy_score", accuracy_score(y, y_hat))
>>> Binary f1: 0.6153846153846153
>>> Micro f1: 0.5
>>> accuracy_score: 0.5
Pytorch-Lightning Results
import torchmetrics.functional as F
import torch
y_hat = [1, 1, 1, 0, 1, 1, 1, 1, 0, 0]
y = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
# torchmetrics
print("Non-Multiclass f1: ", F.f1_score(torch.tensor(y), torch.tensor(y_hat), multiclass=False))
print("Multiclass f1:", F.f1_score(torch.tensor(y), torch.tensor(y_hat))) # same as accuracy
print("accuracy_score", F.accuracy(torch.tensor(y), torch.tensor(y_hat)))
>>> Non-Multiclass f1: tensor(0.6154)
>>> Multiclass f1: tensor(0.5000)
>>> accuracy_score tensor(0.5000)
| https://stackoverflow.com/questions/71131811/ |
One hot Encoding text data in pytorch | I am wondering how to one hot encode text data in pytorch?
For numeric data you could do this
import torch
import torch.nn.functional as F
t = torch.tensor([6,6,7,8,6,1,7], dtype = torch.int64)
one_hot_vector = F.one_hot(t, num_classes=9)
print(one_hot_vector.shape)
# Out > torch.Size([7, 9])
But what if you have text data instead
from torchtext.data.utils import get_tokenizer
corpus = ["The cat sat the mat", "The dog ate my homework"]
tokenizer = get_tokenizer("basic_english")
tokens = [tokenizer(doc) for doc in corpus]
But how do I one hot encode this vocab using Pytorch?
With something like Scikit-Learn I could do this; is there a similar way to do it in pytorch?
import numpy as np
import spacy
from spacy.lang.en import English
from sklearn.preprocessing import OneHotEncoder
corpus = ["The cat sat the mat", "The dog ate my homework"]
nlp = English()
tokenizer = spacy.tokenizer.Tokenizer(nlp.vocab)
tokens = np.array([[token for token in tokenizer(doc)] for doc in corpus])
one_hot_encoder = OneHotEncoder(sparse = False)
one_hot_encoded = one_hot_encoder.fit_transform(tokens)
| You can do the following:
from typing import Union, Iterable
import torch
import torch.nn.functional as F
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
corpus = ["The cat sat the mat", "The dog ate my homework"]
tokenizer = get_tokenizer("basic_english")
tokens = [tokenizer(doc) for doc in corpus]
voc = build_vocab_from_iterator(tokens)
def my_one_hot(voc, keys: Union[str, Iterable]):
if isinstance(keys, str):
keys = [keys]
return F.one_hot(torch.tensor(voc(keys)), num_classes=len(voc))
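Then, for example, you can encode a single token or a whole tokenized document:
print(my_one_hot(voc, "cat"))             # one-hot vector for a single token
print(my_one_hot(voc, tokens[0]).shape)   # one row per token of the first document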
| https://stackoverflow.com/questions/71146270/ |
How to customize threshold PyTorch | I have trained ResNet50 for binary image classification.
I want to decrease false negatives by reducing the threshold value.
How can I do that?
| To decrease the number of false negatives (FN) i.e. increase the recall (since recall = TP / (TP + FN)) you should increase the positive weight (the weight of the occurrence of that class) above 1. For example nn.BCEWithLogitsLoss allows you to provide the pos_weight option:
pos_weight > 1 increases the recall, pos_weight < 1 increases the precision.
For example, if a dataset contains 100 positive and 300 negative examples of a single class, then pos_weight for the class should be equal to 300/100 = 3. The loss would act as if the dataset contains 3*100 = 300 positive examples.
As a side note, the explicit expression for the binary cross entropy with logits (where "with logits" should rather be understood as "from logits") is:
>>> z = torch.sigmoid(q)
>>> loss = -(w_p*p*torch.log(z) + (1-p)*torch.log(1-z))
Above q are the raw logit values while w_p is the weight of the positive instance.
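At inference time, the complementary knob is the decision threshold itself - lowering it below 0.5 also trades precision for recall. A minimal sketch with arbitrary logits:
import torch

logits = torch.tensor([0.2, -1.5, 0.8])   # hypothetical raw model outputs
probs = torch.sigmoid(logits)
preds = (probs > 0.3).long()              # threshold 0.3 instead of the default 0.5
print(preds)                              # tensor([1, 0, 1])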
| https://stackoverflow.com/questions/71147379/ |
How can Python execute a line of code, then display an error indicating a crash at the previous line? | I have a script which contains (among other things) these three lines of code:
(line 138) pdb.set_trace()
(line 140) training_start_time = datetime.now()
(line 141) print(f'Network training beginning at {training_start_time}.')
Here is the output I'm seeing:
> c:\vtcproject\yolov5\roadometry_train.py(140)train()
-> training_start_time = datetime.now()
(Pdb) >? continue
Network training beginning at 2022-02-17 11:23:04.340499.
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\Alexander Farley\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\212.5457.59\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Users\Alexander Farley\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\212.5457.59\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/VTCProject/yolov5/roadometry_train.py", line 316, in <module>
main(opt)
File "C:/VTCProject/yolov5/roadometry_train.py", line 304, in main
train(data, cfg, hyp, opt, device)
File "C:/VTCProject/yolov5/roadometry_train.py", line 140, in train
training_start_time = datetime.now()
TypeError: 'module' object is not callable
There are a few things I don't understand here.
How is it possible to see this print output (network training beginning at...) if the code is crashing at datetime.now()?
Why is it that if I manually execute dt = datetime.now() in pdb, then it works fine, but I'm seeing TypeError: 'module' object is not callable if I type continue into pdb or just execute the script without pdb?
The library is imported like this:
(line 13) from datetime import datetime
| Well, this turned out to be somewhat complicated.
I removed code from my script until I found something very strange. Further down my script, below the lines listed above, I was eventually entering a loop which created an instance of tqdm for progress updates:
for epoch in range(start_epoch,
opt.epochs):
pbar = enumerate(train_loader)
pbar = tqdm(pbar, total=nb) # progress bar
When I commented out the tqdm usage in the loop, the TypeError above (at line 140, according to the exception) disappeared.
Paring everything down reinforced that simply calling tqdm caused this error. Searching the tqdm Github issues, I found this:
https://github.com/tqdm/tqdm/issues/611
There is a post in this discussion indicating that TypeError: 'module' object is not callable can occur if one uses:
import tqdm
instead of:
from tqdm import tqdm
So my analysis of what happened here is that tqdm was simply imported as a module, and obviously calling a module as a new object instance isn't going to work. The only confusing part is why the stated line-number is wrong.
The issue raised on Github corresponds closely to my scenario: creating a PyTorch dataloader which is then passed to tqdm.
I was basing my code on the Ultralytics Yolov5 repo and it appears that I changed from tqdm import tqdm to just import tqdm mistakenly. Specifically importing the class and not just the module causes the TypeError to disappear.
It seems that this error has nothing to do with datetime.now(). After commenting this out, I still get a TypeError, but pointing at a different line - now it blames the actual line trying to create a tqdm instance, which is what I would have expected in the first place.
File "C:/VTCProject/yolov5/roadometry_debug.py", line 41, in train
for epoch in range(start_epoch,
TypeError: 'module' object is not callable
In the above output, line 41 of roadometry_debug.py is:
pbar = tqdm(pbar, total=nb) # progress bar
While the line number being blamed appears correct, it seems that the error printout is still printing out the wrong line: for epoch in range....
This explains why pdb allows me to manually execute the next line, and the next print-out: because they're not the issue!
I still don't understand why the first error text blames the wrong line of code, or why the 2nd error text prints out the wrong text but the correct line-number.
Update: it seems that pdb is causing the reported line numbers in the error message to be incorrect.
Here is a minimal example:
import pdb
import tqdm
from datetime import datetime
def train():
pdb.set_trace()
training_start_time = datetime.now()
print(f'Network training beginning at {training_start_time}.')
for epoch in range(0,
10): # epoch ------------------------------------------------------------------
pbar = enumerate([1, 2, 3])
pbar = tqdm(pbar, total=3) # progress bar
def main():
train()
if __name__ == "__main__":
main()
The error message printed out from the above blames the wrong line:
c:\vtcproject\yolov5\roadometry_debug.py(10)train()
-> training_start_time = datetime.now()
(Pdb) >? continue
Network training beginning at 2022-02-17 18:32:13.892776.
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\Alexander Farley\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\212.5457.59\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Users\Alexander Farley\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\212.5457.59\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/VTCProject/yolov5/roadometry_debug.py", line 23, in <module>
main()
File "C:/VTCProject/yolov5/roadometry_debug.py", line 20, in main
train()
File "C:/VTCProject/yolov5/roadometry_debug.py", line 10, in train
training_start_time = datetime.now()
TypeError: 'module' object is not callable
If the call to pdb.set_trace() is commented out, the error message blames the correct line.
Script after commenting out relevant line:
import pdb
import tqdm
from datetime import datetime
def train():
#pdb.set_trace()
training_start_time = datetime.now()
print(f'Network training beginning at {training_start_time}.')
for epoch in range(0,
10): # epoch ------------------------------------------------------------------
pbar = enumerate([1, 2, 3])
pbar = tqdm(pbar, total=3) # progress bar
def main():
train()
if __name__ == "__main__":
main()
Output:
Network training beginning at 2022-02-17 18:33:31.278133.
Traceback (most recent call last):
File "<input>", line 1, in <module>
File "C:\Users\Alexander Farley\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\212.5457.59\plugins\python\helpers\pydev\_pydev_bundle\pydev_umd.py", line 198, in runfile
pydev_imports.execfile(filename, global_vars, local_vars) # execute the script
File "C:\Users\Alexander Farley\AppData\Local\JetBrains\Toolbox\apps\PyCharm-P\ch-0\212.5457.59\plugins\python\helpers\pydev\_pydev_imps\_pydev_execfile.py", line 18, in execfile
exec(compile(contents+"\n", file, 'exec'), glob, loc)
File "C:/VTCProject/yolov5/roadometry_debug.py", line 23, in <module>
main()
File "C:/VTCProject/yolov5/roadometry_debug.py", line 20, in main
train()
File "C:/VTCProject/yolov5/roadometry_debug.py", line 16, in train
pbar = tqdm(pbar, total=3) # progress bar
TypeError: 'module' object is not callable
Notice that the final line-number in the error printout has now changed from 10 (incorrect) to 16 (correct).
It's a reverse heisenbug - only appears if observed!
| https://stackoverflow.com/questions/71163608/ |
Problem in passing value to parser argument | Running my code, which includes these lines:
def parse_args():
parser = argparse.ArgumentParser(description='test with parser')
parser.add_argument("--model", type=str, default= "E:\Script\weights\resnext101.pth")
I got this error:
OSError: [Errno 22] Invalid argument: 'E:\\Script\\weights\resnext101.pth'
What is the error for, and how can I fix it?
| You aren't passing a path ending with the name resnext101.pth; you are passing a path ending with the name weights␍esnext101.pth, which contains a literal carriage return.
Use a raw string literal to protect all backslashes from expansion, regardless of the character that follows the backslash.
parser.add_argument("--model", type=str, default= r"E:\Script\weights\resnext101.pth")
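Forward slashes also sidestep the escaping problem on Windows:
parser.add_argument("--model", type=str, default="E:/Script/weights/resnext101.pth")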
| https://stackoverflow.com/questions/71174137/ |
No module named 'model' | import numpy as np
import random
import json
import torch
import torch.nn as nn
from torch.utils.data import Dataset, DataLoader
from nltk_utils import bag_of_words, tokenize, stem
from model import NeuralNet
I keep trying to pip install NeuralNet but I keep getting
ModuleNotFoundError: No module named 'model'
I have NeuralNet successfully installed on my PC, and I have tried what you said I should try and it's still not working. Can I send you the project on LinkedIn so you could check it out?
| I think you are supposed to import neuralnet by itself:
import neuralnet
or import model from neuralnet:
from neuralnet import model
Since model seems to be a part of the NeuralNet module rather than the other way around.
| https://stackoverflow.com/questions/71180349/ |
Setting Picture Colormap in Tensorboard with PyTorch | I'm pretty new to using Python, PyTorch and Tensorboard - I moved from MATLAB due to its lack of automatic differentiation.
I am trying to use the above-stated tools for an optimization problem - simple gradient descent for reconstructing distorted images. No machine learning or deep learning.
The point is that I need to see the images every few iterations of the algorithm, so I was told that tensorboard would be great for it. The only problem is that these images are shown in grayscale, and I need to see them in a different colormap. Is there any way to change the colormap in tensorboard?
Thanks!
| You can colorize your tensor using tensorflow's gather function. Following is a simple script for doing this; you may use maps other than 'cividis':
import matplotlib.cm
import tensorflow as tf
def colormap(shape):
min = tf.reduce_min(shape)
max = tf.reduce_max(shape)
shape = (shape - min) / (max - min)
gatherIndices = tf.to_int32(tf.round(shape * 255))
map = matplotlib.cm.get_cmap('cividis')
colors = tf.constant(map.colors, dtype=tf.float32)
return tf.gather(colors, gatherIndices)
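If you are logging from PyTorch instead, a minimal sketch of the same idea is to run the normalized image through a matplotlib colormap before handing it to SummaryWriter.add_image (the tag and colormap choice here are arbitrary):
import torch
import matplotlib.pyplot as plt
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
img = torch.rand(64, 64)                                   # stand-in for a reconstructed image
normed = (img - img.min()) / (img.max() - img.min() + 1e-8)
rgb = plt.get_cmap("viridis")(normed.numpy())[..., :3]     # (H, W, 3) RGB, drops alpha
writer.add_image("reconstruction", torch.from_numpy(rgb), 0, dataformats="HWC")
writer.close()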
| https://stackoverflow.com/questions/71184667/ |
Multiplying PyTorch tensors of different shape | I have a torch tensor of shape (32, 100, 50) and another of shape (32,100). Call these A and B respectively. I want to element-wise multiply A and B, such that each of the 50 elements at A[i, j, :] get multiplied by B[i, j], i.e.like multiplying a vector with a scalar. How can I do this via broadcasting rules?
| Just add a singleton dimension to the second tensor, for example:
a = torch.randn([32,100,50])
b = torch.randint(10,[32,100])
b = b[:,:,None] #or .unsqueeze(-1)
c = a * b
assert (c[0,0,0]/a[0,0,0]).int() == b[0,0] and (c[0,0,1]/a[0,0,1]).int() == b[0,0]
The assert at the end is just to prove that adjacent elements in the last dimension are multiplied by the same element of b.
| https://stackoverflow.com/questions/71190458/ |
PyTorch Binary classification not learning | I state that I am new on PyTorch. I wrote this simple program for binary classification. I also created the CSV with two columns of random values, with the "ok" column whose value is 1 only if the other two values are included between two values I decided at the same time. Example:
diam_int,diam_est,ok
37.782,125.507,0
41.278,115.15,1
42.248,115.489,1
29.582,113.141,0
37.428,107.247,0
32.947,123.233,0
37.146,121.537,0
38.537,110.032,0
26.553,113.752,0
27.369,121.144,0
41.632,108.178,0
27.655,111.279,0
29.779,109.268,0
43.695,115.649,1
44.587,116.126,0
It all seems to be done correctly, and the loss actually decreases (it comes back up slightly after many epochs, but I don't think that's a problem). But when I test my Net after training with a sample batch of the trainset, what I get is always a prediction below 0.5 (so always 0 as the estimated output), with a completely random trend.
with torch.no_grad():
pred = net(trainSet[10])
trueVal = ySet[10]
for i in range(len(trueVal)):
print(trueVal[i], pred[i])
Here is my Net class:
import torch
import torch.nn as nn
import torch.nn.functional as F
class Net(nn.Module):
def __init__(self) :
super().__init__()
self.fc1 = nn.Linear(2, 32)
self.fc2 = nn.Linear(32, 64)
self.fc3 = nn.Linear(64, 1)
def forward(self, x):
x = F.relu(self.fc1(x))
x = F.relu(self.fc2(x))
x = self.fc3(x)
return torch.sigmoid(x)
Here is my main script:
import torch
import torch.optim as optim
import torch.nn.functional as F
import pandas as pd
from net import Net
df = pd.read_csv("test.csv")
y = torch.Tensor(df["ok"])
ySet = torch.split(y, 32)
df.drop(["ok"], axis=1, inplace=True)
data = F.normalize(torch.Tensor(df.values), dim=1)
trainSet = torch.split(data, 32)
net = Net()
optimizer = optim.Adam(net.parameters(), lr=0.001)
lossFunction = torch.nn.BCELoss()
EPOCHS = 300
for epoch in range(EPOCHS):
for i, X in enumerate(trainSet):
optimizer.zero_grad()
output = net(X)
target = ySet[i].reshape(-1, 1)
loss = lossFunction(output, target)
loss.backward()
optimizer.step()
if epoch % 20 == 0:
print(loss)
What am I doing wrong? Thanks in advance for the help
| Your model is underfit. Increasing the number of epochs to (say) 3000 makes the model predict perfectly on the examples you showed.
However after this many epochs the model may be overfit. A good practice is to use validation data (separate the generated data into train and validation sets), and check the validation loss in each epoch. When the validation loss starts increasing you start overfitting and stop the training.
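A minimal sketch of that loop, reusing the names from the question; val_batches stands in for a held-out list of (X, y) batches that you would split off from your data:
best_val, patience, bad = float("inf"), 10, 0
for epoch in range(3000):
    for i, X in enumerate(trainSet):
        optimizer.zero_grad()
        loss = lossFunction(net(X), ySet[i].reshape(-1, 1))
        loss.backward()
        optimizer.step()
    with torch.no_grad():
        val_loss = sum(lossFunction(net(Xv), yv.reshape(-1, 1)).item()
                       for Xv, yv in val_batches) / len(val_batches)
    if val_loss < best_val:
        best_val, bad = val_loss, 0
    else:
        bad += 1
        if bad >= patience:   # no improvement for `patience` epochs: stop
            break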
| https://stackoverflow.com/questions/71192650/ |
Convert Keras (TensorFlow) MaxPooling3d to PyTorch MaxPool3d | I'm trying to convert some Keras (TensorFlow) code to PyTorch, and I'm unable to reproduce Keras's MaxPooling3D as MaxPool3d in PyTorch.
The following code:
import torch
import torch.nn as nn
import tensorflow.keras.layers as layers
import matplotlib.pyplot as plt
kernel_size = (10, 10, 2)
strides = (32, 32, 2)
in_tensor = torch.randn(1, 1, 256, 256, 64)
tf_out = layers.MaxPooling3D(data_format='channels_first', pool_size=kernel_size,
strides=strides, padding='same')(in_tensor.detach().numpy())
pt_out = nn.MaxPool3d(kernel_size=kernel_size, stride=strides)(in_tensor)
fig = plt.figure(figsize=(10, 5))
axs = fig.subplots(1,2)
axs[0].matshow(pt_out[0,0,:,:,0].detach().numpy())
axs[0].set_title('PyTorch')
axs[1].matshow(tf_out.numpy()[0,0,:,:,0])
axs[1].set_title('TensorFlow')
Gives very different results:
What could be the problem?
Is the padding in the PyTorch version incorrect?
| The padding is not the same in both layers, that's why you're not getting the same results.
You set padding='same' in tensorflow MaxPooling3D layer, but there is no padding set in pytorch MaxPool3d layer.
Unfortunately, in Pytorch, there is no option for 'same' padding for MaxPool3d as in tensorflow. So, you will need to manually pad the tensor before passing it to the pytorch MaxPool3d layer.
Try this code:
import torch
import torch.nn as nn
import torch.nn.functional as F
import tensorflow.keras.layers as layers
import matplotlib.pyplot as plt
kernel_size = (10, 10, 2)
strides = (32, 32, 2)
in_tensor = torch.randn(1, 1, 256, 256, 64)
tf_out = layers.MaxPooling3D(data_format='channels_first', pool_size=kernel_size,
strides=strides)(in_tensor.detach().numpy())
in_tensor = F.pad(in_tensor, (0, 0, 0, 0))
pt_out = nn.MaxPool3d(kernel_size=kernel_size, stride=strides)(in_tensor)
fig = plt.figure(figsize=(10, 5))
axs = fig.subplots(1,2)
axs[0].matshow(pt_out[0,0,:,:,0].detach().numpy())
axs[0].set_title('PyTorch')
axs[1].matshow(tf_out.numpy()[0,0,:,:,0])
axs[1].set_title('TensorFlow')
Output:
| https://stackoverflow.com/questions/71194093/ |
error then import pytorch-lightning, azure notebook | I am using the Microsoft Azure (for students) ML service. When I work with a notebook I cannot import the pytorch-lightning library.
!pip install pytorch-lightning==0.9.0
import pytorch_lightning as pl
Here i have error:
ModuleNotFoundError Traceback (most recent call last)
Input In [1], in <module>
----> 2 import pytorch_lightning as pl
ModuleNotFoundError: No module named 'pytorch_lightning'
This is unbearably weird. Has anyone faced such a problem?
| This is rather strange, but it could be related to your installation living in another location, so let's:
check where PL is installed with find -name "lightning"
also, check the loaded package locations with python -c "import sys; print(sys.path)"
I guess the problem will come down to What's the difference between dist-packages and site-packages?
| https://stackoverflow.com/questions/71195222/ |
Having problems with Pandas when storing the results of a CNN | I have a CNN that runs well, but when I try to store the training and validation error, loss and accuracy with Pandas, for some reason the DataFrame that I created has more rows than necessary (173 to be exact) and it looks like it trains more than I ask it to, even though while the CNN is training and validating, the results it gives are as expected. I will lay out all of my code here in parts.
This is how I define my neural network
class Network(nn.Module):
def __init__(self, p=0.1):
super().__init__()
# define layers
self.conv1 = nn.Conv2d(in_channels=1, out_channels=6, kernel_size=5)
self.conv2 = nn.Conv2d(in_channels=6, out_channels=12, kernel_size=5)
self.fc1 = nn.Linear(in_features=12 * 4 * 4, out_features=120)
self.fc2 = nn.Linear(in_features=120, out_features=60)
self.out = nn.Linear(in_features=60, out_features=10)
self.dropout = nn.Dropout(p)
# define forward function
def forward(self, t):
# conv 1
t = self.conv1(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# conv 2
t = self.conv2(t)
t = F.relu(t)
t = F.max_pool2d(t, kernel_size=2, stride=2)
# fc1
t = t.reshape(-1, 12 * 4 * 4)
t = self.fc1(t)
t = self.dropout(t)
t = F.relu(t)
# fc2
t = self.fc2(t)
t = self.dropout(t)
t = F.relu(t)
# output
t = self.out(t)
# don't need softmax here since we'll use cross-entropy as activation.
return t
I call my network as my model and move it to the CPU
model = Network().to(device)
This is how I define my test and training loops
def train_loop(dataloader, model, loss_fn, optimizer):
size = len(dataloader.dataset)
for batch, (X, y) in enumerate(dataloader):
pred = model(X)
loss = loss_fn(pred, y)
optimizer.zero_grad()
loss.backward()
optimizer.step()
if batch % 100 == 0:
loss, current = loss.item(), batch * len(X)
def test_loop(dataloader,model,loss_fn):
num_samples = 0
num_batches = 0
avrg_loss = 0
frac_correct = 0
model.eval()
model = model.to(device)
with torch.no_grad():
for X,y in dataloader:
X = X.to(device)
y = y.to(device)
pred = model(X)
num_batches += 1
avrg_loss += loss_fn(pred,y).item()
num_samples += y.size(0)
frac_correct += (pred.argmax(1)==y).type(torch.float).sum().item()
avrg_loss /= num_batches
frac_correct /= num_samples
return avrg_loss,frac_correct
And here is where I call my CNN and start training it
learning_rate = 1e-3
batch_size = 1000
num_epochs = 100
num_k = 1
n=10
dropouts=[0.1,0.3,0.5]
loss_fn = nn.CrossEntropyLoss()
df = pd.DataFrame()
for p in dropouts:
model = Network(p)
train_dataloader = DataLoader(train_dataset,batch_size=batch_size)
valid_dataloader = DataLoader(test_dataset,batch_size=batch_size)
optimizer = torch.optim.Adam(model.parameters(),lr=learning_rate,eps=1e-08,weight_decay=0,amsgrad=False)
min_valid_loss = float("inf")
for epoch in range(num_epochs):
train_loop(train_dataloader,model,loss_fn,optimizer)
train_loss,train_accu = test_loop(train_dataloader,model,loss_fn)
valid_loss,valid_accu = test_loop(valid_dataloader,model,loss_fn)
print(f"n={n} p={p} epoch={epoch} train_loss={train_loss} train_accu={train_accu} valid_loss={valid_loss} valid_accu={valid_accu}")
df = df.append({"n":n,
"p":p,
"epoch":epoch,
"train_loss":train_loss,
"train_accu":train_accu,
"valid_loss":valid_loss,
"valid_accu":valid_accu}
,ignore_index=True)
json_fname = "simulation-results-"+datetime.datetime.now().strftime("%Y-%m-%d-%H-%M-%S")+".json"
df.to_json(json_fname)
if COLAB:
files.download(json_fname)
The results I get while it is training look like this
n=10 p=0.1 epoch=0 train_loss=0.8500287334124247 train_accu=0.6877666666666666 valid_loss=0.864807802438736 valid_accu=0.684
so I can sort of infer that it's doing a good job. But when I ask Pandas for information about the DataFrame df with df.info() I get this
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 474 entries, 0 to 473
Data columns (total 8 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 n 425 non-null float64
1 k 174 non-null float64
2 epoch 474 non-null int64
3 train_loss 474 non-null float64
4 train_accu 474 non-null float64
5 valid_loss 474 non-null float64
6 valid_accu 474 non-null float64
7 p 300 non-null float64
dtypes: float64(7), int64(1)
memory usage: 29.8 KB
So Pandas is, for some reason, adding 173 rows with information that makes no sense (for instance, my dropout p is NaN in those 173 rows) and I don't know what's going on.
| Hmm, you should try wrapping your dictionary in a list like so,
data = {'n': n, 'p': p, 'epoch': epoch} # etc...
pd.DataFrame([data])
If this doesn't work, you should consider converting your dictionary to a pandas dataframe using this function.
data = {'n': n, 'p': p, 'epoch': epoch} # etc...
pd.DataFrame.from_dict(data)
You can also specify your own columns to make the dataframe more readable.
data = {'n': n, 'p': p, 'epoch': epoch} # etc...
columns = ['N', 'P', 'Epoch'] # etc...
df = pd.DataFrame([data])
df.columns = columns
Obviously, the column size must match the data.
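Also worth noting: DataFrame.append is deprecated in recent pandas (and removed in 2.0), so the usual pattern is to collect row dicts in a list and build the frame once at the end:
import pandas as pd

rows = []
for epoch in range(3):                                   # placeholder loop
    rows.append({"n": 10, "p": 0.1, "epoch": epoch})     # placeholder values
df = pd.DataFrame(rows)
print(df)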
| https://stackoverflow.com/questions/71199665/ |
pytorch reduce_op warning message despite not calling it | I'm constantly receiving the warning message below, despite not calling pytorch's reduce_op anywhere.
C:\Users\cocoj\.conda\envs\py39\lib\site-packages\torch\distributed\distributed_c10d.py:170: UserWarning: torch.distributed.reduce_op is deprecated, please use torch.distributed.ReduceOp instead
warnings.warn(
I have found the link below; however, it's not clear what the OP suggested as the solution.
https://github.com/ucbrise/flor/issues/57
my pytorch is up to date per conda list:
pytorch 1.10.0 py3.9_cuda11.3_cudnn8_0 pytorch
| I am also not clear on what they meant, but since they were saying that it's safe to ignore, you can try using the warnings module to suppress the message like so:
import warnings
warnings.filterwarnings("ignore", message="torch.distributed.reduce_op is deprecated")
Note that it will ignore anything containing the string in the 'message' argument, so use with caution if you don't enter the full error message.
You can also read this question, which has a similar solution.
| https://stackoverflow.com/questions/71205404/ |
RuntimeError: Found dtype Double but expected Float - Pytorch RL | I am trying to get an actor critic variant of the pendulum running, but I seem to be running into a particular problem.
RuntimeError: Found dtype Double but expected Float
I saw this had come up multiple times before, so I have been through those threads and attempted to change the data types of my loss (attempts kept in comments), but it is still not working. Could anyone point out how to resolve this so that I can learn from it?
Full code below
import gym, os
import numpy as np
from itertools import count
import torch
import torch.nn as nn
import torch.optim as optim
import torch.nn.functional as F
from torch.distributions import Normal
from collections import namedtuple
SavedAction = namedtuple('SavedAction', ['log_prob', 'value'])
LOG_SIG_MAX = 2
LOG_SIG_MIN = -20
class ActorCritic(nn.Module):
"""
Implementing both heads of the actor critic model
"""
def __init__(self, state_space, action_space):
super(ActorCritic, self).__init__()
self.state_space = state_space
self.action_space = action_space
# HL 1
self.linear1 = nn.Linear(self.state_space, 128)
# HL 2
self.linear2 = nn.Linear(128, 256)
# Outputs
self.critic_head = nn.Linear(256, 1)
self.action_mean = nn.Linear(256, self.action_space)
self.action_std = nn.Linear(256, self.action_space)
# Saving
self.saved_actions = []
self.rewards = []
# Optimizer
self.optimizer = optim.Adam(self.parameters(), lr = 1e-3)
self.eps = np.finfo(np.float32).eps.item()
def forward(self, state):
"""
Forward pass for both actor and critic
"""
# State to Layer 1
l1_output = F.relu(self.linear1(state))
# Layer 1 to Layer 2
l2_output = F.relu(self.linear2(l1_output))
# Layer 2 to Action
mean = self.action_mean(l2_output)
std = self.action_std(l2_output)
std = torch.clamp(std, min=LOG_SIG_MIN, max = LOG_SIG_MAX)
std = std.exp()
# Layer 2 to Value
value_est = self.critic_head(l2_output)
return value_est, mean, std
def select_action(self,state):
state = torch.from_numpy(state).float().unsqueeze(0)
value_est, mean, std = self.forward(state)
value_est = value_est.reshape(-1)
# Make prob Normal dist
dist = Normal(mean, std)
action = dist.sample()
action = torch.tanh(action)
ln_prob = dist.log_prob(action)
ln_prob = ln_prob.sum()
self.saved_actions.append(SavedAction(ln_prob, value_est))
action = action.numpy()
return action[0]
def compute_returns(self, gamma): # This is the error causing code
"""
Calculate losses and do backprop
"""
R = 0
saved_actions = self.saved_actions
policy_losses = []
value_losses = []
returns = []
for r in self.rewards[::-1]:
# Discount value
R = r + gamma*R
returns.insert(0,R)
returns = torch.tensor(returns)
returns = (returns - returns.mean())/(returns.std()+self.eps)
for (log_prob, value), R in zip(saved_actions, returns):
advantage = R - value.item()
advantage = advantage.type(torch.FloatTensor)
policy_losses.append(-log_prob*advantage)
value_losses.append(F.mse_loss(value, torch.tensor([R])))
self.optimizer.zero_grad()
loss = torch.stack(policy_losses).sum() + torch.stack(value_losses).sum()
loss = loss.type(torch.FloatTensor)
loss.backward()
self.optimizer.step()
del self.rewards[:]
del self.saved_actions[:]
env = gym.make("Pendulum-v0")
state_space = env.observation_space.shape[0]
action_space = env.action_space.shape[0]
# Train Expert AC
model = ActorCritic(state_space, action_space)
train = True
if train == True:
# Main loop
window = 50
reward_history = []
for ep in count():
state = env.reset()
ep_reward = 0
for t in range(1,1000):
if ep%50 == 0:
env.render()
action = model.select_action(state)
state, reward, done, _ = env.step(action)
model.rewards.append(reward)
ep_reward += reward
if done:
break
print(reward)
model.compute_returns(0.99) # Error begins here
reward_history.append(ep_reward)
# Result information
if ep % 50 == 0:
mean = np.mean(reward_history[-window:])
print(f"Episode: {ep} Last Reward: {ep_reward} Rolling Mean: {mean}")
if np.mean(reward_history[-100:])>199:
print(f"Environment solved at episode {ep}, average run length > 200")
break
Complete error log below, some elements redacted for privacy. Originally the actor-critic and the main loop were in separate files. Comments were added at the appropriate error-causing lines.
Traceback (most recent call last):
File "pendulum.py", line 59, in <module>
model.compute_returns(0.99)
File "/home/x/Software/git/x/x/solvers/actorcritic_cont.py", line 121, in compute_returns
loss.backward()
File "/home/x/.local/lib/python3.8/site-packages/torch/_tensor.py", line 255, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
File "/home/x/.local/lib/python3.8/site-packages/torch/autograd/__init__.py", line 147, in backward
Variable._execution_engine.run_backward(
RuntimeError: Found dtype Double but expected Float
| Answering here in case anyone has similar issues in the future.
The output of the reward in OpenAI Gym Pendulum-v0 is a double, so when you compute the return over the episode you need to change that to a float tensor.
I did this just by:
returns = torch.tensor(returns)
returns = (returns - returns.mean())/(returns.std()+self.eps)
returns = returns.type(torch.FloatTensor)
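Equivalently, the dtype can be fixed when the tensor is first built:
returns = torch.tensor(returns, dtype=torch.float32)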
| https://stackoverflow.com/questions/71224852/ |
How to serve a model in sagemaker? | Based on the documentation provided here, https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#model-directory-structure, the model file saved from training is model.pth. I also read that it can have a .pt extension or even a .bin extension. I have seen an example of pytorch_model.bin, but when I tried to serve the model with pytorch_model.bin, it warns me that a .pt or .pth file needs to exist. Has anyone run into this?
| Interesting question.
I'm assuming you're trying to use the PyTorch container from SageMaker in what we call "script mode" - where you just provide the .py entrypoint.
Have you tried to define a model_fn() function, where you specify how to load your model? The documentation talks about this here.
More details:
Before a model can be served, it must be loaded. The SageMaker PyTorch model server loads your model by invoking a model_fn function that you must provide in your script when you are not using Elastic Inference.
import torch
import os
import YOUR_MODEL_DEFINITION
def model_fn(model_dir):
model = YOUR_MODEL_DEFINITION()
with open(os.path.join(model_dir, 'YOUR-MODEL-FILE-HERE'), 'rb') as f:
model.load_state_dict(torch.load(f))
return model
Let me know if this works!
| https://stackoverflow.com/questions/71230870/ |
Preparing CSV file for neural network machine learning Python | I'm taking a machine learning course in my undergrad studies and I have a problem: I don't know how to load a CSV file into a DataLoader and then test it. Can someone guide me through the process?
You can download the CSV files from this link if you wish: https://ufile.io/f/abdd9
Here is the code
import tensorflow as tf
from torch.utils.data import DataLoader
import numpy as np
import pandas as pd
import torch
import torchvision
import matplotlib.pyplot as plt
from time import time
from torchvision import datasets, transforms
from torch import nn, optim
train_data1 = pd.read_csv("C:/Users/HP/OneDrive/سطح المكتب/KFUPM/TERM 212/EE485/Exp3/mnist_train.csv")
test_data1 = pd.read_csv("C:/Users/HP/OneDrive/سطح المكتب/KFUPM/TERM 212/EE485/Exp3/mnist_test.csv")
dtype = torch.float32
torch_tensor1 = torch.tensor(train_data1.values,dtype = dtype)
torch_tensor2 = torch.tensor(test_data1.values,dtype = dtype )
trainloader=DataLoader(torch_tensor1, batch_size=64, shuffle=True)
testloader =DataLoader(torch_tensor2, batch_size=64, shuffle=True)
Then when I try to run these lines of code I get an error:
dataiter = iter(trainloader)
images, labels = dataiter.next()
print(images.shape)
print(labels.shape)
which is
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-234-afd0555e962e> in <module>
1 dataiter = iter(trainloader)
----> 2 images, labels = dataiter.next()
3
4 print(images.shape)
5 print(labels.shape)
ValueError: too many values to unpack (expected 2)
| To do it properly with a Dataset and a Dataloader, you need to create a custom dataset:
import numpy as np
import pandas as pd
from torch.utils.data import Dataset
class CustomMnistDataset(Dataset):
def __init__(self, csv_file):
data = pd.read_csv(csv_file)
self.labels = np.array(data["label"])
self.images = np.array(data.iloc[:, 1:])
def __len__(self):
return len(self.labels)
def __getitem__(self, idx):
return self.images[idx], self.labels[idx]
Then use it to create your dataloader:
from torch.utils.data import DataLoader
test_dataset = CustomMnistDataset("mnist_test.csv")
test_dataloader = DataLoader(test_dataset, batch_size=64, shuffle=True)
image_batch, label_batch = next(iter(test_dataloader))
This way you get a batch of 64 in the right PyTorch tensor format for your training.
As I said in my comment, for MNIST this is overkill, as you can load it directly from PyTorch. You may need to flatten it though.
from torchvision import datasets
from torchvision.transforms import ToTensor
training_data = datasets.MNIST(
root="data",
train=True,
download=True,
transform=ToTensor()
)
EDIT: If you want to use the dataset already provided in PyTorch in a flattened way, you can do this. (Then the custom dataset is maybe simpler after all.)
import numpy as np
import torch
from torch.utils.data import DataLoader
from torchvision import datasets
from torchvision.transforms import ToTensor
import matplotlib.pyplot as plt
training_data = datasets.MNIST(
root="data",
train=True,
download=True,
transform=lambda x: torch.Tensor(np.array(x).reshape(len(np.array(x))**2))
)
train_dataloader = DataLoader(training_data, batch_size=64, shuffle=True)
| https://stackoverflow.com/questions/71233673/ |
Generating histogram feature of 2D tensor from 3D Tensor feature set | I have a 3D tensor of dimensions (3, 4, 7) where each element along the 2nd dimension (of size 4) has 7 attributes.
What I want is to take the 4th attribute of all 4 elements and calculate a histogram with 3 bins, storing only those bin counts, ending up with a 2D tensor of shape (3, 3). I have a small toy example for the task that I am working on. My solution ends up with a tensor of shape (1, 3). Any hint or guidance will be appreciated.
import torch
torch.manual_seed(1)
feature = torch.randint(1, 50, (3, 4,7))
feature.type(torch.FloatTensor)
attrbute_val = feature[:,:,3:4]
print(attrbute_val.shape)
print(attrbute_val)
histogram_feature = torch.histc(torch.tensor(attrbute_val,dtype=torch.float32), bins=3, min=1, max=50)
print("histogram_feature",histogram_feature)
| import torch
import matplotlib.pyplot as plt
torch.manual_seed(1)
bins = 3
feature = torch.randint(1, 50, (3, 4,7))
attrbute_val = feature[:,:,3].float() # read all 4 elements in the 2nd dimension
# and the fourth element in the 3rd dimension.
final_tensor = torch.empty((bins,bins))
tuple_rows = torch.tensor_split(attrbute_val, 3, dim=0)
for i,row in enumerate(tuple_rows):
final_tensor[i] = torch.histc(row, bins=bins, min=1, max=50)
plt.bar(range(bins),final_tensor[i],align='center',color=['forestgreen'])
plt.show()
#final_tensor = tensor([[3., 0., 1.],
# [4., 0., 0.],
# [0., 2., 2.]])
| https://stackoverflow.com/questions/71239735/ |
How to create unnamed PyTorch parameters in state dict? | I am trying to load a model checkpoint (.ckpt file) for transfer learning. I do not have the model's source code, so I am trying to recreate it with PyTorch, like this:
import torch
import torch.nn as nn
import torch.nn.functional as F
class IngrDetNet(nn.Module):
def __init__(self):
super(IngrDetNet, self).__init__()
self.fc1 = nn.Linear(n_ingr, 1024)
self.fc2= nn.Linear(1024, 512)
self.fc3 = nn.Linear(512, 256)
self.fc4 = nn.Linear(256, n_ingr)
def forward(self, x):
x = self.fc1(x)
x = F.leaky_relu(x, 0.2)
x = self.fc2(x)
x = F.leaky_relu(x, 0.2)
x = self.fc3(x)
x = F.leaky_relu(x, 0.2)
x = self.fc4(x)
ingrs = F.sigmoid(x)
return ingrs
# Create a basic model instance
model = IngrDetNet()
model = torch.nn.DataParallel(model)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)
The syntax is based on this PyTorch tutorial.
Then, I am trying to load in the checkpoint's state dict, based on this PyTorch tutorial:
model_path = Path('../Models/PITA/medr2idalle41.ckpt')
model.load_state_dict(torch.load(model_path, map_location=map_loc)['weights_id'])
But I get a key error mismatch:
Error(s) in loading state_dict for DataParallel:
Missing key(s) in state_dict: "module.fc1.weight", "module.fc1.bias", "module.fc2.weight", "module.fc2.bias", "module.fc3.weight", "module.fc3.bias", "module.fc4.weight", "module.fc4.bias".
Unexpected key(s) in state_dict: "module.model.0.weight", "module.model.0.bias", "module.model.2.weight", "module.model.2.bias", "module.model.4.weight", "module.model.4.bias".
The checkpoint state dict contains indexed keys such as module.model.0.weight, whereas my architecture contains named parameters such as module.fc1.weight. How do I generate layers in such a way that my parameters are not named but indexed?
| Part of the key structure comes from the nn.DataParallel module utility: it wraps your original parent model under a .module attribute, which prefixes every key with "module.". In other words:
>>> model = IngrDetNet() # keys look like "fc1.weight"
>>> model = torch.nn.DataParallel(model) # model.module is an IngrDetNet, keys look like "module.fc1.weight"
Since both your model and the checkpoint carry that prefix, the remaining mismatch is the inner part: the checkpoint keys are module.model.0.weight etc., meaning the original network kept its layers in a submodule named model (an nn.Sequential), hence indexed layer names (0, 2, 4) instead of named ones (fc1, fc2, ...).
You can fix this effect by changing the keys yourself before applying them on the model. A dict comprehension should do, once you map each indexed layer to its named counterpart:
>>> state = torch.load(model_path, map_location=map_loc)['weights_id']
>>> rename = {'model.0': 'fc1', 'model.2': 'fc2', 'model.4': 'fc3'} # read off the error message
>>> state = {k.replace(old, new): v for k, v in state.items() for old, new in rename.items() if old in k}
>>> model.load_state_dict(state, strict=False) # strict=False: the checkpoint has only three linear layers, so fc4 stays randomly initialized
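Alternatively, to answer the title question directly (generating indexed rather than named parameters), you can store the layers in an nn.Sequential attribute called model when recreating the architecture. A sketch, under the assumption (taken from the error message) that the original network had three linear layers at indices 0, 2 and 4; the hidden sizes are placeholders that you should read off the shapes of the checkpoint tensors:
import torch.nn as nn

class IngrDetNet(nn.Module):
    def __init__(self, n_ingr, hidden=1024):
        super().__init__()
        self.model = nn.Sequential(          # attribute named "model" -> keys "model.0.weight", ...
            nn.Linear(n_ingr, hidden),       # -> model.0
            nn.LeakyReLU(0.2),               # parameter-free, produces no key
            nn.Linear(hidden, hidden // 2),  # -> model.2
            nn.LeakyReLU(0.2),
            nn.Linear(hidden // 2, n_ingr),  # -> model.4
            nn.Sigmoid(),
        )
    def forward(self, x):
        return self.model(x)
Wrapped in nn.DataParallel, this produces exactly the module.model.0.weight-style keys the checkpoint expects.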
| https://stackoverflow.com/questions/71240311/ |
How to load two dataset images simultaneously to train two streams (PyTorch) | I need to load two matching datasets: one dataset has RGB images and the other contains the same images processed differently (grey images), in the same order and with the same size,
datasetA=[1.jpg,2.jpg,..........n.jpg] // RGB
datasetB=[g1.jpg,g2.jpg,..........gn.jpg] // grey
so I need to feed the same-order images to two independent networks using DataLoader with random_split. How do I use
rgb = datasets.ImageFolder(rgb images)
grey = datasets.ImageFolder(gray images)
train_data1, test_data = random_split(rgb, [train_data, test_data])
train_data2, test_data = random_split(grey, [train_data, test_data])
train_loader1 = DataLoader(train_data1, batch_size=batch_size, shuffle=True)
train_loader2 = DataLoader(train_data2, batch_size=batch_size, shuffle=True)
so I need to load same-order image tuples like (1.jpg, g1.jpg) to train both networks independently,
and how to use
trainiter1 = iter(train_loader1)
features, labels = next(trainiter1)
Please explain the process.
| I think the easiest way to go about this is to construct a custom Dataset that handles both:
import torch
from torchvision.datasets import ImageFolder

class JointImageDataset(torch.utils.data.Dataset):
def __init__(self, args_rgb_dict, args_grey_dict):
# construct the two individual datasets
self.rgb_dataset = ImageFolder(**args_rgb_dict)
self.grey_dataset = ImageFolder(**args_grey_dict)
def __len__(self):
        return min(len(self.rgb_dataset), len(self.grey_dataset))
def __getitem__(self, index):
rgb_x, rgb_y = self.rgb_dataset[index]
grey_x, grey_y = self.grey_dataset[index]
return rgb_x, grey_x, rgb_y, grey_y
Now you can construct a single DataLoader from the JointImageDataset and iterate over the joint batches:
joint_data = JointImageDataset(...)
train_loader = DataLoader(joint_data, batch_size=...)
for rgb_batch, grey_batch, rgb_ys, grey_ys in train_loader:
# do your stuff here...
| https://stackoverflow.com/questions/71247325/ |
When using conv and deconv, the output shape does not match (the input image's width is odd) | For example, the input shape = [1,64,12,60,33]
when i use
nn.Conv3d(in_channels=128, out_channels=64, kernel_size=(3, 3, 3), stride=2, padding=1)
the output shape = [1,64,6,30,17]
after that I want the output to go back to [1,64,12,60,33]
but when i use
nn.ConvTranspose3d(in_channels=128, out_channels=64, kernel_size=(3, 3, 3), stride=2, padding=1,output_padding=1)
the output becomes [1, 64, 12, 60, 34], which is not what I want.
How can I fix this problem? I mean I want the network to work regardless of the input's shape (of course I don't use dense layers, just conv and deconv).
for example:
input = torch.randn((1,64,12,60,33))
C3d=torch.nn.Conv3d(64,64,kernel_size=(3,3,3),stride=2 ,padding=1)
output_conv = C3d(input)#shape==[1,64,6,30,17]
de_C3d = torch.nn.ConvTranspose3d(64,64,(3,3,3),stride=2,padding=1)
output_deconv = de_C3d(output_conv) #shape = [1,64,11,59,33]
i just want the output_deconv.shape equal to input
| If you're dealing with tensors of arbitrary shapes, this can be difficult. If they're fixed you can add ad hoc fixes which should solve your problem. One way is to utilise the fact that you can pass tuples to the arguments padding and output_padding, which will work in your case:
input = torch.randn((1,64,12,60,33))
C3d=torch.nn.Conv3d(64,64,kernel_size=(3,3,3),stride=2 ,padding=1)
output_conv = C3d(input) #shape==[1,64,6,30,17]
de_C3d = torch.nn.ConvTranspose3d(64,64,(3,3,3),stride=2,padding=1,output_padding=(1,1,0))
output_deconv = de_C3d(output_conv) #shape = [1,64,12,60,33]
You could also pad and then crop, which is commonly done in UNet architectures:
de_C3d = torch.nn.ConvTranspose3d(64,64,(3,3,3),stride=2,padding=0)
output_deconv = de_C3d(output_conv) #shape = [1,64,13,61,35]
output_deconv = output_deconv[:,:,:input.shape[2],:input.shape[3],:input.shape[4]]
I guess one way to fix this is to add different padding to the inputs depending on whether they're odd or even:
de_C3d = torch.nn.ConvTranspose3d(64,64,(3,3,3),stride=2,padding=1,
output_padding=tuple([(i+1)%2 for i in input.shape[2:]]))
output_deconv = de_C3d(output_conv) #shape = [1,64,12,60,33]
| https://stackoverflow.com/questions/71247537/ |
How to show wandb training progress from run folder | After training neural networks with wandb as the logger, I received a link to show the training results and a folder named "run-...", which I assume is the logging of the training process. Now I don't have that link; how do I show the wandb training progress from the run folder?
| The run folder name is constructed as run-<datetime>-<id>.
You can find the logs on the UI platform as long as you haven't deleted the run online. If you only have the local copy, the wandb sync CLI command can upload a run folder to the cloud, e.g. wandb sync path/to/run-<datetime>-<id>.
One way to find your run across projects is to go on your profile page: https://wandb.ai/<username> and type the run's id in the search bar.
| https://stackoverflow.com/questions/71257152/ |
Select on second dimension on a 3D pytorch tensor with an array of indexes | I am kind of new to numpy and torch and I am struggling to understand what seem to me the most basic operations.
For instance, given this tensor:
A = tensor([[[6, 3, 8, 3],
[1, 0, 9, 9]],
[[4, 9, 4, 1],
[8, 1, 3, 5]],
[[9, 7, 5, 6],
[3, 7, 8, 1]]])
And this other tensor:
B = tensor([1, 0, 1])
I would like to use B as indexes for A so that I get a 3 by 4 tensor that looks like this:
[[1, 0, 9, 9],
[4, 9, 4, 1],
[3, 7, 8, 1]]
Thanks!
| Ok, my mistake was to assume this:
A[:, B]
is equal to this:
A[[0, 1, 2], B]
Or more generally the solution I wanted is:
A[range(B.shape[0]), B]
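For completeness, the same selection can also be written with torch.gather by expanding B to match the trailing dimension:
out = A.gather(1, B.view(-1, 1, 1).expand(-1, 1, A.size(2))).squeeze(1)
# B is expanded to shape (3, 1, 4) so gather picks one row of the second dimension per batch element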
| https://stackoverflow.com/questions/71262004/ |
FileNotFoundError: Entity folder does not exist! in Google Colab | Can anyone help me in sorting out this issue?
When I run these lines in Colab
:param files_name: containing training and validation samples list file.
:param boxes_and_transcripts_folder: gt or ocr result containing transcripts, boxes and box entity type (optional).
:param images_folder: whole images file folder
:param entities_folder: exactly entity type and entity value of documents, containing json format file
:param iob_tagging_type: 'box_level', 'document_level', 'box_and_within_box_level'
:param resized_image_size: resize whole image size, (w, h)
:param keep_ratio: TODO implement this parames
:param ignore_error:
:param training: True for train and validation mode, False for test mode. True will also load labels, and files_name and entities_file must be set.
'''
class PICKDataset(Dataset):
def __init__(self, files_name: str = None,
boxes_and_transcripts_folder: str = 'boxes_and_transcripts',
images_folder: str = 'images',
entities_folder: str = 'entities',
iob_tagging_type: str = 'box_and_within_box_level',
resized_image_size: Tuple[int, int] = (480, 960),
keep_ratio: bool = True,
ignore_error: bool = False,
training: bool = True
):
super().__init__()
self._image_ext = None
self._ann_ext = None
self.iob_tagging_type = iob_tagging_type
self.keep_ratio = keep_ratio
self.ignore_error = ignore_error
self.training = training
assert resized_image_size and len(resized_image_size) == 2, 'resized image size not be set.'
self.resized_image_size = tuple(resized_image_size) # (w, h)
if self.training: # used for train and validation mode
self.files_name = Path(files_name)
self.data_root = self.files_name.parent
self.boxes_and_transcripts_folder: Path = self.data_root.joinpath(boxes_and_transcripts_folder)
self.images_folder: Path = self.data_root.joinpath(images_folder)
self.entities_folder: Path = self.data_root.joinpath(entities_folder)
if self.iob_tagging_type != 'box_level':
if not self.entities_folder.exists():
raise FileNotFoundError('Entity folder is not exist!')
I get this error
RuntimeError: module compiled against API version 0xe but this version of numpy is 0xd
[2022-02-26 13:17:11,933 - train - INFO] - Distributed GPU training model start...
[2022-02-26 13:17:11,933 - train - INFO] - [Process 306] Initializing process group with: {'MASTER_ADDR': '127.0.0.1', 'MASTER_PORT': '29500', 'RANK': '0', 'WORLD_SIZE': '1'}
[2022-02-26 13:17:11,934 - train - INFO] - [Process 306] world_size = 1, rank = 0, backend=nccl
Traceback (most recent call last):
File "train.py", line 162, in <module>
entry_point(config)
File "train.py", line 126, in entry_point
main(config, local_master, logger if local_master else None)
File "train.py", line 34, in main
train_dataset = config.init_obj('train_dataset', pick_dataset_module)
File "/content/PICK-pytorch/parse_config.py", line 105, in init_obj
return getattr(module, module_name)(*args, **module_args)
File "/content/PICK-pytorch/data_utils/pick_dataset.py", line 66, in __init__
raise FileNotFoundError('Entity folder is not exist!')
FileNotFoundError: Entity folder is not exist!
Traceback (most recent call last):
File "/usr/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/usr/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 263, in <module>
main()
File "/usr/local/lib/python3.7/dist-packages/torch/distributed/launch.py", line 259, in main
cmd=cmd)
subprocess.CalledProcessError: Command '['/usr/bin/python3', '-u', 'train.py', '--local_rank=0', '-c', 'config.json', '-d', '0', '--local_world_size', '1']' returned non-zero exit status 1.
What I have tried
Changing the input file path in PICKDataset, removing lines which raise the error.
Complete Notebook
| There is an error in the https://github.com/wenwenyu/PICK-pytorch/blob/master/config.json file: you have to change the data paths to match your working directory. Check lines 61 to 64 and 73 to 76.
| https://stackoverflow.com/questions/71277138/ |
Can't get the right YOLOR pre-trained weights in YOLOR | I'm training a custom dataset in YOLOR. I successfully ran it once, but after some time I can't manage to get it working again.
The first error I noticed is in the training part:
Traceback (most recent call last): File "train.py", line 537, in <module>
train(hyp, opt, device, tb_writer, wandb) File "train.py", line 80, in train
ckpt = torch.load(weights, map_location=device) # load checkpoint File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 595, in load
return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args) File "/usr/local/lib/python3.7/dist-packages/torch/serialization.py", line 764, in _legacy_load
magic_number = pickle_module.load(f, **pickle_load_args)
_pickle.UnpicklingError: invalid load key, '<'.
Then I traced it and found that the pre-trained weights didn't load correctly with this code:
%cd /content/yolor
!bash scripts/get_pretrain.sh
and it gives me this error:
/content/yolor
awk: cannot open ./cookie (No such file or directory)
rm: cannot remove './cookie': No such file or directory
And that's the first and main thing I noticed compared to what I've done before; it should load the weights there.
Instead, it's just giving me pre-trained weight files that contain some HTML code.
I'm using Google Colab, btw.
| For me, the easiest way was to download the data on my laptop, then upload it and replace the current HTML weights with the correct ones.
You will find two Google Drive weight links in the get_pretrain.sh file:
yolor_p6.pt: https://drive.google.com/uc?export=download&id=1Tdn3yqpZ79X7R1Ql0zNlNScB1Dv9Fp76
yolor_w6.pt: https://drive.google.com/uc?export=download&id=1UflcHlN5ERPdhahMivQYCbWWw7d2wY7U
curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=1Tdn3yqpZ79X7R1Ql0zNlNScB1Dv9Fp76" > /dev/null
curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=$(awk '/download/ {print $NF}' ./cookie)&id=1Tdn3yqpZ79X7R1Ql0zNlNScB1Dv9Fp76" -o yolor_p6.pt
rm ./cookie
curl -c ./cookie -s -L "https://drive.google.com/uc?export=download&id=1UflcHlN5ERPdhahMivQYCbWWw7d2wY7U" > /dev/null
curl -Lb ./cookie "https://drive.google.com/uc?export=download&confirm=$(awk '/download/ {print $NF}' ./cookie)&id=1UflcHlN5ERPdhahMivQYCbWWw7d2wY7U" -o yolor_w6.pt
rm ./cookie
I am not familiar with colab, that's why I used this simple solution.
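If you prefer to stay inside Colab, the gdown package (usually preinstalled there) tends to handle these Google Drive confirm-token downloads more reliably; a sketch using the file IDs from the script above:
import gdown

gdown.download("https://drive.google.com/uc?id=1Tdn3yqpZ79X7R1Ql0zNlNScB1Dv9Fp76", "yolor_p6.pt", quiet=False)
gdown.download("https://drive.google.com/uc?id=1UflcHlN5ERPdhahMivQYCbWWw7d2wY7U", "yolor_w6.pt", quiet=False)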
| https://stackoverflow.com/questions/71278688/ |
How to convert a (M, L) tensor to (N, L) based on counts vector of size (N) where M is sum of counts, using aggregation by adding | So I have a 2D tensor A of shape (M, L) and I want to convert it into B, a (N, L) tensor.
I also have a counts tensor C (N) which has the counts of how many rows belong to which group such that sum(C) = M.
For example :
# shape = (6, 3)
A = torch.tensor([[1, 2, 3],
[4, 5, 6],
[1, 1, 1],
[5, 3, 1],
[5, 7, 1],
[2, 1, 3]])
# counts
C = torch.tensor([2, 1, 3])
# torch.sum(C) == A.shape[0]
# final tensor shape = (3, 3)
B = torch.tensor([[5, 7, 9],
[1, 1, 1],
[12, 11, 5]])
The aggregation of rows is done by element wise addition.
I tried to create a simple function for this as follows :
def convert_to_batch_results(self, results, counts):
knowledge_cumsum = torch.cumsum(results, dim=0) # [M, L]
inds = torch.cumsum(counts, dim=0) - 1 # [N]
knowledge = knowledge_cumsum[inds] # [N, L]
diff = torch.zeros(knowledge.shape) # [N, L]
diff[1:] = knowledge[:-1]
user_knowledge = torch.sub(knowledge, diff) # [N, L]
return user_knowledge
This works when counts vector has all elements non-zero.
But if counts have some 0 elements the summation becomes wrong.
In case counts has 0s, I want the corresponding output rows in B to be all 0s.
This is what I changed :
def convert_to_batch_results(results, counts):
knowledge_cumsum = torch.cumsum(results, dim=0) # [M, L]
inds = torch.cumsum(counts, dim=0) - 1 # [N] # print(inds)
knowledge = knowledge_cumsum[inds] # [N, L]
knowledge = torch.mul(knowledge, torch.sign(counts.unsqueeze(1)))
diff = torch.zeros(knowledge.shape).to('cpu') # [N, L]
diff[1:] = knowledge[:-1]
user_knowledge = torch.sub(knowledge, diff) # [N, L]
user_knowledge = torch.mul(user_knowledge, torch.sign(counts.unsqueeze(1)))
return user_knowledge
And ran it like so :
res = torch.ones(5, 3)
counts = torch.tensor([2, 0, 3])
print(convert_to_batch_results(res, counts))
# Output :
# tensor([[2., 2., 2.],
# [-0., -0., -0.],
# [5., 5., 5.]])
# Expected Output :
# tensor([[2., 2., 2.],
# [-0., -0., -0.],
# [3., 3., 3.]])
I have tried some other things but am unable to get the correct results in case of zeros present in counts. Please help me with the correct way to achieve the desired results.
Just to reiterate, counts can have zeros anywhere, multiple times, like [0, 3, 4], [0, 3, 0, 0, 1], [4, 0, 6, 0, 1], etc.
Edit :
Adding the solution here by modifying my first approach and Shai's answer below.
The problem was with leading zeros in count which were becoming -1 and used as index into cumsum giving us the last row. Fixed this by using a mask.
def convert_to_batch_results(self, A, C):
I = C.cumsum(dim=0)-1
I = torch.mul(A.cumsum(dim=0)[I, :], (I != -1).int().unsqueeze(1)) # here
I = torch.cat((torch.zeros_like(A[:1,:]), I), dim=0)
return I.diff(dim=0)
| You should use torch.cumsum to solve your problem:
Your output is simply the cumulative sum of rows up to the row indices given by C. Taking the diff of the cumulative sum from the beginning of the tensor will give you the sum over the intervals you want:
B = torch.cat((torch.zeros_like(A[:1,:]), A.cumsum(dim=0)[C.cumsum(dim=0)-1, :]), dim=0).diff(dim=0)
Works with zeros in the middle or at the end of C "out-of-the-box"; note that a leading zero in C yields an index of -1, which wraps around to the last row; see the mask fix in the question's edit for that case.
We can break this one-liner to better understand what is going on:
A.cumsum(dim=0) # cumulative sum of each column of A from the very beginning
C.cumsum(dim=0) # (2, 3, 6) - At what rows we want to look at the cumulative sum: the second row will give us the first row of B. The third one would be the second row of B _but_ including the first one, etc.
diff(dim=0) # subtract the cumulative rows to get the intervals of A we want in B.
This trick is a simplified 1D version of integral images.
| https://stackoverflow.com/questions/71282460/ |
Finding patterns in time series with PyTorch | I started PyTorch with image recognition. Now I want to test (very basically) with pure NumPy arrays. I struggle with getting the setup to work, so basically I have vectors with values between 0 and 1 (normalized curves). Those vectors are always of length 1500 and I want to find e.g. "high values at the beginning" or "sine wave-like function", "convex", "concave" etc. stuff like that, so just shapes of those curves.
My training set consists of many vectors with their classes; I have chosen 7 classes. The net should be trained to classify a vector into one or more of those 7 classes (not one hot).
I'm struggling with multiple issues, but first my very basic Net
class Net(nn.Module):
def __init__(self, input_dim, hidden_dim, layer_dim, output_dim):
super(Net, self).__init__()
self.hidden_dim = hidden_dim
self.layer_dim = layer_dim
self.rnn = nn.RNN(input_dim, hidden_dim, layer_dim)
self.fc = nn.Linear(self.hidden_dim, output_dim)
def forward(self, x):
h0 = torch.zeros(self.layer_dim, x.size(1), self.hidden_dim).requires_grad_()
out, h0 = self.rnn(x, h0.detach())
out = out[:, -1, :]
out = self.fc(out)
return out
network = Net(1500, 70, 20, 7)
optimizer = optim.SGD(network.parameters(), lr=learning_rate, momentum=momentum)
This is just a copy-paste from an RNN demo. Here is my first issue. Is an RNN the right choice? It is a time series, but then again it is an image recognition problem when plotting the curve.
Now, this here is an attempt to batch the data. The data object contains all training curves together with the correct classifiers.
def train(epoch):
network.train()
network.float()
batching = True
index = 0
# monitor the cummulative loss for an epoch
cummloss = []
# start batching some curves
while batching:
optimizer.zero_grad()
# here I start clustering come curves to a batch and normalize the curves
_input = []
batch_size = min(len(data)-1, index+batch_size_train) - index
for d in data[index:min(len(data)-1, index+batch_size_train)]:
y = np.array(d['data']['y'], dtype='d')
y = np.multiply(y, y.max())
y = y[0:1500]
y = np.pad(y, (0, max(1500-len(y), 0)), 'edge')
if len(_input) == 0:
_input = y
else:
_input = np.vstack((_input, y))
input = torch.from_numpy(_input).float()
input = torch.reshape(input, (1, batch_size, len(y)))
target = np.zeros((1,7))
# the correct classes have indizes, to I create a vector with 1 at the correct locations
for _index in np.array(d['classifier']):
target[0,_index-1] = 1
target = torch.from_numpy(target)
# get the result form the network
output = network(input)
# is this a good loss function?
loss = F.l1_loss(output, target)
loss.backward()
cummloss.append(loss.item())
optimizer.step()
index = index + batch_size_train
if index > len(data):
print(np.mean(cummloss))
batching = False
for e in range(1, n_epochs):
print('Epoch: ' + str(e))
train(0)
The problem I'm facing right now is that the loss changes very little, even with hundreds of epochs.
Are there existing examples of this kind of problem? I didn't find any, just pure png/jpg image recognition. When I convert the curves to PNG I have little trouble training a net (I took DenseNet and it worked just fine), but it seems to be super overkill for this simple task.
|
This is just a copy-paste from an RNN demo. Here is my first issue. Is an RNN the right choice?
In theory what model you choose does not matter as much as "How" you formulate your problem.
But in your case the most obvious limitation you're going to face is your sequence length: 1500. RNNs store information across steps and typically run into trouble over long sequences with vanishing or exploding gradients.
LSTM nets have been developed to circumvent these limitations with a memory cell, but even then, in the case of long sequences they will still be limited by the amount of information stored in the cell.
You could try using a CNN network as well and think of it as an image.
Are there existing examples of this kind of problem?
I don't know of any, but I might have some suggestions: if I understood your problem correctly, you're going from a (1500, 1) input to a (7, 1) output, where positions are 0 except for the corresponding classes, where they're 1.
I don't see any activation function: usually when dealing with multiple classes you don't use the raw output of the dense layer to compute the loss; you apply a normalizing function like softmax and then compute the loss.
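A minimal sketch of that point, reusing the names from the question's code; note that since the 7 classes here are not mutually exclusive ("one or more" per curve), a per-class sigmoid via BCEWithLogitsLoss is the usual choice, while softmax fits the single-label case:
criterion = torch.nn.BCEWithLogitsLoss()   # applies the sigmoid internally, numerically stable
logits = network(input)                    # raw (batch, 7) outputs of the last linear layer
loss = criterion(logits, target.float())   # target: (batch, 7) multi-hot vector of 0s and 1s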
| https://stackoverflow.com/questions/71285755/ |
Failed to install PyTorch | I tried my best to install Pytorch but each and every time I failed to install it.
Conda version: 4.6.14
I have used the Preview (Nightly) and LTS versions to install, but both times I have faced the same error: Solving environment: | Killed.
Preview(Nightly) command: conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch-nightly
LTS command: conda install pytorch torchvision torchaudio cudatoolkit=10.2 -c pytorch-lts
The error I faced is given in the attached file; please check it.
| Have you tried installing PyTorch into a new environment? Problems usually arise when you try to install it into your base environment.
conda create -n (NameOfEnviroment) -c pytorch pytorch torchvision
conda update --all
| https://stackoverflow.com/questions/71298721/ |
Using Focal Loss for imbalanced dataset in PyTorch | I found this implementation of focal loss on GitHub and I am using it for an imbalanced-dataset binary classification problem.
# IMPLEMENTATION CREDIT: https://github.com/clcarwin/focal_loss_pytorch
class FocalLoss(nn.Module):
def __init__(self, gamma=0.5, alpha=None, size_average=True):
super(FocalLoss, self).__init__()
self.gamma = gamma
self.alpha = alpha
if isinstance(alpha,(float,int)): self.alpha = torch.Tensor([alpha,1-alpha])
if isinstance(alpha,list): self.alpha = torch.Tensor(alpha)
self.size_average = size_average
def forward(self, input, target):
if input.dim()>2:
input = input.view(input.size(0),input.size(1),-1) # N,C,H,W => N,C,H*W
input = input.transpose(1,2) # N,C,H*W => N,H*W,C
input = input.contiguous().view(-1,input.size(2)) # N,H*W,C => N*H*W,C
target = target.view(-1,1)
logpt = F.log_softmax(input)
logpt = logpt.gather(1,target)
logpt = logpt.view(-1)
pt = Variable(logpt.data.exp())
if self.alpha is not None:
if self.alpha.type()!=input.data.type():
self.alpha = self.alpha.type_as(input.data)
at = self.alpha.gather(0,target.data.view(-1))
logpt = logpt * Variable(at)
loss = -1 * (1-pt)**self.gamma * logpt
if self.size_average: return loss.mean()
else: return loss.sum()
also
gamma=args.gamma
alpha=args.alpha
criterion = FocalLoss(gamma, alpha)
m = nn.Sigmoid()
I use the criterion as follows in train phase:
for i_batch, sample_batched in enumerate(dataloader_train):
#pdb.set_trace()
feats = torch.stack(sample_batched['image'])
labels = torch.as_tensor(sample_batched['label']).cuda()
print('feats shape: ', feats.shape)
print('labels shape: ', labels.shape)
output = model(feats)
loss = criterion(m(output[:,1]-output[:,0]), labels.float())
The error is:
train: True test: False
preparing datasets and dataloaders......
creating models......
=>Epoches 1, learning rate = 0.0010000, previous best = 0.0000
training...
feats shape: torch.Size([64, 419, 512])
labels shape: torch.Size([64])
main_classifier.py:86: UserWarning: Implicit dimension choice for log_softmax has been deprecated. Change the call to include dim=X as an argument.
logpt = F.log_softmax(input)
Traceback (most recent call last):
File "main_classifier.py", line 346, in <module>
loss = criterion(m(output[:,1]-output[:,0]), labels.float())
File "/home/jalal/research/venv/dpcc/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1102, in _call_impl
return forward_call(*input, **kwargs)
File "main_classifier.py", line 87, in forward
logpt = logpt.gather(1,target)
IndexError: Dimension out of range (expected to be in range of [-1, 0], but got 1)
How should I fix this error?
Is this implementation of FocalLoss correct?
| Unlike with BCEWithLogitsLoss, inputting the same arguments as you would use for CrossEntropyLoss solved the problem:
#loss = criterion(m(output[:,1]-output[:,0]), labels.float())
loss = criterion(output, labels)
This matches the implementation: the forward pass applies F.log_softmax over dim 1 followed by gather(1, target), so it expects raw (N, C) logits and integer class-index targets (the CrossEntropyLoss convention), rather than sigmoid probabilities with float labels.
Credits to Piotr from NVIDIA.
| https://stackoverflow.com/questions/71300607/ |
Optimising model.parameters and custom learnable parameter together using torch.optim gives non-leaf tensor error | Framework: PyTorch
I am trying to optimise a custom nn.parameter(Temperature) used in softmax calculation along with the model parameters using a single Adam optimiser while model training. But doing so gives the following error:
ValueError: can't optimize a non-leaf Tensor
Here is my custom loss function:
class CrossEntropyLoss2d(torch.nn.Module):
def __init__(self, weight=None):
super().__init__()
self.temperature = torch.nn.Parameter(torch.ones(1, requires_grad=True, device=device))
self.loss = torch.nn.NLLLoss(weight)
self.loss.to(device)
def forward(self, outputs, targets):
T_logits = self.temp_scale(outputs)
return self.loss(torch.nn.functional.log_softmax(T_logits, dim=1), targets)
def temp_scale(self, logits):
temp = self.temperature.unsqueeze(1).expand(logits.size(1), logits.size(2), logits.size(3))
return logits/temp
.
.
.
.
.
.
Here is the part of training code:
criterion = CrossEntropyLoss2d(weight)
params = list(model.parameters()) +list(criterion.temperature)
optimizer = Adam(params, 5e-4, (0.9, 0.999), eps=1e-08, weight_decay=1e-4)
Error:
File "train_my_net_city.py", line 270, in train
optimizer = Adam(params, 5e-4, (0.9, 0.999), eps=1e-08, weight_decay=1e-4)
File "/home/saquib/anaconda3/lib/python3.8/site-packages/torch/optim/adam.py", line 48, in __init__
super(Adam, self).__init__(params, defaults)
File "/home/saquib/anaconda3/lib/python3.8/site-packages/torch/optim/optimizer.py", line 54, in __init__
self.add_param_group(param_group)
File "/home/saquib/anaconda3/lib/python3.8/site-packages/torch/optim/optimizer.py", line 257, in add_param_group
raise ValueError("can't optimize a non-leaf Tensor")
ValueError: can't optimize a non-leaf Tensor
Checking the variable for leaf gives true:
print(criterion.temperature.is_leaf)
True
The error arises due to the criterion.temperature parameter and not due to model.parameters.
| Got it working by appending the parameter itself instead of iterating over it: list(criterion.temperature) iterates over the tensor's elements, and those element views are non-leaf tensors, hence the error.
params = list(model.parameters())
params.append(criterion.temperature)
| https://stackoverflow.com/questions/71305809/ |
Pytorch loss.backward() gives None grad for parameters of Rx, Ry gate | I'm trying to train parameters params by performing a linear transformation on an input tensor x: matrix-multiplying an Rx matrix with the input, followed by an Ry matrix applied to the result (each matrix Rx and Ry is defined by one parameter params[i]).
Then I calculate the loss as the MSE of y and the predicted output. When I do loss.backward(),
I'm getting params.grad as None.
import torch
def get_device(gpu_no):
if torch.cuda.is_available():
return torch.device('cuda', gpu_no)
else:
return torch.device('cpu')
device = get_device(0)
params = torch.tensor(([[0.011], [0.012]]), requires_grad=True).to(device).to(torch.cfloat)
x_gate = torch.tensor([[1., 0.], [0., 1.]]).to(device)
y_gate = torch.tensor(([[0, -1j], [1j, 0]])).to(device)
def rx(theta):
# co = torch.cos(theta / 2)
# si = torch.sin(theta / 2)
# Rx_gate = torch.stack([torch.cat([co, -si], dim=-1),
# torch.cat([-si, co], dim=-1)], dim=-2).squeeze(0).to(device).to(torch.cfloat).requires_grad_()
# Rx_gate = torch.exp(-1j * (theta / 2) * x_gate).to(device).to(torch.cfloat).requires_grad_()
Rx_gate = torch.tensor(([[torch.cos(theta/2), -torch.sin(theta/2)],
[-torch.sin(theta/2), torch.cos(theta/2)]]), requires_grad=True).to(device).to(torch.cfloat)
return Rx_gate
def ry(theta):
# co = torch.cos(theta / 2)
# si = torch.sin(theta / 2)
# Ry_gate = torch.stack([torch.cat([co, -si]),
# torch.cat([si, co])], dim=-2).squeeze(0).to(device).to(torch.cfloat).requires_grad_()
# Ry_gate = torch.exp(-1j * (theta / 2) * y_gate).to(device).to(torch.cfloat).requires_grad_()
Ry_gate = torch.tensor(([[torch.cos(theta / 2), -torch.sin(theta / 2)],
[torch.sin(theta / 2), torch.cos(theta / 2)]]), requires_grad=True).to(device).to(torch.cfloat)
return Ry_gate
x = torch.tensor([1., 0.]).to(device).to(torch.cfloat)
y = torch.tensor([0., 1.]).to(device).to(torch.cfloat)
def pred(params):
out = rx(params[0]) @ x
out = ry(params[1]) @ out
return out
print("params :", params)
print("prediction :", pred(params))
loss = torch.pow((y - pred(params)), 2).sum()
print("loss :", loss)
loss.backward()
print("loss grad :", loss.grad)
print("params grad :", params.grad)
my output is
params : tensor([[0.0110+0.j],
[0.0120+0.j]], device='cuda:0', grad_fn=<ToCopyBackward0>)
prediction : tensor([1.0000e+00+0.j, 5.0000e-04+0.j], device='cuda:0',
grad_fn=<MvBackward0>)
loss : tensor(1.9990+1.7485e-07j, device='cuda:0', grad_fn=<SumBackward0>)
loss grad : None
params grad : None
Why is grad None even though params has grad_fn=<ToCopyBackward0>?
Also I get this warning:
UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten\src\ATen/core/TensorBody.h:417.)
return self._grad
| Good observation: you indeed have correct backpropagation of the gradient through the graph. So why are you getting None when accessing your parameter?
The reason why you can't access the gradient of this parameter is that only leaf tensors have their gradient cached in memory. Here, since params is a copy of a leaf tensor (you called to twice on it, which made that happen), it will not be considered a leaf of the computation graph.
In order to get access to the gradient of that parameter at runtime, you can force the engine to cache it and make it accessible outside with a simple call to retain_grad, as suggested by the warning message.
params.retain_grad()
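Alternatively, you can keep params a leaf in the first place by constructing it with the right dtype and device in a single call, instead of chaining .to() afterwards; a sketch:
params = torch.tensor([[0.011], [0.012]], dtype=torch.cfloat,
                      device=device, requires_grad=True)
# no copies are made afterwards, so params stays a leaf and
# params.grad is populated by loss.backward()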
| https://stackoverflow.com/questions/71309024/ |
Log metrics with configuration in Pytorch Lightning using w&b | I am using PyTorch Lightning together with W&B and trying to associate metrics with a finite set of configurations. In the LightningModule class I have defined the test_step as:
def test_step(self, batch, batch_idx):
x, y_true, config_file = batch
y_pred = self.forward(x)
accuracy = self.accuracy(y_pred, y_true)
self.log("test/accuracy", accuracy)
Assuming (for simplicity) that the batch size is 1, this will log the accuracy for 1 sample and it will be displayed as a chart in the w&b dashboard.
I would like to associate this accuracy with some configuration of the experimental environment. This configuration might include BDP factor, bandwith delay, queue_size, location, etc. I don't want to plot the configurations I just want to be able to filter or group the accuracy by some configuration value.
The only solution I can come up with is to add these configurations as a querystring:
def test_step(self, batch, batch_idx):
x, y_true, config_file = batch
# read values in config file
# ...
y_pred = self.forward(x)
accuracy = self.accuracy(y_pred, y_true)
self.log("test/BDP=2&delay=10ms&queue_size=10&topology=single/accuracy", accuracy)
Is there a better solution for this that integrates my desired functionality of being able to group and filter by values like BDP?
| I work at W&B. You could log your config variables using wandb.config, like so:
wandb.config['my_variable'] = 123
And then you'll be able to filter your charts by whatever config you'd logged. Or am I missing something?
Possibly the save_hyperparameters call might even grab these config values automatically (from the WandbLogger docs here)
class LitModule(LightningModule):
def __init__(self, *args, **kwarg):
self.save_hyperparameters()
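Putting this together with Lightning's WandbLogger could look like this (a sketch; the project name and config values are placeholders):
from pytorch_lightning import Trainer
from pytorch_lightning.loggers import WandbLogger

wandb_logger = WandbLogger(project="my-project")
wandb_logger.experiment.config.update({"BDP": 2, "delay": "10ms", "queue_size": 10})
trainer = Trainer(logger=wandb_logger)
These config entries then appear as columns you can group and filter by in the W&B runs table.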
| https://stackoverflow.com/questions/71312243/ |
How can I apply NMS (non-maximum suppression) on multiple images from a dataloader efficiently (PyTorch)? | I have the following function defined for non-maximum suppression (NMS) post processing on my predictions.
At the moment, it is defined for a single prediction or output:
from torchvision import transforms as torchtrans
def apply_nms(orig_prediction, iou_thresh=0.3):
# torchvision returns the indices of the bboxes to keep
keep = torchvision.ops.nms(orig_prediction['boxes'], orig_prediction['scores'], iou_thresh)
final_prediction = orig_prediction
final_prediction['boxes'] = final_prediction['boxes'][keep]
final_prediction['scores'] = final_prediction['scores'][keep]
final_prediction['labels'] = final_prediction['labels'][keep]
return final_prediction
where I then apply it to a single image:
cpu_device = torch.device("cpu")
# pick one image from the test set
img, target = valid_dataset[3]
# put the model in evaluation mode
model.to(cpu_device)
model.eval()
with torch.no_grad():
output = model([img])[0]
nms_prediction = apply_nms(output, iou_thresh=0.1)
However, I'm not sure how I can do this efficiently for a whole batch of images from a dataloader:
cpu_device = torch.device("cpu")
model.eval()
with torch.no_grad():
for images, targets in valid_data_loader:
images = list(img.to(device) for img in images)
outputs = model(images)
outputs = [{k: v.to(cpu_device)for k, v in t.items()} for t in outputs]
#DO NMS POST PROCESSING HERE??
What would be the best approach? How can I apply the above defined function for multiple images? Would this be best done in another for loop?
| Have a look at the Generic Transforms paragraph in the torchvision docs; you can use torchvision.transforms.Lambda or work with functional transforms.
Here is an example with Lambda
nms_transform = torchvision.transforms.Lambda(apply_nms)
Then, you can apply the transform with the transform parameter of your dataset (or you can create your custom dataset class, as well):
dset = MyDset(..., transform=torchvision.transforms.Compose([torchvision.transforms.ToTensor(), nms_transform]))
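If you would rather keep the post-processing in the evaluation loop itself (NMS operates on the model's predictions, after all), a plain list comprehension over the batch outputs also works with the apply_nms function from the question:
outputs = model(images)
outputs = [apply_nms({k: v.to(cpu_device) for k, v in t.items()}, iou_thresh=0.1) for t in outputs]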
| https://stackoverflow.com/questions/71316130/ |
optimizing multiple loss functions in pytorch | I am training a model with different outputs in PyTorch, and I have four different losses for positions (in meter), rotations (in degree), and velocity, and a boolean value of 0 or 1 that the model has to predict.
AFAIK, there are two ways to define a final loss function here:
one - the naive weighted sum of the losses
two - defining a coefficient for each loss to optimize the final loss.
So, my question is: what is the best way to weigh these losses to obtain the final loss correctly?
| This is not a question about programming but instead about optimization in a multi-objective setup. The two options you've described come down to the same approach, which is a linear combination of the loss terms. However, keep in mind there are many other approaches out there with dynamic loss weighting, uncertainty weighting, etc... In practice, the most often used approach is the linear combination, where each objective gets a weight that is determined via grid search or random search.
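For concreteness, a minimal sketch of such a linear combination (the weight values and loss names are placeholders you would tune via grid/random search):
w_pos, w_rot, w_vel, w_flag = 1.0, 0.1, 0.5, 1.0   # hypothetical weights
loss = (w_pos * loss_position + w_rot * loss_rotation
        + w_vel * loss_velocity + w_flag * loss_flag)
loss.backward()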
You can look up this survey on multi-task learning which showcases some approaches: Multi-Task Learning for Dense Prediction Tasks: A Survey, Vandenhende et al., T-PAMI'20.
This is an active line of research, as such, there is no definite answer to your question.
| https://stackoverflow.com/questions/71317141/ |
Unable to install Torch with pipenv | I tried following this tutorial after not being able to lock with pipenv install torch I am using Linux Mint 20.3 una
pipenv install --extra-index-url https://download.pytorch.org/whl/cu113/ "torch==1.10.1+cu113"
caused this problem after a long 'installing torch...' stage:
Error: An error occurred while installing torch==1.10.1+cu113!
Error text: Looking in indexes: https://download.pytorch.org/whl/cu113/, https://pypi.org/simple
Collecting torch==1.10.1+cu113
Downloading https://download.pytorch.org/whl/cu113/torch-1.10.1%2Bcu113-cp39-cp39-linux_x86_64.whl (1821.5 MB)
✘ Installation Failed
This also did not work.
Installing torch==1.10.2+cu113...
Error: An error occurred while installing torch==1.10.2+cu113!
Error text:
ERROR: Could not find a version that satisfies the requirement torch==1.10.2+cu113 (from versions: 1.7.1, 1.8.0, 1.8.1, 1.9.0, 1.9.1, 1.10.0, 1.10.1, 1.10.2)
ERROR: No matching distribution found for torch==1.10.2+cu113
Edit:
I have tried bu failed:
pipenv install --extra-index-url https://download.pytorch.org/whl/ "torch==1.10.1+cu102"
Installing torch==1.10.1+cu102...
Adding torch to Pipfile's [packages]...
✔ Installation Succeeded
Pipfile.lock (6516c9) out of date, updating to (2de599)...
Locking [dev-packages] dependencies...
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
✘ Locking Failed!
| I solved it, I don't have those GPUs so I have to use the third command line of the tutorial
pipenv install --extra-index-url https://download.pytorch.org/whl/ "torch==1.10.1+cpu"
Installing torch==1.10.1+cpu...
Adding torch to Pipfile's [packages]...
✔ Installation Succeeded
Pipfile.lock (6516c9) out of date, updating to (3c44bd)...
Locking [dev-packages] dependencies...
Locking [packages] dependencies...
Building requirements...
Resolving dependencies...
✔ Success!
Updated Pipfile.lock (3c44bd)!
Installing dependencies from Pipfile.lock (3c44bd)...
▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉▉ 0/0 — 00:
| https://stackoverflow.com/questions/71321081/ |
What is cudaLaunchKernel in pytorch profiler output | I'm trying to profile my PyTorch network to see where the bottleneck is. I noticed that there is an operation called cudaLaunchKernel which is taking up most of the time. This answer says that it is called for every operation done with CUDA. Suppose I implement this network in C++ or any other language, would it be possible to reduce this time?
Basically, I'm asking whether this overhead exists because I've implemented my network in Python, or whether it will always be there and is impossible to optimize away in any language.
Full profiler output:
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Name Self CPU % Self CPU CPU total % CPU total CPU time avg Self CUDA Self CUDA % CUDA total CUDA time avg # of Calls
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
cudaLaunchKernel 99.80% 933.739ms 99.80% 933.739ms 20.750ms 0.000us 0.00% 0.000us 0.000us 45
model_inference 0.05% 453.000us 100.00% 935.567ms 935.567ms 0.000us 0.00% 195.000us 195.000us 1
aten::cudnn_convolution 0.04% 388.000us 99.84% 934.047ms 103.783ms 195.000us 100.00% 195.000us 21.667us 9
aten::_convolution 0.01% 138.000us 99.88% 934.419ms 103.824ms 0.000us 0.00% 195.000us 21.667us 9
aten::conv2d 0.01% 122.000us 99.89% 934.592ms 103.844ms 0.000us 0.00% 195.000us 21.667us 9
aten::add_ 0.01% 112.000us 0.02% 155.000us 17.222us 0.000us 0.00% 0.000us 0.000us 9
aten::upsample_nearest2d 0.01% 82.000us 0.01% 105.000us 26.250us 0.000us 0.00% 0.000us 0.000us 4
aten::empty 0.01% 79.000us 0.01% 79.000us 3.292us 0.000us 0.00% 0.000us 0.000us 24
aten::threshold 0.01% 74.000us 0.02% 149.000us 18.625us 0.000us 0.00% 0.000us 0.000us 8
aten::_cat 0.01% 71.000us 0.01% 119.000us 29.750us 0.000us 0.00% 0.000us 0.000us 4
aten::relu 0.01% 57.000us 0.02% 206.000us 25.750us 0.000us 0.00% 0.000us 0.000us 8
aten::convolution 0.01% 51.000us 99.88% 934.470ms 103.830ms 0.000us 0.00% 195.000us 21.667us 9
aten::view 0.01% 50.000us 0.01% 50.000us 5.556us 0.000us 0.00% 0.000us 0.000us 9
aten::cat 0.00% 32.000us 0.02% 151.000us 37.750us 0.000us 0.00% 0.000us 0.000us 4
aten::reshape 0.00% 29.000us 0.01% 79.000us 8.778us 0.000us 0.00% 0.000us 0.000us 9
aten::resize_ 0.00% 25.000us 0.00% 25.000us 0.962us 0.000us 0.00% 0.000us 0.000us 26
aten::rsub 0.00% 21.000us 0.00% 33.000us 33.000us 0.000us 0.00% 0.000us 0.000us 1
aten::mul 0.00% 17.000us 0.00% 27.000us 27.000us 0.000us 0.00% 0.000us 0.000us 1
aten::zeros 0.00% 13.000us 0.00% 16.000us 16.000us 0.000us 0.00% 0.000us 0.000us 1
cudaEventRecord 0.00% 12.000us 0.00% 12.000us 1.333us 0.000us 0.00% 0.000us 0.000us 9
cudaBindTexture 0.00% 11.000us 0.00% 11.000us 2.750us 0.000us 0.00% 0.000us 0.000us 4
aten::empty_strided 0.00% 6.000us 0.00% 6.000us 6.000us 0.000us 0.00% 0.000us 0.000us 1
aten::zero_ 0.00% 1.000us 0.00% 1.000us 1.000us 0.000us 0.00% 0.000us 0.000us 1
cudnn::maxwell::gemm::computeOffsetsKernel(cudnn::ma... 0.00% 0.000us 0.00% 0.000us 0.000us 195.000us 100.00% 195.000us 195.000us 1
cudaUnbindTexture 0.00% 0.000us 0.00% 0.000us 0.000us 0.000us 0.00% 0.000us 0.000us 4
------------------------------------------------------- ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------ ------------
Self CPU time total: 935.583ms
Self CUDA time total: 195.000us
PS: Some configs
Python version: 3.8.8
PyTorch version: 1.8.1
cudatoolkit version: 10.2.89
cuda version (as given by nvidia-smi): 11.4
CPU specs: intel core i7 10700 @ 2.90GHz 16 cores
GPU specs: NVIDIA GM204GL [Quadro M4000]
RAM: 64GB
GPU RAM: 8GB
OS: 64-bit Ubuntu 20.04.3
PPS: I'm not looking for ways to speed up my code. I want to know if it is possible to speed it up by coding it in a different language like cpp or directly in cuda. (Like suppose if all my data is already on GPU, and I've written my code in cuda language itself, would it run in 195us?)
| According to CUDA docs, cudaLaunchKernel is called to launch a device function, which, in short, is code that is run on a GPU device.
The profiler, therefore, states that a lot of computation is run on the GPU (as you probably expected), and this requires the data structures to be transferred to the device. This may be the source of the bottleneck.
I don't usually develop in CUDA, but perhaps you can speed up the process by coding larger kernels with more operations in CUDA and fewer CPU/GPU transfers.
Have a look at this tutorial for more details.
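As a side note (model and inputs below stand for your network and a sample batch): the very first kernel launches of a process pay one-time costs such as CUDA context initialization and cuDNN autotuning, so 933 ms of self CPU time for just 45 launches is typical of a cold start. Profiling after a few warm-up iterations gives more representative numbers; a sketch:
import torch

for _ in range(3):               # warm-up: absorbs one-time setup costs
    model(inputs)
torch.cuda.synchronize()

with torch.profiler.profile(
    activities=[torch.profiler.ProfilerActivity.CPU,
                torch.profiler.ProfilerActivity.CUDA],
) as prof:
    model(inputs)
print(prof.key_averages().table(sort_by="cuda_time_total"))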
| https://stackoverflow.com/questions/71328662/ |
Pytorch Forecasting vs Darts, experiences welcome | I was wondering which package to use between pytorch forecasting (https://pytorch-forecasting.readthedocs.io/en/stable/) and darts (https://unit8co.github.io/darts/). I have been trying both; it looks like darts is more sklearn-like in its writing and style, and pytorch forecasting uses different data classes.
Any comment comparing the two would be welcome.
I don't know if some of you might have carried out a performance comparison between both libraries.
Thanks in advance!
| I think one of the biggest advantages of darts is its TimeSeries object, which is very pandas-like and very intuitive when you are familiar with sklearn. However, I also do see the advantage that pytorch-forecasting deals with categorical data "better" (more easily), though it takes a steeper learning curve to understand pytorch-forecasting. I would say pytorch-forecasting sometimes outperforms darts using the same model.
| https://stackoverflow.com/questions/71335323/ |
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4000x20 and 200x441) | The architecture of the decoder of my variational autoencoder is given in the snippet below
class ConvolutionalVAE(nn.Module):
def __init__(self, nchannel, base_channels, z_dim, hidden_dim, device, img_width, batch_size):
super(ConvolutionalVAE, self).__init__()
self.nchannel = nchannel
self.base_channels = base_channels
self.z_dim = z_dim
self.hidden_dim = hidden_dim
self.device = device
self.img_width = img_width
self.batch_size = batch_size
self.enc_kernel = 4
self.enc_stride = 2
self._to_linear = None
########################
# ENCODER-CONVOLUTION LAYERS
self.conv0 = nn.Conv2d(nchannel, base_channels, self.enc_kernel, stride=self.enc_stride)
self.bn2d_0 = nn.BatchNorm2d(self.base_channels)
self.LeakyReLU_0 = nn.LeakyReLU(0.2)
out_width = np.floor((self.img_width - self.enc_kernel) / self.enc_stride + 1)
self.conv1 = nn.Conv2d(base_channels, base_channels*2, self.enc_kernel, stride=self.enc_stride)
self.bn2d_1 = nn.BatchNorm2d(base_channels*2)
self.LeakyReLU_1 = nn.LeakyReLU(0.2)
out_width = np.floor((out_width - self.enc_kernel) / self.enc_stride + 1)
self.conv2 = nn.Conv2d(base_channels*2, base_channels*4, self.enc_kernel, stride=self.enc_stride)
self.bn2d_2 = nn.BatchNorm2d(base_channels*4)
self.LeakyReLU_2 = nn.LeakyReLU(0.2)
out_width = np.floor((out_width - self.enc_kernel) / self.enc_stride + 1)
self.conv3 = nn.Conv2d(base_channels*4, base_channels*8, self.enc_kernel, stride=self.enc_stride)
self.bn2d_3 = nn.BatchNorm2d(base_channels*8)
self.LeakyReLU_3 = nn.LeakyReLU(0.2)
out_width = int(np.floor((out_width - self.enc_kernel) / self.enc_stride + 1))
########################
#ENCODER-USING FULLY CONNECTED LAYERS
#THE LATENT SPACE (Z)
self.flatten = nn.Flatten()
self.fc0 = nn.Linear((out_width**2) * base_channels * 8, base_channels*8*4*4, bias=False)
self.bn1d = nn.BatchNorm1d(base_channels*8*4*4)
self.fc1 = nn.Linear(base_channels*8*4*4, hidden_dim, bias=False)
self.bn1d_1 = nn.BatchNorm1d(hidden_dim)
# mean of z
self.fc2 = nn.Linear(hidden_dim, z_dim, bias=False)
self.bn1d_2 = nn.BatchNorm1d(z_dim)
# variance of z
self.fc3 = nn.Linear(hidden_dim, z_dim, bias=False)
self.bn1d_3 = nn.BatchNorm1d(z_dim)
########################
# DECODER:
# P(X|Z)
conv2d_transpose_kernels, conv2d_transpose_input_width = self.determine_decoder_params(self.z_dim, self.img_width)
self.conv2d_transpose_input_width = conv2d_transpose_input_width
self.px_z_fc_0 = nn.Linear(self.z_dim, conv2d_transpose_input_width ** 2)
self.px_z_bn1d_0 = nn.BatchNorm1d(conv2d_transpose_input_width ** 2)
self.px_z_fc_1 = nn.Linear(conv2d_transpose_input_width ** 2, conv2d_transpose_input_width ** 2)
#self.unflatten = nn.Unflatten(1, (1, conv2d_transpose_input_width, conv2d_transpose_input_width))
self.conv2d_transpose_input_width = conv2d_transpose_input_width
self.px_z_conv_transpose2d = nn.ModuleList()
self.px_z_bn2d = nn.ModuleList()
self.n_conv2d_transpose = len(conv2d_transpose_kernels)
self.px_z_conv_transpose2d.append(nn.ConvTranspose2d(1, self.base_channels * (self.n_conv2d_transpose - 1),
kernel_size=conv2d_transpose_kernels[0], stride=2))
self.px_z_bn2d.append(nn.BatchNorm2d(self.base_channels * (self.n_conv2d_transpose - 1)))
self.px_z_LeakyReLU = nn.ModuleList()
self.px_z_LeakyReLU.append(nn.LeakyReLU(0.2))
for i in range(1, self.n_conv2d_transpose - 1):
self.px_z_conv_transpose2d.append(nn.ConvTranspose2d(self.base_channels * (self.n_conv2d_transpose - i),
self.base_channels*(self.n_conv2d_transpose - i - 1),
kernel_size=conv2d_transpose_kernels[i], stride=2))
self.px_z_bn2d.append(nn.BatchNorm2d(self.base_channels * (self.n_conv2d_transpose - i - 1)))
self.px_z_LeakyReLU.append(nn.LeakyReLU(0.2))
self.px_z_conv_transpose2d.append(nn.ConvTranspose2d(self.base_channels, self.nchannel,
kernel_size=conv2d_transpose_kernels[-1], stride=2))
self.device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
self.to(device=self.device)
def decoder(self, z_input):
#Generate X: P(X|Z)
h = F.relu(self.px_z_bn1d_0(self.px_z_fc_0(z_input)))
flattened_h = self.px_z_fc_1(h)
h = flattened_h.view(flattened_h.size()[0], 1, self.conv2d_transpose_input_width, self.conv2d_transpose_input_width)
for i in range(self.n_conv2d_transpose - 1):
h = self.px_z_LeakyReLU[i](self.px_z_bn2d[i](self.px_z_conv_transpose2d[i](h)))
x_recons_mean_flat = torch.sigmoid(self.px_z_conv_transpose2d[self.n_conv2d_transpose - 1](h))
return x_recons_mean_flat
running my code to reconstruct the images:
all_z = []
for d in range(self.z_dim):
temp_z = torch.cat( [self.z_sample_list[k][:, d].unsqueeze(1) for k in range(self.K)], dim=1)
print(f'size of each z component dimension: {temp_z.size()}')
all_z.append(torch.mm(temp_z.transpose(1, 0), components).unsqueeze(1))
out = torch.cat( all_z,1)
x_samples = self.decoder(out)
I got this error message:
size of z dimension: 200
size of each z component dimension: torch.Size([50, 20])
size of all z component dimension: torch.Size([20, 200, 20])
x_samples = self.decoder(out)
File "VAE.py", line 241, in decoder
h = F.relu(self.px_z_bn1d_0(self.px_z_fc_0(z_input)))
File "/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 96, in forward
return F.linear(input, self.weight, self.bias)
File "/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py", line 1847, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (4000x20 and 200x441)
Update
I changed my code slightly to this
all_z = []
for d in range(self.z_dim):
temp_z = torch.cat( [self.z_sample_list[k][:, d].unsqueeze(1) for k in range(self.K)], dim=1)
all_z.append(torch.mm(temp_z.transpose(1, 0), components).unsqueeze(1))
out = torch.cat( all_z,1)
print(f'size of all z component dimension: {out.size()}')
out = F.pad(input=out, pad=(1, 0, 0,0, 0, 1), mode='constant', value=0)
print(f'new size of all z component dimension after padding: {out.size()}')
out = rearrange(out, 'd0 d1 d2 -> d1 (d0 d2)')
x_samples = self.decoder(out)
Now the new error is
x_samples = self.decoder(out)
File "VAE.py", line 243, in decoder
h = F.relu(self.px_z_bn1d_0(self.px_z_fc_0(z_input)))
File "/anaconda3/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
return forward_call(*input, **kwargs)
File "/anaconda3/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 96, in forward
return F.linear(input, self.weight, self.bias)
File "/anaconda3/lib/python3.8/site-packages/torch/nn/functional.py", line 1847, in linear
return torch._C._nn.linear(input, weight, bias)
RuntimeError: mat1 and mat2 shapes cannot be multiplied (200x441 and 200x441)
Any suggestion to fix this error?
| Matrix multiplication requires the 2 inner dimensions to be the same. You are getting the error: RuntimeError: mat1 and mat2 shapes cannot be multiplied (200x441 and 200x441) because your inner dimensions don't line up.
for example:
shape(200, 441) * shape(441, 200) # works
shape(441, 200) * shape(200, 441) # works
shape(200, 441) * shape(200, 441) # doesn't work, this is why you are getting your error
# in general
shape(x, y) * shape(y, z) # works
To make the inner dimensions match, just take the transpose of one or the other:
shape(200, 441) * shape(200, 441).T # works
# or
shape(200, 441).T * shape(200, 441) # works
# since the transpose works by swapping the dimensions:
shape(200, 441).T = shape(441, 200)
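As a runnable illustration of that rule:
import torch

a = torch.randn(200, 441)
b = torch.randn(200, 441)
# a @ b would raise RuntimeError: inner dimensions are 441 and 200
c = a @ b.T   # (200, 441) @ (441, 200) -> (200, 200)
d = a.T @ b   # (441, 200) @ (200, 441) -> (441, 441)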
| https://stackoverflow.com/questions/71345425/ |
How do I compute batched sample covariance in PyTorch? | Say I have data, a batched tensor of collections of data points of size (B, N, D) where B is my batch size, N is the number of data samples in each collection, and D is the length of my data vectors. I want to compute the sample mean and covariance for each collection of data points, but do it in batch.
To compute the mean I can do data.mean(dim=1) and I get a tensor of size (B, D) representing the mean of each collection. I assumed I'd be able to do a similar thing with torch.cov but it does not offer the ability to do it in batch. Is there another way to achieve this? I'm expecting to get a batch of covariance matrices of shape (B, D, D).
| This does the trick:
def batch_cov(points):
B, N, D = points.size()
mean = points.mean(dim=1).unsqueeze(1)
diffs = (points - mean).reshape(B * N, D)
prods = torch.bmm(diffs.unsqueeze(2), diffs.unsqueeze(1)).reshape(B, N, D, D)
bcov = prods.sum(dim=1) / (N - 1) # Unbiased estimate
return bcov # (B, D, D)
Here is a script to test that it's computing the same thing that the non-batched PyTorch version computes:
import time
import torch
B = 10000
N = 50
D = 2
points = torch.randn(B, N, D)
start = time.time()
my_covs = batch_cov(points)
print("My time: ", time.time() - start)
start = time.time()
torch_covs = torch.zeros_like(my_covs)
for i, batch in enumerate(points):
torch_covs[i] = batch.T.cov()
print("Torch time:", time.time() - start)
print("Same?", torch.allclose(my_covs, torch_covs, atol=1e-7))
Which gives me:
My time: 0.00251793861318916016
Torch time: 0.2459864616394043
Same? True
I can't claim mine will be inherently faster than iteratively computing them; it seems that as D gets bigger, mine slows down much more, so there's probably a nicer way to scale with the data dimension size.
| https://stackoverflow.com/questions/71357619/ |
Pytorch network with variable number of hidden layers | I want to create a class that creates a simple network with X fully connected layers, where X is an input given by the user. I tried this using setattr/getattr, but for some reason it is not working.
class MLP(nn.Module):
def __init__(self,in_size, out_size,n_layers, hidden_size):
super(MLP,self).__init__()
self.n_layers=n_layers
for i in range(n_layers):
if i==0:
layer_in_size = in_size
else:
layer_in_size = hidden_size
if i==(n_layers-1):
layer_out_size = out_size
else:
layer_out_size = hidden_size
setattr(self,'dense_{}'.format(i), nn.Linear(layer_in_size,layer_out_size))
def forward(self,x):
out = x
for i in range(self.n_layers):
if i==(self.n_layers-1):
out = getattr(self,'dense_{}'.format(i),out)
else:
out = F.relu(getattr(self,'dense_{}'.format(i),out))
return out
This is the error I'm getting when trying a forward pass with the net:
(the traceback was posted as an image and is not reproduced here)
Any insight into what the issue is would be helpful.
| This seems like a problem with the forward implementation involving the mod2 function. Try the PyTorch functions (torch.fmod and torch.remainder), or if you don't need the backprop capabilities, try calling .detach() before the mod2 function.
| https://stackoverflow.com/questions/71369361/ |
How to handle that error in pytorch: expected Long but found double | I'm given a function that is supposed to calculate the square-root of a matrix
import torch
from torch.autograd import Function
class MatrixSquareRoot(Function):
"""Square root of a positive definite matrix.
NOTE: matrix square root is not differentiable for matrices with
zero eigenvalues.
See Lin, Tsung-Yu, and Subhransu Maji.
"Improved Bilinear Pooling with CNNs." BMVC 17
"""
@staticmethod
def forward(ctx, input):
dim = input.shape[0]
norm = torch.norm(input.double())
Y = input/norm
I = torch.eye(dim,dim,device=input.device).type(input.dtype)
Z = torch.eye(dim,dim,device=input.device).type(input.dtype)
for i in range(15):
T = 0.5*(3.0*I - Z.mm(Y))
Y = Y.mm(T)
Z = T.mm(Z)
sqrtm = Y*torch.sqrt(norm)
#ctx.mark_dirty(Y,I,Z)
ctx.save_for_backward(sqrtm)
return sqrtm #, I, Y, Z
@staticmethod
def backward(ctx, grad_output):
grad_input = None
sqrtm, = ctx.saved_tensors
dim = sqrtm.shape[0]
norm = torch.norm(sqrtm)
A = sqrtm/norm
I = torch.eye(dim, dim, device=sqrtm.device).type(sqrtm.dtype)
Q = grad_output/norm
for i in range(15):
Q = 0.5*(Q.mm(3.0*I-A.mm(A))-A.t().mm(A.t().mm(Q)-Q.mm(A)))
A = 0.5*A.mm(3.0*I-A.mm(A))
grad_input = 0.5*Q
return grad_input
sqrtm = MatrixSquareRoot.apply # call: sqrtm(tensor of size d x d)
But when I try to apply it to a matrix that has a square root, I get this error:
>>> x = torch.tensor([[1,-12],[0,4]])
>>> sqrtm(x)
---------------------------------------------------------------------------
<ipython-input-28-9310ac844935> in forward(ctx, input)
15 Z = torch.eye(dim,dim,device=input.device).type(input.dtype)
16 for i in range(15):
---> 17 T = 0.5*(3.0*I - Z.mm(Y))
18 Y = Y.mm(T)
19 Z = T.mm(Z)
RuntimeError: expected scalar type Long but found Double
I also tried converting to Long by calling sqrtm(x.type(torch.LongTensor)) instead, but it produces the same error.
| The forward pass computes torch.norm(input.double()) (a Double scalar) but builds I and Z with input.dtype, so a Long input ends up mixing Long and Double tensors inside mm. Just create the tensor as floating point instead:
x = torch.tensor([[1,-12],[0,4]],dtype=torch.float)
sqrtm(x)
| https://stackoverflow.com/questions/71371124/ |
Segfault while importing torchvision.transforms | I'm getting a segfault in python during imports.
This code:
import os
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
print("was there")
from torchvision import transforms
print("didn't get there")
from torchvision import datasets
from torchvision import models
returns this:
$ python3 -u classifier.py
was there
Erreur de segmentation (core dumped)
So torchvision.transforms seems to be responsible. I've tried switching the lines, and torchvision.models fails too.
I've also tried importing torchvision.transforms on its own and there were no problems. What could possibly cause this?
Edit:
I'm working on Ubuntu 20.04.4 and installed torchvision through pip.
| So I moved the torchvision.transforms import to above the matplotlib.pyplot one, and somehow neither torchvision.transforms nor torchvision.models cause a segfault anymore. It still caused a segfault with torchvision.transforms right after matplotlib.pyplot.
Here is what the final code looks like:
import os
from torchvision import transforms
import matplotlib.pyplot as plt
import numpy as np
import torch
from torch import nn
from torch import optim
import torch.nn.functional as F
from torchvision import datasets
from torchvision import models
At least my code works, but I feel like there must be an underlying problem that this doesn't address...
| https://stackoverflow.com/questions/71372006/ |
Difference between model.parameters and model.parameters(), pytorch | I have read through the documentation and I don't really understand the explanation. Here is the explanation I got from the documentation: "Returns an iterator over module parameters." Why does model.parameters() return something like <generator object Module.parameters at 0x7f1b90c29ad0>? model.parameters will give me an output of
<bound method Module.parameters of ResNet9(
(conv1): Sequential(
(0): Conv2d(3, 64, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(64, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
)
(conv2): Sequential(
(0): Conv2d(64, 128, kernel_size=(3, 3), stride=(1, 1), padding=(1, 1))
(1): BatchNorm2d(128, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True)
(2): ReLU(inplace=True)
(3): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
).....
| model.parameters()
It's simply because it returns an iterator object, not a list or something like that. But it behaves quite similar to a list. You can iterate over it eg with
[x for x in model.parameters()]
Or you can convert it to a list
list(model.parameters())
Iterators have some advantages over lists. Eg they are not "calculated" when they are created, which can improve performance.
For more on iterators just google for "python iterators", you'll find plenty of information. Eg w3schools.com is a good source.
model.parameters
The output model.parameters consists of two parts.
The first part bound method Module.parameters of tells you that you are referencing the method Module.parameters.
The second part tells you more about the object containing the referenced method. It' s the "object description" of your model variable. It's the same as print(model)
more on python references
model.parameters is just a reference to the parameters method; it's not executing the function. model.parameters() instead is executing it.
Maybe it gets more clear with a simple example.
print("Hello world")
>>> Hello world
print
>>> <function print>
abc = print
abc("Hello world")
>>> Hello world
abc
>>> <function print>
As you see, abc behaves exactly the same as print because I assigned the reference of print to abc.
If I had executed the function instead, e.g. abc = print("Hello world"), abc would contain the return value of print (which is None, with Hello world printed as a side effect) and not the function reference to print.
| https://stackoverflow.com/questions/71376622/ |
Dict support in PyTorch | Does PyTorch support dict-like objects, through which we can backpropagate gradients, like Tensors in PyTorch?
My goal is to compute gradients with respect to a few (1%) elements of a large matrix. But if I use PyTorch's standard Tensors to store the matrix, I need to keep the whole matrix in my GPU, which causes problems due to limited GPU memory available during training. So I was thinking whether I could store the matrix as a dict instead, indexing only the relevant elements of the matrix, and computing gradients and backpropagating w.r.t those select elements only.
So far, I have tried using Tensors only, but it's causing memory issues for the above reasons. So I searched extensively for alternate options like dicts in PyTorch but couldn't find any such information on Google.
| It sounds like you want your parameter to be a torch.sparse tensor.
This interface allows you to have tensors that are mostly zeros, with only a few non-zero elements in known locations. Sparse tensors should allow you to significantly reduce the memory footprint of your model.
Note that this interface is still "under construction": not all operations are supported for sparse tensors. However, it is constantly improving.
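A minimal sketch of this idea (the matrix size, indices, and values below are made up for illustration): store only the few entries you care about as a sparse COO tensor whose values carry gradients:
import torch

indices = torch.tensor([[0, 42], [7, 999]])        # (2, nnz): row indices, col indices
values = torch.tensor([0.5, -1.3], requires_grad=True)
w = torch.sparse_coo_tensor(indices, values, (1000, 1000))

x = torch.randn(1000, 1)
loss = torch.sparse.mm(w, x).sum()
loss.backward()
print(values.grad)  # gradients exist only for the stored elements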
| https://stackoverflow.com/questions/71395783/ |
Does fine-tuning a BERT model multiple times with different datasets make it more accurate? | I'm totally new to NLP and the BERT model.
What I'm trying to do right now is sentiment analysis on Twitter trending hashtags ("neg", "neu", "pos") using a DistilBert model, but the accuracy was about 50% (I tried with labeled data taken from Kaggle).
So here is my idea:
(1) First, I will fine-tune a DistilBert model (Model 1) with the IMDB dataset.
(2) After that, since I've got some data taken from Twitter posts, I will run sentiment analysis on them with Model 1 and get Result 2.
(3) Then I will fine-tune Model 1 again with Result 2, expecting to get Model 3.
I'm not really sure whether this process will actually make the model more accurate.
Thanks for reading my post.
| If you want to fine-tune a sentiment classification head of BERT for classifying tweets, then I'd recommend a different strategy:
IMDB dataset is a different kind of sentiment - the ratings do not really correspond with short post sentiment, unless you want to focus on tweets regarding movies.
using classifier output as input for further training of that classifier is not really a good approach, because, if the classifier made many mistakes while classifying, these will be reflected in the training, and so the errors will deepen. This is basically creating endogenous labels, which will not really improve your real-world classification.
You should consider other ways of obtaining labelled training data. There are a few good examples for twitter:
Twitter datasets on Kaggle - there are plenty of datasets available containing millions of various tweets. Some of those even contain sentiment labels (usually inferred from emoticons, as these were proven to be more accurate than words in predicting sentiment - for explanation see e.g. Frasincar 2013). So that's probably where you should look.
Stocktwits (if you're interested in financial sentiment) - contains posts that authors can label for sentiment, thus a perfect way of mining labelled data, if stocks/cryptos are what you're looking for.
Another thing is picking a model that's better for your language, I'd recommend this one. It has been pretrained on 80M tweets, so should provide strong improvements. I believe it even contains a sentiment classification head that you can use.
Roberta Twitter Base
Check out the website for that and guidance for loading the model in your code - it's very easy, just use the following code (this is for sentiment classification):
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "cardiffnlp/twitter-roberta-base-sentiment"
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)
Another benefit of this model is that it has been pretrained from scratch with a vocabulary that contains emojis, meaning it has a deep understanding of them, their typical contexts and co-occurences. This can greatly benefit a social media classification, as many researchers would agree that emojis/emoticons are better predictors of sentiment than normal words.
| https://stackoverflow.com/questions/71404582/ |
Python Pytorch Multiprocessing Pycharm PicklingError: Can't pickle : attribute lookup train on __main__ failed | This error happens when running multiprocessing (using spawn method) in Python or Pytorch (torch.multiprocessing) using Pycharm 2021.2.3.
The function train is defined at the top level of the module, so it should be picklable. However, the error says that it cannot be pickled.
A minimal example looks like this:
import torch.multiprocessing as mp

def train(gpu):
    print(f'hello {gpu}!')

if __name__ == '__main__':
    mp.spawn(train, nprocs=2)
| Seems like this is a bug in Pycharm 2021.2.3 and happens when Run with Python Console is checked in run configurations. This bug is being tracked at https://youtrack.jetbrains.com/issue/PY-50116
This can be resolved using the following two options (until the bug is resolved):
Uncheck Run with Python Console
Downgrade to Pycharm 2021.1.3
| https://stackoverflow.com/questions/71417006/ |
Simple MultiGPU during inference with huggingface | I have two GPU.
How can I use them for inference with a huggingface pipeline?
Huggingface documentation seems to say that we can easily use the DataParallel class with a huggingface model, but I've not seen any example.
For example with pytorch, it's very easy to just do the following :
net = torch.nn.DataParallel(model, device_ids=[0, 1, 2])
output = net(input_var) # input_var can be on any device, including CPU
Is there an equivalent with huggingface ?
| I found it's not possible with the pipelines directly, so there are two ways:
Do it with the Trainer object in huggingface, which also supports inference, but it's not optimal.
Use Queues from the multiprocessing standard library, but this creates a lot of boilerplate code (see the sketch below).
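A minimal sketch of the queue approach, assuming a stock pipeline task (the task name, sample texts, and world size of 2 are placeholders): one worker process per GPU, each owning its own pipeline:
import torch.multiprocessing as mp
from transformers import pipeline

def worker(device, in_q, out_q):
    pipe = pipeline("sentiment-analysis", device=device)  # pin this pipeline to one GPU
    while True:
        text = in_q.get()
        if text is None:  # stop signal
            break
        out_q.put(pipe(text))

if __name__ == '__main__':
    mp.set_start_method('spawn')  # required for CUDA in child processes
    in_q, out_q = mp.Queue(), mp.Queue()
    workers = [mp.Process(target=worker, args=(d, in_q, out_q)) for d in range(2)]
    for w in workers:
        w.start()
    texts = ["great movie", "terrible plot"]
    for t in texts:
        in_q.put(t)
    results = [out_q.get() for _ in texts]
    for _ in workers:
        in_q.put(None)
    for w in workers:
        w.join()
    print(results)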
| https://stackoverflow.com/questions/71417355/ |
How does Pytorch no_grad function for a = a - b and a -= b type of operation? | import torch
def model(x, W, b):
return x@W + b
def mse(t1, t2):
diff = t1 - t2
return torch.sum(diff * diff) / diff.numel()
inputs = torch.rand(2, 3, requires_grad=True)
targets = torch.rand( 2,2, requires_grad=True)
W = torch.rand(3, 2, requires_grad=True)
b = torch.rand(2, requires_grad=True)
pred = model(inputs, W, b)
loss = mse(pred, targets)
loss.backward()
print(W.grad)
print(b.grad)
with torch.no_grad():
W -= W.grad * 1e-5
b -= b.grad * 1e-5
print(W.grad)
print(b.grad)
For the above example, the output of the last 2 print statements is the same as that of the first two print statements.
But for the code snippet below, the last 2 print statements print None.
I can't understand why that is.
print(W.grad)
print(b.grad)
with torch.no_grad():
W = W - W.grad * 1e-5
b = b - b.grad * 1e-5
print(W.grad)
print(b.grad)
| Keep in mind that in both scenarios you are under the torch.no_grad context manager, which in effect disables gradient computation.
On one hand, you are performing an in-place operation on your tensor, which means the underlying data gets modified without changing the reference to that tensor's storage in memory; moreover, its metadata remains unchanged, that is, W and b are and remain tensors which require gradient (as defined in the very first assignments with requires_grad=True).
On the other hand, you are performing out-of-place operations, which means variables W and b both get assigned brand new tensors. Indeed, out-of-place assignments create copies. Therefore W and b are no longer the ones defined prior to the assignment but different ones. Not only are their values different, but the tensors' metadata itself has changed. Finally, the reason why you get None is that tensors defined under this context manager will not have requires_grad=True set, by definition of the context.
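A tiny sketch that makes the object-identity difference visible (the 0.1 step is a placeholder):
import torch

w = torch.rand(3, requires_grad=True)
with torch.no_grad():
    before = id(w)
    w -= 0.1                                   # in-place: same object, still requires_grad
    print(id(w) == before, w.requires_grad)    # True True
    w = w - 0.1                                # out-of-place: brand new tensor
    print(id(w) == before, w.requires_grad)    # False False
print(w.grad)                                  # None: the new tensor never had a gradient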
| https://stackoverflow.com/questions/71420187/ |
How to slice 2D Torch tensor individually per row? | I have a 2D tensor in Pytorch that I would like to slice:
x = torch.rand((3, 5))
In this example, the tensor has 3 rows and I want to slice x, creating a new tensor y that also has 3 rows and num_col cols.
What's challenging for me is that I want to slice different columns per row. All I have is x, num_cols, and idx, which is a tensor holding the start index from where to slice.
Example:
What I have is num_cols=2, idx=[1,2,3] and
x=torch.arange(15).reshape((3,-1)) =
tensor([[ 0, 1, 2, 3, 4],
[ 5, 6, 7, 8, 9],
[10, 11, 12, 13, 14]])
What I want is
y=
tensor([[ 1, 2],
[ 7, 8],
[13, 14]])
What's the "torch"-way of doing this? I know, I can slice if I get a boolean mask somehow, but I don't know how to construct that with idx and num_cols without normal Python loops.
| You could use fancy indexing together with broadcasting. Another solution might be to use torch.gather which is similar to numpy's take_along_axis. Your idx array would need to be extended with the extra column:
x = torch.arange(15).reshape(3,-1)
idx = torch.tensor([1,2,3])
idx = torch.column_stack([idx, idx+1])
torch.gather(x, 1, idx)
output:
tensor([[ 1, 2],
[ 7, 8],
[13, 14]])
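And the fancy-indexing alternative mentioned at the start, as a sketch (using the original idx = torch.tensor([1,2,3]) before the column_stack):
rows = torch.arange(3).unsqueeze(1)                            # (3, 1) row indices
cols = torch.tensor([1, 2, 3]).unsqueeze(1) + torch.arange(2)  # (3, 2) per-row column windows
y = x[rows, cols]                                              # broadcasts to (3, 2), same result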
| https://stackoverflow.com/questions/71425677/ |
How is self() used in Pytorch to generate predictions? | class MNIST_model(nn.Module):
def __init__(self):
super().__init__()
self.linear = nn.Linear(input_size, num_classes)
def forward(self, xb):
xb = xb.reshape(-1, 28 * 28)
out = self.linear(xb)
return out
def training_step(self, batch):
images, labels = batch
out = self(images)
loss = F.cross_entropy(out, labels)
return loss
I am following the Freecodecamp tutorial.
In the training_step method, the tutorial says
out = self(images) is used to Generate Predictions.
I am not able to understand how is self is being used to get the predictions.
| This is actually nothing specific to PyTorch but rather to how Python works.
Using parentheses on an object, or directly on self inside that class, will call a special Python function named __call__. This function is available to your class because you're inheriting from nn.Module, which implemented it for you.
Here's a minimal example outside of PyTorch:
class A():
def __call__(self):
print('calling the object')
def foo(self):
self()
Then
>>> x = A()
>>> x.foo() # prints out "calling the object"
| https://stackoverflow.com/questions/71427254/ |
Pytorch Python Distributed Multiprocessing: Gather/Concatenate tensor arrays of different lengths/sizes | If you have tensor arrays of different lengths across several gpu ranks, the default all_gather method does not work as it requires the lengths to be same.
For example, if you have:
if gpu == 0:
q = torch.tensor([1.5, 2.3], device=torch.device(gpu))
else:
q = torch.tensor([5.3], device=torch.device(gpu))
If I need to gather these two tensor arrays as follows:
all_q = [torch.tensor([1.5, 2.3]), torch.tensor([5.3])]
the default torch.all_gather does not work as the lengths, 2 and 1, are different.
| As it is not directly possible to gather using built in methods, we need to write custom function with the following steps:
Use dist.all_gather to get sizes of all arrays.
Find the max size.
Pad local array to max size using zeros/constants.
Use dist.all_gather to get all padded arrays.
Unpad the added zeros/constants using sizes found in step 1.
The below function does this:
import torch
import torch.distributed as dist

def all_gather(q, ws, device):
"""
Gathers tensor arrays of different lengths across multiple gpus
Parameters
----------
q : tensor array
ws : world size
device : current gpu device
Returns
-------
all_q : list of gathered tensor arrays from all the gpus
"""
local_size = torch.tensor(q.size(), device=device)
all_sizes = [torch.zeros_like(local_size) for _ in range(ws)]
dist.all_gather(all_sizes, local_size)
max_size = max(all_sizes)
size_diff = max_size.item() - local_size.item()
if size_diff:
padding = torch.zeros(size_diff, device=device, dtype=q.dtype)
q = torch.cat((q, padding))
all_qs_padded = [torch.zeros_like(q) for _ in range(ws)]
dist.all_gather(all_qs_padded, q)
all_qs = []
for q, size in zip(all_qs_padded, all_sizes):
all_qs.append(q[:size])
return all_qs
Once, we are able to do the above, we can then easily use torch.cat to further concatenate into a single array if needed:
torch.cat(all_q)
tensor([1.5, 2.3, 5.3])
Adapted from: github
| https://stackoverflow.com/questions/71433507/ |
output with shape [64, 1] doesn't match the broadcast shape [64, 2] | I got the above error when trying to pass class weights to BCELoss (using PyTorch), as you can see below. My model is a ResNet with Sigmoid. I guess the model expects one class value instead of two because of the Sigmoid.
But which of the two percentage values should I pass: the percentage of positive values (label 1) or negative (label 0)?
class_weights2=[postive/(negtive+postive),negtive/(negtive+postive)]
print(class_weights2)
# [0.3135668226071564, 0.6864331773928436]
class_weights=torch.tensor(class_weights2,dtype=torch.float)
lossFunc= torch.nn.BCELoss(class_weights)
and this the model:
model = torchvision.models.resnet50(pretrained=False)
model.fc = torch.nn.Sequential(
torch.nn.Linear(
in_features=2048,
out_features=1
),
torch.nn.Sigmoid()
)
| The weights passed to BCELoss are not class weights. They rescale the contribution of each element in the batch.
From the docs:
a manual rescaling weight given to the loss of each batch element. If
given, has to be a Tensor of size nbatch.
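A sketch to illustrate (the batch size and weight value are made up): the weight is per element of the batch, broadcastable to the input shape, not one weight per class:
import torch

loss_fn = torch.nn.BCELoss(weight=torch.full((64, 1), 0.69))  # one weight per batch element
pred = torch.sigmoid(torch.randn(64, 1))
target = torch.randint(0, 2, (64, 1)).float()
print(loss_fn(pred, target))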
| https://stackoverflow.com/questions/71444220/ |
Why does the tanh function return different results in tensorflow and pytorch? | I find that the tensorflow and pytorch tanh results are different, and I want to know why this happens.
I know that the difference is very small, so is this acceptable?
import numpy as np
import tensorflow as tf
import torch
np.random.seed(123)
tf.random.set_seed(123)
torch.manual_seed(123)
batch, sentence_length, embedding_dim = 20, 5, 10
value = np.random.random((batch, sentence_length, embedding_dim)).astype("f")
value = value * 10
tf_x = tf.constant(value, dtype=tf.float32)
tf_out = tf.math.tanh(tf_x)
pt_x = torch.from_numpy(value)
pt_out = torch.tanh(pt_x)
print((tf_out.numpy() == pt_out.numpy()).all()) # return False
print(((tf_out.numpy() - pt_out.numpy()) < 1e-6).all()) # return True
tensorflow == 2.5.0
torch == 1.9.0
| Running your code with the following line at the end:
print(np.allclose(tf_out.numpy(), pt_out.numpy())) # Returns True
You will receive True. I do not know exactly how tensorflow and pytorch compute the tanh operation, but when working with floating points, you rarely get exactly equal results. However, you should be receiving equal results up to a certain tolerance, which is exactly what np.allclose() checks. Read more on allclose here
| https://stackoverflow.com/questions/71446623/ |
Pytorch: How to make a custom Dataloader for CNN? | I'm trying to create my own Dataloader from a custom dataset for a CNN. The original Dataloader was created by writing:
train_loader = torch.utils.data.DataLoader(mnist_data, batch_size=64)
If I check the shape of the above, I get
i1, l1 = next(iter(train_loader))
print(i1.shape) # torch.Size([64, 1, 28, 28])
print(l1.shape) # torch.Size([64])
When I feed this train_loader into my CNN, it works beautifully. However, I have a custom dataset. I have done the following:
mnist_data = datasets.MNIST('data', train=True, download=True, transform=transforms.ToTensor())
trainset = mnist_data
testset = mnist_data
x_train = np.array(trainset.data)
y_train = np.array(trainset.targets)
# modify x_train/y_train
Now, how would I be able to take x_train, y_train and make it into a Dataloader similar to the first one? I have done the following:
train_data = []
for i in range(len(x_train)):
train_data.append([x_train[i], y_train[i]])
train_loader = torch.utils.data.DataLoader(train_data, batch_size=64)
for i, (images, labels) in enumerate(train_loader):
images = images.unsqueeze(1)
However, I'm still missing the channel column (which should be 1). How would I fix this?
| I don't have access to your x_train and y_train, but probably this works:
from torch.utils.data import TensorDataset, DataLoader
# use x_train and y_train as numpy array without further modification
x_train = np.array(trainset.data)
y_train = np.array(trainset.targets)
# convert to numpys to tensor
tensor_x = torch.Tensor(x_train)
tensor_y = torch.Tensor(y_train)
# create the dataset
custom_dataset = TensorDataset(tensor_x,tensor_y)
# create your dataloader
my_dataloader = DataLoader(custom_dataset,batch_size=1)
#check if you can get the desired things
i1, l1 = next(iter(my_dataloader))
print(i1.shape) # torch.Size([1, 28, 28]); call unsqueeze(1) if you need the channel dim
print(l1.shape) # torch.Size([1])
| https://stackoverflow.com/questions/71453455/ |
Altering pytorch resnet head from sigmoid to Softmax | I'm new to PyTorch. I wrote the code below to do prediction using a ResNet with Sigmoid for binary classification. I just need to change it to Softmax because I might have more than 2 classes.
I understood that in PyTorch, unlike Keras, the softmax is inside CrossEntropyLoss. So I'm not sure how I should change the top layer to make the model use softmax:
model = torchvision.models.resnet50(pretrained=False)
model.fc = torch.nn.Sequential(
torch.nn.Linear(
in_features=2048,
out_features=1
) , torch.nn.Sigmoid()
)
model = model.cpu()
and later:
lossFunc=torch.nn.BCELoss(class_weights)
| You can try this:
model.fc[1] = torch.nn.Softmax(dim=1)
where dim=1 is the dimension the softmax is applied over (the class dimension), not the number of classes; to change the number of classes, set out_features of the Linear layer (model.fc[0]) based on your needs.
| https://stackoverflow.com/questions/71462468/ |
How to measure performance of a pretrained HuggingFace language model? | I am pretraining a GPT2LMHeadModel using Trainer as follows:
training_args = TrainingArguments(
output_dir=str(project_root / 'models/bn-gpt2/'),
overwrite_output_dir=True,
num_train_epochs=1,
per_device_train_batch_size=1,
per_device_eval_batch_size=1,
gradient_accumulation_steps=4,
fp16=True,
optim="adafactor",
eval_steps=400,
save_steps=800,
warmup_steps=500,
evaluation_strategy="steps",
)
trainer = Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=tokenized_dataset['train'],
eval_dataset=tokenized_dataset['test'],
)
trainer.train()
I want to measure the performance of my pre-trained model using perplexity or accuracy metrics during and after training. I have found some ways to measure these for individual sentences, but I cannot find a way to do this for the complete model. My goal is to create a next-word prediction model for my native language by training GPT-2 from scratch.
| If I understand it correctly, this tutorial shows how to calculate perplexity for the entire test set. If I see it correctly, they use the entire test corpus as one string connected by linebreaks, which might have to do with the fact that perplexity uses a sliding window over the text that came previously in the corpus. I personally have not calculated perplexity for a model yet and am not an expert at this. In any case you could average the sentence scores into a corpus score, although there might be issues with the logic of how that metric works as well as the weighting, since sentences can have a different number of words; see this explanation.
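For a single number during and after training, a minimal sketch given the Trainer setup above: the eval loss the Trainer reports is the mean cross-entropy, so corpus perplexity is just its exponential:
import math

eval_results = trainer.evaluate()
print(f"perplexity: {math.exp(eval_results['eval_loss']):.2f}")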
Also I'm not sure if you are already aware of this but there is also a pretrained GPT-2 model available for Bengali on huggingface.
| https://stackoverflow.com/questions/71466639/ |
Unable to train PyTorch model in GPU. Keep getting errors that tensors are not on same device | I have been stuck at trying to train my PyTorch model in GPU. The model perfectly works in CPU though. I have been using Google Colab's GPU resources for using cuda.
I know that in order to run a model in GPU, the 'model', 'input features' and 'target' needs to be in 'cuda' device.
But, no matter what I do in my code, I either keep getting the error:
RuntimeError: Input and hidden tensors are not at the same device, found input tensor at cuda:0 and hidden tensor at cpu
OR
RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu!
Here is my notebook:
https://colab.research.google.com/drive/1rviS_4hmdzPQUncZyi8FsRH7y3jL0isQ
It would be really helpful if someone could let me exactly which variables to be moved using .to('cuda')
Additionally, explanations/suggestions for ensuring that this does not recur in the future would be highly appreciated. Thank you !
| Your self.hidden is a tuple of torch.tensors. PyTorch doesn't automatically move these kind of tensor to GPU when .to(device) is invoked on your model.
You can either:
Implement your own to(self, type, device) method for your BiLSTM_CRF class. (Not recommended).
Make self.hidden a registered buffer. This way all methods of nn.Module such as .to(), .float(), etc. will also be applied to self.hidden.
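A sketch of the buffer route (layer sizes and the unidirectional LSTM are made-up placeholders; adjust for your bidirectional setup). Register each tensor of the tuple separately, since a buffer must be a single tensor:
import torch

class BiLSTM_CRF(torch.nn.Module):  # class name taken from the question
    def __init__(self, num_layers=2, hidden_size=128, input_size=32):
        super().__init__()
        self.lstm = torch.nn.LSTM(input_size, hidden_size, num_layers, batch_first=True)
        # buffers move with the module on .to(device), .cuda(), etc.
        self.register_buffer('h0', torch.zeros(num_layers, 1, hidden_size))
        self.register_buffer('c0', torch.zeros(num_layers, 1, hidden_size))

    def forward(self, x):
        batch = x.size(0)
        hidden = (self.h0.expand(-1, batch, -1).contiguous(),
                  self.c0.expand(-1, batch, -1).contiguous())
        out, _ = self.lstm(x, hidden)
        return out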
| https://stackoverflow.com/questions/71467398/ |
Why am I getting different results when I use models with the same weights in different formats - \(.pt) \.onnx \(.bin, .xml)? | I have a model trained on YOLOv5s and is working fine.
This is an input image:
I can get an expected result using pytorch after doing an inference:
This is an output image:
The thing is, I need it in OpenVINO, and regardless of whether I do the inference using the model in .onnx or in .bin and .xml (for OpenVINO), I won't get the expected inference result.
What I get is a vector with this shape (1, 25200, 6).
I know that:
25200 is equal to 1x3x80x80 + 1x3x40x40 + 1x3x20x20;
6 = 1 class + 4 (x,y,w,h) + 1 (score);
batch_size = 1
To export it, I used:
!python export.py --data models/custom_yolov5s.yaml --weights /content/bucket_11_03_2022.pt --batch-size 1 --device cpu --include openvino --imgsz 640
and to reproduce the issue I did in two ways:
.onnx:
import cv2
image = cv2.imread('data/cropped.png')
# Resize image to meet network expected input sizes
resized_image = cv2.resize(image, (640, 640))
# Reshape to network input shape
input_image = np.expand_dims(resized_image.transpose(2, 0, 1), 0)
import onnxruntime as onnxrt
onnx_session= onnxrt.InferenceSession("models/bucket_11_03_2022.onnx")
onnx_inputs= {onnx_session.get_inputs()[0].name:input_image.astype(np.float32)}
onnx_output = onnx_session.run(None, onnx_inputs)
img_label = onnx_output[0]
print(onnx_output[0].shape)
Openvino:
import cv2
import matplotlib.pyplot as plt
import numpy as np
from openvino.inference_engine import IECore
ie = IECore()
net = ie.read_network(
model="bucket_11_03_2022.xml",
weights="bucket_11_03_2022.bin",
)
exec_net = ie.load_network(net, "CPU")
output_layer_ir = next(iter(exec_net.outputs))
input_layer_ir = next(iter(exec_net.input_info))
# Text detection models expects image in BGR format
image = cv2.imread("data/cropped.png")
# N,C,H,W = batch size, number of channels, height, width
N, C, H, W = net.input_info[input_layer_ir].tensor_desc.dims
# Resize image to meet network expected input sizes
resized_image = cv2.resize(image, (W, H))
# Reshape to network input shape
input_image = np.expand_dims(resized_image.transpose(2, 0, 1), 0)
plt.imshow(cv2.cvtColor(image, cv2.COLOR_BGR2RGB));
result = exec_net.infer(inputs={input_layer_ir: input_image})
result['output'].shape
Could you guys help me to get the correct inference (bounding box with score) using .onnx or the IE format (openvino - .bin, .xml)?
The model files are here.
| Based on my replication, this issue occurred due to incorrect conversion from PyTorch to ONNX. I’ve found that the converted ONNX from the PyTorch model was able to detect the object (bucket) but did not reflect the correct label as it took one of the class names from coco128.yaml.
You may need to retrain your model by following the Train Custom Data. But I cannot guarantee this method will be successful as it is not validated by OpenVINO.
I suggest you post this issue in ultralytics GitHub forum. For your information, ultralytics is not a part of OpenVINO Toolkit.
| https://stackoverflow.com/questions/71470314/ |
How to add data augmentation with albumentation to image classification framework? | I am using pytorch for image classification using this code from github.
I need to add data augmentation before training my model,
I chose albumentation to do this.
here is my code when I add albumentation:
data_transform = {
"train": A.Compose([
A.RandomResizedCrop(224,224),
A.HorizontalFlip(p=0.5),
A.RandomGamma(gamma_limit=(80, 120), eps=None, always_apply=False, p=0.5),
A.RandomBrightnessContrast (p=0.5),
A.CLAHE(clip_limit=4.0, tile_grid_size=(8, 8), always_apply=False, p=0.5),
A.ShiftScaleRotate(shift_limit=0.05, scale_limit=0.05, rotate_limit=15, p=0.5),
A.RGBShift(r_shift_limit=15, g_shift_limit=15, b_shift_limit=15, p=0.5),
A.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
ToTensorV2(),]),
"val": A.Compose([
A.Resize(256,256),
A.CenterCrop(224,224),
A.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
ToTensorV2()])}
I got this error:
KeyError: Caught KeyError in DataLoader worker process 0.
KeyError: 'You have to pass data to augmentations as named arguments, for example: aug(image=image)'
| This Albumentations function takes its data as a named argument image (as the error message says) and returns a dictionary. Here is a sample of how to use it:
import cv2
import albumentations as A
from albumentations.pytorch import ToTensorV2

transforms = A.Compose([
A.augmentations.geometric.rotate.Rotate(limit=15,p=0.5),
A.Perspective(scale=[0,0.1],keep_size=False,fit_output=False,p=1),
A.Resize(224, 224),
A.HorizontalFlip(p=0.5),
A.GaussNoise(var_limit=(10.0, 50.0), mean=0),
A.RandomToneCurve(scale=0.5,p=1),
A.Normalize(mean=[0.5, 0.5, 0.5],std=[0.225, 0.225, 0.225]),
ToTensorV2()
])
img = cv2.imread("dog.png")
img = cv2.cvtColor(img,cv2.COLOR_BGR2RGB)
transformed_img = transforms(image=img)["image"]
| https://stackoverflow.com/questions/71476099/ |
Why I get "RuntimeError: CUDA error: the launch timed out and was terminated" when using Google Cloud compute engine | I have a Google cloud compute engine with 4 Nvidia K80 GPU and Ubuntu 20.04 (python 3.8). When I try to train the yolo5 model, I get the following error:
RuntimeError: CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
[W CUDAGuardImpl.h:113] Warning: CUDA warning: the launch timed out and was terminated (function destroyEvent)
terminate called after throwing an instance of 'c10::CUDAError'
what(): CUDA error: the launch timed out and was terminated
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
Exception raised from create_event_internal at ../c10/cuda/CUDACachingAllocator.cpp:1230 (most recent call first):
frame #0: c10::Error::Error(c10::SourceLocation, std::string) + 0x42 (0x7f62be2c17d2 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #1: <unknown function> + 0x239de (0x7f62f6ea69de in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #2: c10::cuda::CUDACachingAllocator::raw_delete(void*) + 0x22d (0x7f62f6ea857d in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10_cuda.so)
frame #3: <unknown function> + 0x300568 (0x7f63736d9568 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #4: c10::TensorImpl::release_resources() + 0x175 (0x7f62be2aa005 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libc10.so)
frame #5: std::vector<c10d::Reducer::Bucket, std::allocator<c10d::Reducer::Bucket> >::~vector() + 0x2e9 (0x7f62fa8ca5e9 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #6: c10d::Reducer::~Reducer() + 0x205 (0x7f62fa8bcd25 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_cpu.so)
frame #7: std::_Sp_counted_ptr<c10d::Reducer*, (__gnu_cxx::_Lock_policy)2>::_M_dispose() + 0x12 (0x7f6373bb7212 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #8: std::_Sp_counted_base<(__gnu_cxx::_Lock_policy)2>::_M_release() + 0x46 (0x7f63735c7506 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #9: <unknown function> + 0x7e182f (0x7f6373bba82f in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #10: <unknown function> + 0x1f5b20 (0x7f63735ceb20 in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #11: <unknown function> + 0x1f6cce (0x7f63735cfcce in /home/cheyuxuanll/.local/lib/python3.8/site-packages/torch/lib/libtorch_python.so)
frame #12: /usr/bin/python3() [0x5d1ec4]
frame #13: /usr/bin/python3() [0x5a958d]
frame #14: /usr/bin/python3() [0x5ed1a0]
frame #15: /usr/bin/python3() [0x544188]
frame #16: /usr/bin/python3() [0x5441da]
frame #17: /usr/bin/python3() [0x5441da]
frame #18: PyDict_SetItemString + 0x538 (0x5ce7c8 in /usr/bin/python3)
frame #19: PyImport_Cleanup + 0x79 (0x685179 in /usr/bin/python3)
frame #20: Py_FinalizeEx + 0x7f (0x68040f in /usr/bin/python3)
frame #21: Py_RunMain + 0x32d (0x6b7a1d in /usr/bin/python3)
frame #22: Py_BytesMain + 0x2d (0x6b7c8d in /usr/bin/python3)
frame #23: __libc_start_main + 0xf3 (0x7f6378be40b3 in /lib/x86_64-linux-gnu/libc.so.6)
frame #24: _start + 0x2e (0x5fb12e in /usr/bin/python3)
I am training this model with this command:
python3 -m torch.distributed.run --nproc_per_node 4 train.py --batch 16 --data coco128.yaml --weights yolov5s.pt --device 0,1,2,3
Am I missing something here?
Thanks
| We are also running CUDA in the Google Cloud and our server restarted roughly when you posted your question. While we couldn't detect any changes, our service couldn't start due to "RuntimeError: No CUDA GPUs are available".
So there are some similarities, but also some differences.
Anyway, we opted for the good ol' uninstall and reinstall and that fixed it:
Uninstall:
sudo apt-get --purge remove "*cublas*" "cuda*" "nsight*"
sudo apt-get --purge remove "*nvidia*"
Plus deleting anything in /usr/local/*cuda*
Install:
sudo apt-key adv --fetch-keys https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/7fa2af80.pub
sudo add-apt-repository "deb https://developer.download.nvidia.com/compute/cuda/repos/debian11/x86_64/ /"
sudo add-apt-repository contrib
sudo apt-get update
sudo apt-get -y install cuda-11-3
We also reinstalled CUDNN, but that may or may not be part of your stack.
| https://stackoverflow.com/questions/71491932/ |
can't import torchtext.legacy.data |
As far as I know, from torchtext 0.9.0, torchtext.data and torchtext.dataset were moved to torchtext.legacy,
but my torchtext 0.12.0 can't import torchtext.legacy,
while it can import torchtext.data.
I checked whether it was moved back to torchtext.data, but I can't find any documentation.
torch.version == 1.11.0
| I also faced the same problem with the same versions. The only thing I was able to do about it was to install a previous version of torchtext:
pip install torchtext==0.6.0
Only then was I able to import the packages.
| https://stackoverflow.com/questions/71493451/ |
Pytorch RuntimeError: CUDA out of memory with a huge amount of free memory | While training the model, I encountered the following problem:
RuntimeError: CUDA out of memory. Tried to allocate 304.00 MiB (GPU 0; 8.00 GiB total capacity; 142.76 MiB already allocated; 6.32 GiB free; 158.00 MiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
As we can see, the error occurs when trying to allocate 304 MiB of memory, while 6.32 GiB is free! What is the problem? As I can see, the suggested option is to set max_split_size_mb to avoid fragmentation. Will it help, and how do I set it correctly?
This is my version of PyTorch:
torch==1.10.2+cu113
torchvision==0.11.3+cu113
torchaudio===0.10.2+cu113
| I tried for hours until I found out what worked:
reduce the batch size, and
resize the input images to a smaller size.
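As for the max_split_size_mb option the message suggests (a sketch; 128 is a placeholder value): it is passed through the PYTORCH_CUDA_ALLOC_CONF environment variable, which must be set before CUDA is initialized:
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # set before the first CUDA allocation

import torch  # safe: the variable is read when the caching allocator starts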
| https://stackoverflow.com/questions/71498324/ |
How to train MLM model XLM Roberta large on google machine specs fast with less memory | I am fine-tuning a masked language model from XLM-RoBERTa large on Google machine specs.
I ran a couple of experiments and it was strange to see some of the results.
"a2-highgpu-4g" ,accelerator_count=4, accelerator_type="NVIDIA_TESLA_A100" on 4,12,672 data batch size 4 Running ( 4 data*4 GPU=16 data points)
"a2-highgpu-4g" ,accelerator_count=4 , accelerator_type="NVIDIA_TESLA_A100"on 4,12,672 data batch size 8 failed
"a2-highgpu-4g" ,accelerator_count=4, accelerator_type="NVIDIA_TESLA_A100" on 4,12,672 data batch size 16 failed
"a2-highgpu-4g" ,accelerator_count=4.,accelerator_type="NVIDIA_TESLA_A100" on 4,12,672 data batch size 32 failed
I was not able to train model with batch size more than 4 on # of GPU's. It stopped in mid-way.
Here is the code I am using.
training_args = tr.TrainingArguments(
# disable_tqdm=True,
output_dir='/home/pc/Bert_multilingual_exp_TCM/results_mlm_exp2',
overwrite_output_dir=True,
num_train_epochs=2,
per_device_train_batch_size=4,
# per_device_train_batch_size
# per_gpu_train_batch_size
prediction_loss_only=True
,save_strategy="no"
,run_name="MLM_Exp1"
,learning_rate=2e-5
,logging_dir='/home/pc/Bert_multilingual_exp_TCM/logs_mlm_exp1' # directory for storing logs
,logging_steps=40000
,logging_strategy='no'
)
trainer = tr.Trainer(
model=model,
args=training_args,
data_collator=data_collator,
train_dataset=train_data
)
My Questions
How can I train with larger batch size on a2-highgpu-4g machine?
Which parameters can I include in TrainingArguments so that training is fast and occupies small memory?
Thanks in advance.
Versions
torch==1.11.0+cu113
torchvision==0.12.0+cu113
torchaudio==0.11.0+cu113
transformers==4.17.0
| I was facing a similar dilemma some days ago when I came across this enlightening article from Huggingface.
I did the experiments myself and could see the improvements in training. As I'm working with a Tesla T4, the following configuration allows me to resume training:
training_args = TrainingArguments(
output_dir=f'./{CN_MODEL_NAME}',
overwrite_output_dir=True,
num_train_epochs=2,
per_device_train_batch_size=8,
gradient_accumulation_steps=1,
gradient_checkpointing=True,
optim="adafactor",
save_steps=1000,
save_total_limit=1,
warmup_steps=1000,
weight_decay=0.01,
learning_rate=1e-5,
report_to=["wandb"],
logging_steps=500,
do_eval=False,
fp16=True
)
Anything greater than that led me to the dreaded CUDA Out of memory.
Hope it works for you too.
| https://stackoverflow.com/questions/71500193/ |
Why, using Huggingface Trainer, single GPU training is faster than 2 GPUs? | I have a VM with 2 V100s and I am training gpt2-like models (same architecture, fewer layers) using the really nice Trainer API from Huggingface. I am using the pytorch back-end.
I am observing that when I train the exact same model (6 layers, ~82M parameters) with exactly the same data and TrainingArguments, training on a single GPU training is significantly faster than on 2GPUs: ~5hrs vs ~6.5hrs.
How would one debug this kind of issue to understand what's causing the slowdown?
Extra notes:
the 2 gpus are both being used (watching nvidia-smi output)
I am using fp16 precision
My TrainingArguments values are:
{
"optim": "adamw_torch",
"evaluation_strategy": "epoch",
"save_strategy": "epoch",
"fp16": true,
"gradient_checkpointing": true,
"per_device_train_batch_size": 16,
"per_device_eval_batch_size": 16,
"dataloader_num_workers": 4,
"dataloader_pin_memory": true,
"gradient_accumulation_steps": 1,
"num_train_epochs": 5
}
The output of nvidia-smi topo -m is:
$ nvidia-smi topo -m
GPU0 GPU1 CPU Affinity NUMA Affinity
GPU0 X SYS 0-11 N/A
GPU1 SYS X 0-11 N/A
I understand that without NVLink inter-gpu communication is not as fast as it could be, but can that be the only cause of a slowdown like the one I'm observing? And if so, is there anything I can do or will I always have slower training times on 2GPUs (thus making multi-gpu training essentially useless)?
| Keeping this here for reference. The cause was "gradient_checkpointing": true. The slowdown induced by gradient checkpointing appears to be larger on 2 GPUs than on a single GPU. I don't really know the cause of this issue; if anyone knows, I would really appreciate someone telling me.
| https://stackoverflow.com/questions/71500386/ |
pip does not find the cudatoolkit that conda has installed | I'm trying to install torch_scatter with pip. However it gives me an error message:
File "/home1/huangjiawei/miniconda3/envs/lin/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 404, in build_extensions
self._check_cuda_version()
File "/home1/huangjiawei/miniconda3/envs/lin/lib/python3.8/site-packages/torch/utils/cpp_extension.py", line 781, in _check_cuda_version
raise RuntimeError(CUDA_MISMATCH_MESSAGE.format(cuda_str_version, torch.version.cuda))
RuntimeError:
The detected CUDA version (9.0) mismatches the version that was used to compile
PyTorch (11.3). Please make sure to use the same CUDA versions.
But i did install cudatoolkit by conda:
(lin) huangjiawei@ai-server-2:~/linzhijie_Weakly-supervised-Query-based-Video-Segmentation$ conda list|grep cuda
cudatoolkit 11.3.1 h2bc3f7f_2 defaults
pytorch 1.10.2 py3.8_cuda11.3_cudnn8.2.0_0 https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
pytorch-mutex 1.0 cuda https://mirrors.tuna.tsinghua.edu.cn/anaconda/cloud/pytorch
So it seems that pip only detected the CUDA version of the server, but didn't detect the CUDA version in my environment.
How to fix it?
| This error is complaining that your system CUDA compiler (nvcc) version doesn't match. The cudatoolkit you installed in conda is the CUDA runtime. These two are different components.
To install CUDA compiler, you need to install the CUDA toolkit from NVIDIA
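A quick way to compare the two versions from Python (a sketch; assumes nvcc is on PATH):
import subprocess
import torch

print(torch.version.cuda)  # CUDA version PyTorch was built against (11.3 here)
print(subprocess.run(["nvcc", "--version"], capture_output=True, text=True).stdout)  # system compiler (9.0 here)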
| https://stackoverflow.com/questions/71502107/ |
Pytorch loss is nan | I'm trying to write my first neural network with pytorch.
Unfortunately, I encounter a problem when I want to get the loss.
I get the following error message:
RuntimeError: Function 'LogSoftmaxBackward0' returned nan values in its 0th output.
So I tried debugging and found something strange.
The input has no NaNs or infs, as I verify with the following:
print(torch.any(torch.isnan(inputs)))
But if I print the intermediate value of x at each step in the model, I see that inf appears at some point.
training
inputs, labels = data
print(torch.any(torch.isnan(inputs)))
optimizer.zero_grad()
outputs = model(inputs)
print(outputs)
loss = criterion(outputs, labels)
print(f"epoch: {epoch + 1} loss: {loss.item()}")
loss.backward()
optimizer.step()
model
class Net(Module):
def __init__(self):
super(Net, self).__init__()
self.layer1 = Conv1d(in_channels=1, out_channels=5, kernel_size=5, stride=2, dtype=torch.float64)
self.act1 = ReLU()
self.pool1 = MaxPool1d(2)
self.layer2 = Conv1d(in_channels=5, out_channels=1, kernel_size=2, dtype=torch.float64)
self.fcl1 = Linear(1350, 16, dtype=torch.float64)
def forward(self, x):
print("raw", x)
x = self.layer1(x)
print("conv1d 1", x)
x = self.act1(x)
print("relu", x)
x = self.layer2(x)
print("conv1d 2", x)
x = self.pool1(x)
x = self.pool1(x)
x = self.pool1(x)
x = self.pool1(x)
x = self.pool1(x)
x = self.pool1(x)
x = self.pool1(x)
print("pools", x)
x = self.fcl1(x)
print("linear", x)
return x
output
tensor(False)
raw tensor([[9.0616e+227, 2.4353e-152, 1.0294e-71, ..., 0.0000e+00,
0.0000e+00, 0.0000e+00]], dtype=torch.float64)
conv1d 1 tensor([[ -inf, -inf, -inf, ..., -0.2516, -0.2516, -0.2516],
[ inf, inf, inf, ..., 0.3377, 0.3377, 0.3377],
[ -inf, -inf, -inf, ..., 0.4285, 0.4285, 0.4285],
[ -inf, -inf, -inf, ..., -0.1230, -0.1230, -0.1230],
[ inf, inf, inf, ..., 0.3793, 0.3793, 0.3793]],
dtype=torch.float64, grad_fn=<SqueezeBackward1>)
relu tensor([[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ inf, inf, inf, ..., 0.3377, 0.3377, 0.3377],
[0.0000, 0.0000, 0.0000, ..., 0.4285, 0.4285, 0.4285],
[0.0000, 0.0000, 0.0000, ..., 0.0000, 0.0000, 0.0000],
[ inf, inf, inf, ..., 0.3793, 0.3793, 0.3793]],
dtype=torch.float64, grad_fn=<ReluBackward0>)
conv1d 2 tensor([[ -inf, -inf, -inf, ..., -5.4167e+265,
-5.4167e+265, -5.4167e+265]], dtype=torch.float64,
grad_fn=<SqueezeBackward1>)
pools tensor([[ -inf, -5.4167e+265, -5.4167e+265, ..., -5.4167e+265,
-5.4167e+265, -5.4167e+265]], dtype=torch.float64,
grad_fn=<SqueezeBackward1>)
linear tensor([[inf, inf, -inf, -inf, -inf, inf, inf, inf, inf, inf, inf, -inf, inf, inf, -inf, -inf]],
dtype=torch.float64, grad_fn=<AddmmBackward0>)
tensor([[inf, inf, -inf, -inf, -inf, inf, inf, inf, inf, inf, inf, -inf, inf, inf, -inf, -inf]],
dtype=torch.float64, grad_fn=<AddmmBackward0>)
epoch: 1 loss: nan
Thanks for helping
| Sorry, my reputation is not enough for me to comment directly. This may be caused by exploding gradients due to an excessive learning rate. It is recommended that you reduce the learning rate or use weight_decay.
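A one-line sketch of that advice (the optimizer choice and values are placeholders):
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=1e-5)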
| https://stackoverflow.com/questions/71503683/ |
An error with Omegaconf when running continuous image generation code | I found this author's PiggybackGAN code on Github (about continuous learning image generation)
The link below: https://github.com/kaushik333/Piggyback-GAN-Pytorch
The same problem was reported in the project's GitHub issues, but no one has solved it.
I want to run this code in my Linux environment. After configuring the environment and data set, I get the following error:
initialize network with normal
initialize network with normal
initialize network with normal
initialize network with normal
Length of loader is 10
learning rate 0.0002000 -> 0.0002000
save image!
Length of loader is 10
learning rate 0.0002000 -> 0.0002000
save image!
...
...
learning rate 0.0000040 -> 0.0000020
save image!
Length of loader is 10
learning rate 0.0000020 -> 0.0000000
save image!
Traceback (most recent call last):
File "/opt/data/private/Pig/Piggyback-GAN-Pytorch-main/pb_cycleGAN.py", line 231, in main
mp.spawn(train, nprocs=len(opt.gpu_ids), args=(opt,))
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 171, in spawn
while not spawn_context.join():
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 118, in join
raise Exception(msg)
Exception:
-- Process 0 terminated with the following error:
Traceback (most recent call last):
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/torch/multiprocessing/spawn.py", line 19, in _wrap
fn(i, *args)
File "/opt/data/private/Pig/Piggyback-GAN-Pytorch-main/pb_cycleGAN.py", line 88, in train
opt.netG_A_filter_list.append([layer.unc_filt.detach().cpu()])
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/listconfig.py", line 228, in append
self._format_and_raise(key=index, value=item, cause=e)
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/base.py", line 101, in _format_and_raise
type_override=type_override,
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/listconfig.py", line 224, in append
parent=self,
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/omegaconf.py", line 770, in _maybe_wrap
ref_type=ref_type,
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/omegaconf.py", line 714, in _node_wrap
ref_type=ref_type,
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/listconfig.py", line 68, in __init__
format_and_raise(node=None, key=None, value=None, cause=ex, msg=str(ex))
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/_utils.py", line 629, in format_and_raise
_raise(ex, cause)
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/listconfig.py", line 66, in __init__
self._set_value(value=content)
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/listconfig.py", line 521, in _set_value
self.append(item)
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/listconfig.py", line 228, in append
self._format_and_raise(key=index, value=item, cause=e)
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/base.py", line 101, in _format_and_raise
type_override=type_override,
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/_utils.py", line 694, in format_and_raise
_raise(ex, cause)
File "/root/anaconda3/envs/PiggybackGAN/lib/python3.6/site-packages/omegaconf/_utils.py", line 610, in _raise
raise ex # set end OC_CAUSE=1 for full backtrace
omegaconf.errors.UnsupportedValueType: Value 'Tensor' is not a supported primitive type
full_key: netG_A_filter_list[0][0]
reference_type=Optional[List[Any]]
object_type=list
Set the environment variable HYDRA_FULL_ERROR=1 for a complete stack trace.
Process finished with exit code 1
The information worth paying attention to is
File "/opt/data/private/Pig/Piggyback-GAN-Pytorch-main/pb_cycleGAN.py", line 88, in train
opt.netG_A_filter_list.append([layer.unc_filt.detach().cpu()])
omegaconf.errors.UnsupportedValueType: Value 'Tensor' is not a supported primitive type
full_key: netG_A_filter_list[0][0]
reference_type=Optional[List[Any]]
object_type=list
My setup has only a single GPU, not the 4 GPUs of the original author. I found the relevant part in the source code and conducted some tests to rule out a type mismatch. In addition, changing the version of Omegaconf does not solve my problem. The type of [layer.unc_filt.detach().cpu()] is list[Tensor] (this was shown in a code screenshot, not reproduced here).
I don't know how to solve this problem now, nor whether I should modify the code or whether the multiprocessing setup is to blame. Could someone please tell me what I should do?
| OmegaConf does not support assignment of non-primitive types to the config. This changed years ago.
There is a possibility that the author used a very old version of OmegaConf that did allow for this assignment, but based on his environment.yaml file he is using 2.0.6 which does not support it.
Contact the author of the code about this issue.
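A minimal sketch of the usual workaround in the meantime (variable name taken from the failing line): keep the tensors in a plain Python list instead of on the OmegaConf config object:
# instead of opt.netG_A_filter_list.append(...), which OmegaConf rejects:
netG_A_filter_list = []  # plain Python list, kept alongside opt
netG_A_filter_list.append([layer.unc_filt.detach().cpu()])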
| https://stackoverflow.com/questions/71506444/ |
How to handle hidden-cell output of 2-layer LSTM in PyTorch? | I have made a network with a LSTM and a fully connected layer in PyTorch. I want to test how an increase in the LSTM layers affects my performance.
Say my input is (6, 9, 14), meaning batch size 6, sequence size 9, and feature size 14, and I'm working on a task that has 6 classes, so I expect a 6-element one-hot-encoded tensor as the prediction for a single sequence. The output of this network after the FC layer should be (6, 6), however, if I use 2 LSTM layers it becomes (12, 6).
I don't understand how I should handle the output of the LSTM layer to decrease the number of batches from [2 * batch_size] to [batch_size]. Also, I know I'm using the hidden state as the input to the FC layer, I want to try it this way for now.
Should I sum or concatenate every two batches or anything else?? Cheers!
def forward(self, x):
hidden_0 = torch.zeros((self.lstm_layers, x.size(0), self.hidden_size), dtype=torch.double, device=self.device)
cell_0 = torch.zeros((self.lstm_layers, x.size(0), self.hidden_size), dtype=torch.double, device=self.device)
y1, (hidden_1, cell_1) = self.lstm(x, (hidden_0, cell_0))
hidden_1 = hidden_1.view(-1, self.hidden_size)
y = self.linear(hidden_1)
return y
| The hidden state shape of a multi-layer LSTM is (layers, batch_size, hidden_size); see the LSTM output docs. It contains the hidden state for each layer along the 0th dimension.
In your example you convert the shape into two dimensions here:
hidden_1 = hidden_1.view(-1, self.hidden_size)
this transforms the shape into (batch_size * layers, hidden_size).
What you would want to do is only use the hidden state of the last layer:
hidden = hidden_1[-1,:,:].view(-1, self.hidden_size) # (1, bs, hidden) -> (bs, hidden)
y = self.linear(hidden)
return y
| https://stackoverflow.com/questions/71508824/ |
TypeError: DataLoader found invalid type: |
TypeError: DataLoader found invalid type: <class 'numpy.ndarray'>
Hi everyone, I have encountered difficulties, I can't find a solution, please help.
The program encountered an error at the train_fn () function.
train.py
from sklearn.preprocessing import StandardScaler
import joblib
from tqdm import tqdm
import pandas as pd
import numpy as np
import torch_geometric.transforms as T
import torch
import torch.optim as optim
# from torch_geometric.data import DataLoader
from torch_geometric.loader import DataLoader
from model import *
from Constant import *
import os
print(os.getcwd())
# path = '/home/ktcodes/jktModel/data/a09'
path = './data/a09'
e2e_emb = joblib.load(f'{path}/e2e_emb.pkl.zip')
c2c_emb = joblib.load(f'{path}/c2c_emb.pkl.zip')
skill_prob = joblib.load(f'{path}/skill_prob.pkl.zip')
filtered_skill_prob = {}
channel = 10
for i, skill_id in enumerate(skill_prob.index):
if len(skill_prob[skill_id])>= channel:
filtered_skill_prob[skill_id] = skill_prob[skill_id]
joblib.dump(filtered_skill_prob, f'{path}/filtered_skill_prob.pkl.zip')
# normalization
scaler = StandardScaler()
all_c_v = []
for k,v in c2c_emb.items():
all_c_v.extend(list(v.numpy()))
all_c_v = scaler.fit_transform(np.array(all_c_v).reshape(-1,1))
all_c_v1 = {}
for i, (k,v) in enumerate(c2c_emb.items()):
all_c_v1[k] = all_c_v[i*10:(i+1)*10].reshape(-1,)
all_e_v = {}
for skill,qu_embs in e2e_emb.items():
q_num = qu_embs.shape[0]
temp_all_v = qu_embs.numpy().reshape(-1,)
temp_all_v = scaler.fit_transform(np.array(temp_all_v).reshape(-1,1))
all_e_v[skill] = temp_all_v.reshape(-1,10)
skill_emb = {}
for skill in tqdm(filtered_skill_prob.keys()):
temp_c = (np.array(all_c_v1[skill]))
temp_e = np.array(np.mean(all_e_v[skill], axis=0))
skill_emb[skill] = np.append(temp_c, temp_e)
prob_emb = {}
for skill in tqdm(filtered_skill_prob.keys()):
for i, prob in enumerate(filtered_skill_prob[skill]):
temp_c = (np.array(all_c_v1[skill]))
temp_e = (np.array(all_e_v[skill][i]))
new_emb = np.append(temp_c, temp_e)
if prob in prob_emb.keys():
prob_emb[prob] = np.row_stack((prob_emb[prob], new_emb)).squeeze().astype(np.int32)
# print(prob_emb[prob].shape)
else: prob_emb[prob] = new_emb
for prob in tqdm(prob_emb.keys()):
if len(prob_emb[prob].shape) > 1:
prob_emb[prob] = np.mean(prob_emb[prob], axis=0)
# Train/Test data
read_col = ['order_id', 'assignment_id', 'user_id', 'assistment_id', 'problem_id', 'correct',
'sequence_id', 'base_sequence_id', 'skill_id', 'skill_name', 'original']
target = 'correct'
# read in the data
df = pd.read_csv(f'{path}/skill_builder_data.csv', low_memory=False, encoding="ISO-8859-1")[read_col]
df = df.sort_values(['order_id', 'user_id'])
# delete empty skill_id
df = df.dropna(subset=['skill_id'])
df = df[~df['skill_id'].isin(['noskill'])]
df.skill_id = df.skill_id.astype('int')
print('After removing empty skill_id, records number %d' % len(df))
# delete scaffolding problems
df = df[df['original'].isin([1])]
print('After removing scaffolding problems, records number %d' % len(df))
#delete the users whose interaction number is less than min_inter_num
min_inter_num = 3
users = df.groupby(['user_id'], as_index=True)
delete_users = []
for u in users:
if len(u[1]) < min_inter_num:
delete_users.append(u[0])
print('deleted user number based min-inters %d' % len(delete_users))
df = df[~df['user_id'].isin(delete_users)]
df = df[['user_id', 'problem_id', 'skill_id', 'correct']]
print('After deleting some users, records number %d' % len(df))
# print('features: ', df['assistment_id'].unique(), df['answer_type'].unique())
df = df[df['skill_id'].isin(filtered_skill_prob.keys())]
df['skill_cat'] = df['skill_id'].astype('category').cat.codes
df['e_emb'] = df['problem_id'].apply(lambda r: prob_emb[r])
df['c_emb'] = df['skill_id'].apply(lambda r: skill_emb[r])
group_c = df[['user_id', 'c_emb', 'correct']].groupby('user_id').apply(lambda r: (np.array(r['c_emb'].tolist()).squeeze(), r['correct'].values))
train_group_c = group_c.sample(frac=0.8, random_state=2020)
test_group_c = group_c[~group_c.index.isin(train_group_c.index)]
joblib.dump(train_group_c, f'{path}/train_group_c.pkl.zip')
joblib.dump(test_group_c, f'{path}/test_group_c.pkl.zip')
# print(type(train_group_c))
# # print(train_group_c.values)
# userid = train_group_c.index
# print(userid)
# q, qa = train_group_c[userid[0]]
# print(q, qa)
train_dataset = DKTDataset(train_group_c, max_seq=MAX_SEQ)
train_dataloader = DataLoader(train_dataset, batch_size=BATCH_SIZE, shuffle=True)
valid_dataset = DKTDataset(test_group_c, max_seq=MAX_SEQ)
valid_dataloader = DataLoader(valid_dataset, batch_size=BATCH_SIZE, shuffle=True)
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = DKT(input_dim, hidden_dim, layer_dim, output_dim, device)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
criterion = nn.BCEWithLogitsLoss()
scheduler = torch.optim.lr_scheduler.OneCycleLR(
optimizer, max_lr=MAX_LEARNING_RATE, steps_per_epoch=len(train_dataloader), epochs=EPOCHS
)
model.to(device)
criterion.to(device)
for epoch in (range(EPOCHS)):
# there
loss, acc, auc = train_fn(model, train_dataloader, optimizer, criterion, device)
# print("epoch - {}/{} train: - {:.3f} acc - {:.3f} auc - {:.3f}".format(epoch+1, EPOCHS, loss, acc, auc))
loss, acc, pre, rec, f1, auc = valid_fn(model, valid_dataloader, criterion, device)
res = "epoch - {}/{} valid: - {:.3f} acc - {:.3f} pre - {:.3f} rec - {:.3f} f1 - {:3f} auc - {:.3f}".format(epoch+1, EPOCHS, loss, acc, pre, rec, f1, auc)
print(res)
The program does not go to this function:
def train_fn(model, dataloader, optimizer, criterion, scheduler=None, device="cpu"):
print('enter...')
print("dataloader", type(dataloader))
model.train()
train_loss = []
num_corrects = 0
num_total = 0
labels = []
outs = []
for x_emb, q_next, y in (dataloader):
x = x_emb.to(device).float()
y = y.to(device).float()
q_next = q_next.to(device).float()
out = model(x, q_next).squeeze().astype(np.int32)#[:, :-1]
loss = criterion(out, y)
loss.backward()
optimizer.step()
# scheduler.step()
train_loss.append(loss.item())
target_mask = (q_next!=0).unique(dim=2).squeeze().astype(np.int32)
# target_mask = (y!=-1)
filtered_out = torch.masked_select(out, target_mask)
filtered_label = torch.masked_select(y, target_mask)
filtered_pred = (torch.sigmoid(filtered_out) >= 0.5).long()
num_corrects = num_corrects + (filtered_pred == filtered_label).sum().item()
num_total = num_total + len(filtered_label)
labels.extend(filtered_label.view(-1).data.cpu().numpy())
outs.extend(filtered_pred.view(-1).data.cpu().numpy())
acc = num_corrects / num_total
auc = roc_auc_score(labels, outs)
loss = np.mean(train_loss)
return loss, acc, auc
Error info:
TypeError Traceback (most recent call last)
~/kt/jktModel/embedding_dkt.py in <module>
145 for epoch in (range(EPOCHS)):
146 print("ashkdgjggvnskaj")
--> 147 loss, acc, auc = train_fn(model, train_dataloader, optimizer, criterion, device)
148 # print("epoch - {}/{} train: - {:.3f} acc - {:.3f} auc - {:.3f}".format(epoch+1, EPOCHS, loss, acc, auc))
149 loss, acc, pre, rec, f1, auc = valid_fn(model, valid_dataloader, criterion, device)
~/kt/jktModel/model.py in train_fn(model, dataloader, optimizer, criterion, scheduler, device)
110 model.train()
111 train_loss = []
--> 112 num_corrects = 0
113 num_total = 0
114 labels = []
~/anaconda3/envs/dkt/lib/python3.8/site-packages/torch/utils/data/dataloader.py in __next__(self)
519 if self._sampler_iter is None:
520 self._reset()
--> 521 data = self._next_data()
522 self._num_yielded += 1
523 if self._dataset_kind == _DatasetKind.Iterable and \
~/anaconda3/envs/dkt/lib/python3.8/site-packages/torch/utils/data/dataloader.py in _next_data(self)
559 def _next_data(self):
560 index = self._next_index() # may raise StopIteration
--> 561 data = self._dataset_fetcher.fetch(index) # may raise StopIteration
562 if self._pin_memory:
563 data = _utils.pin_memory.pin_memory(data)
~/anaconda3/envs/dkt/lib/python3.8/site-packages/torch/utils/data/_utils/fetch.py in fetch(self, possibly_batched_index)
50 else:
51 data = self.dataset[possibly_batched_index]
---> 52 return self.collate_fn(data)
~/anaconda3/envs/dkt/lib/python3.8/site-packages/torch_geometric/loader/dataloader.py in __call__(self, batch)
32 return type(elem)(*(self(s) for s in zip(*batch)))
33 elif isinstance(elem, Sequence) and not isinstance(elem, str):
---> 34 return [self(s) for s in zip(*batch)]
35
36 raise TypeError(f'DataLoader found invalid type: {type(elem)}')
~/anaconda3/envs/dkt/lib/python3.8/site-packages/torch_geometric/loader/dataloader.py in <listcomp>(.0)
32 return type(elem)(*(self(s) for s in zip(*batch)))
33 elif isinstance(elem, Sequence) and not isinstance(elem, str):
---> 34 return [self(s) for s in zip(*batch)]
35
36 raise TypeError(f'DataLoader found invalid type: {type(elem)}')
~/anaconda3/envs/dkt/lib/python3.8/site-packages/torch_geometric/loader/dataloader.py in __call__(self, batch)
34 return [self(s) for s in zip(*batch)]
35
---> 36 raise TypeError(f'DataLoader found invalid type: {type(elem)}')
37
38 def collate(self, batch): # Deprecated...
TypeError: DataLoader found invalid type: <class 'numpy.ndarray'>
I have no idea. This has been plaguing me for a few days.
| Slightly late, but in case anyone else encounters a similar issue - torch_geometric.loader.DataLoader cannot handle numpy arrays, so you must convert them to torch.Tensor in your dataset first. Alternatively, you could pass a custom collate function to the dataloader.
In this particular case, your DKTDataset returns tuples x_emb, q_next, y, some of which are of type numpy.ndarray, so convert them to tensors in DKTDataset.__getitem__().
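As a sketch of that first option, assuming DKTDataset currently builds numpy arrays internally (self._make_sample below is a hypothetical helper standing in for your existing array-building code):
import torch

class DKTDataset(torch.utils.data.Dataset):
    def __getitem__(self, index):
        x_emb, q_next, y = self._make_sample(index)  # hypothetical helper returning numpy arrays
        return (torch.from_numpy(x_emb).float(),
                torch.from_numpy(q_next).float(),
                torch.from_numpy(y).float())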
| https://stackoverflow.com/questions/71512763/ |
How to use gather() in python to return values at specific indices of a tensor | I have a tensor which looks like this:
tensor([[-0.0150, 0.1234],
[-0.0184, 0.1062],
[-0.0139, 0.1113],
[-0.0088, 0.0726]])
And another that looks like this:
tensor([[1.],
[1.],
[0.],
[0.]])
I want to return, for each row, the value from the first tensor at the index given by the second tensor.
So our output would be:
tensor([0.1234], [0.1062], [-0.0139], [-0.0088]])
So far I have this code:
return torch.gather(tensor1, tensor2)
However I am getting the error:
TypeError: gather() received an invalid combination of arguments - got (Tensor, Tensor), but expected one of:
* (Tensor input, int dim, Tensor index, *, bool sparse_grad, Tensor out)
* (Tensor input, name dim, Tensor index, *, bool sparse_grad, Tensor out)
What am I doing wrong?
| You are missing the dim argument.
You can see an example here: https://pytorch.org/docs/stable/generated/torch.gather.html
For your case I think that torch.gather(tensor1, 1, tensor2.long()) should work; note that gather requires the index tensor to have an integer dtype, hence the .long() cast.
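Putting it together with the tensors from the question:
import torch

t = torch.tensor([[-0.0150, 0.1234],
                  [-0.0184, 0.1062],
                  [-0.0139, 0.1113],
                  [-0.0088, 0.0726]])
idx = torch.tensor([[1.], [1.], [0.], [0.]])

result = torch.gather(t, 1, idx.long())
print(result)  # tensor([[ 0.1234], [ 0.1062], [-0.0139], [-0.0088]])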
| https://stackoverflow.com/questions/71526425/ |
Fast GPU computation on PyTorch sparse tensor | Is it possible to do operations on each row of a PyTorch MxN tensor, but only at certain indices (for instance nonzero) to save time?
I'm particularly interested in the case where M and N are very large and only a few elements in each row are nonzero.
(Toy example) From this large tensor:
Large = Tensor([[0, 1, 3, 0, 0, 0],
[0, 0, 0, 0, 5, 0],
[1, 0, 0, 5, 0, 1]])
I'd like to use something like the following smaller "tensor":
irregular_tensor = [ [1, 3],
[5],
[1, 5, 1]]
and do the same exact computation on each row (for instance involving torch.cumsum and torch.exp) to obtain an output of size Mx1.
Is there a way to do that?
| You might be interested in the Torch Sparse functionality. You can convert a PyTorch Tensor to a PyTorch Sparse tensor using the to_sparse() method of the Tensor class.
You can then access a tensor that contains all the indices in Coordinate format by the Sparse Tensor's indices() method, and a tensor that contains the associated values by the Sparse Tensor's values() method.
This also has the benefit of saving you memory in terms of storing the tensor.
There is some functionality for using other Torch functions on Sparse Tensors, but this is limited.
Also: be aware that this part of the API is still in Beta and subject to changes.
| https://stackoverflow.com/questions/71531822/ |
Convert nn.Linear to nn.Conv1d | The format I want to output my model to doesn't support nn.Linear, so I'd like to change it to do the exact same thing but with nn.Conv1d.
My input is of shape (N, A, B) and I'd like to have a linear layer that transforms that into an output of shape (N, A, C). Previously, I was doing this with the layer nn.Linear(B, C). I'm able to produce working code that has the correct dimensions by doing
t1 = t1.transpose(1,2)
conv = nn.Conv1d(
in_channels=B,
out_channels=C,
kernel_size=1
)
t2 = conv(t1)
t2 = t2.transpose(1,2)
Is this functionally equivalent to doing t2 = nn.Linear(B,C)(t1)?
If so, is there a better/less verbose way of doing it?
| Yes this is essentially doing the same thing.
Instead of transposing you could flatten the leading dimensions and add a trailing dummy length dimension (Conv1d expects a 3-D input):
t1 = t1.reshape(-1, t1.size(-1)).unsqueeze(-1)  # (N*A, B, 1)
t2 = conv(t1)
t2 = t2.squeeze(-1).reshape(N, A, -1)  # back to (N, A, C)
This has the advantage that the data doesn't have to be reordered (reshape on a contiguous tensor is just a view), but the effect is probably negligible.
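A quick numerical check of the equivalence (a sketch; the Conv1d weights are copied from the Linear layer so the two outputs can be compared directly):
import torch
import torch.nn as nn

N, A, B, C = 2, 5, 8, 3
t1 = torch.randn(N, A, B)

linear = nn.Linear(B, C)
conv = nn.Conv1d(B, C, kernel_size=1)
with torch.no_grad():
    conv.weight.copy_(linear.weight.view(C, B, 1))
    conv.bias.copy_(linear.bias)

out_linear = linear(t1)
out_conv = conv(t1.transpose(1, 2)).transpose(1, 2)
print(torch.allclose(out_linear, out_conv, atol=1e-6))  # True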
| https://stackoverflow.com/questions/71532599/ |
open and read PT. file using python code by pytorch | I want to read a PT file with python and I don't know how, I want to open it with python
can you help me please, any ideas?
| If your .pt file contains the weights and biases of a model, you must first install PyTorch on your PC.
(See the PyTorch installation instructions for more information.)
Then use this:
model = torch.load(PATH)
(See the PyTorch tutorial on saving and loading models.)
You could iterate the parameters to get all the weight and bias values (see weight and bias):
for param in model.parameters():
    print(param.shape)
# or, with the parameter names
for name, param in model.named_parameters():
    print(name, param.shape)
| https://stackoverflow.com/questions/71533654/ |
How to evaluate a trained model in pytorch? | I have trained a model and save model using torch.save. Then after training I have loaded the model using train.load but I am getting this error
Traceback (most recent call last):
File "/home/fsdfs.py", line 219, in <module>
test(model, 'cuda', testloader)
File "/home/fsdfs.py", line 201, in test
model.eval()
AttributeError: 'collections.OrderedDict' object has no attribute 'eval'
Here is my code for test part
model = torch.load("train_5.pth")
def test(model, device, test_loader):
model.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
data, target = data.to('cuda'), target.to('cuda')
output = model(data)
#test_loss += f.cross_entropy(output, target, reduction='sum').item() # sum up batch loss
pred = output.argmax(1, keepdim=True) # get the index of the max log-probability
print(pred, target)
correct += pred.eq(target.view_as(pred)).sum().item()
test_loss /= len(test_loader.dataset)
print('\nTest set: Accuracy: {}/{} ({:.0f}%)\n'.format(
correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test(model, 'cuda', testloader)
I have commented out the training part of the code in the file, so basically this plus the data loading part is all that is in the file now.
What am I doing wrong?
| Like @jodag has said. you probably have saved a state_dict instead of a model, which is recommended by the community as well.
This link explains the difference between the two. To keep my answer self-contained, I copy the snippet from the documentation. Here is the recommended way:
Save:
torch.save(model.state_dict(), PATH)
Load:
model = TheModelClass(*args, **kwargs)
model.load_state_dict(torch.load(PATH))
model.eval()
You could also save the entire model instead of saving the state_dict, if you really need to use the model the way you do.
Save:
torch.save(model, PATH)
Load:
# Model class must be defined somewhere
model = torch.load(PATH)
model.eval()
| https://stackoverflow.com/questions/71534943/ |
For a PyTorch RNN model, can we infer the input from its output? | Hi all,
I am wondering about torch.nn.RNN: if I know the final output, is it possible to infer its input value? For example:
myrnn = nn.RNN(4, 2, 1, batch_first=True)
expected_out, hidden = myrnn(input)
expected_out: tensor([[[-0.7773, -0.2031]],
[[-0.4129, -0.1802]],
[[ 0.0599, -0.0151]],
[[-0.9273, 0.2683]],
[[ 0.6161, 0.5412]]])
Thank you so much!!!
| What you are asking is theoretically impossible
Neural networks in general represent functions that are impossible to invert, as they are not guaranteed to be bijective regardless of the underlying architecture.
This means that neither an RNN nor any other neural network is invertible.
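A quick sketch illustrating the information loss: with most random initializations, the saturating tanh maps very different inputs to practically identical outputs, so the input cannot be recovered from the output:
import torch
import torch.nn as nn

torch.manual_seed(0)
myrnn = nn.RNN(4, 2, 1, batch_first=True)

x1 = torch.full((1, 1, 4), 1e4)
x2 = torch.full((1, 1, 4), 2e4)  # a very different input
y1, _ = myrnn(x1)
y2, _ = myrnn(x2)
print(torch.allclose(y1, y2, atol=1e-4))  # typically True: tanh has saturated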
| https://stackoverflow.com/questions/71538973/ |
Temporal Fusion Transformer (Pytorch Forecasting): `hidden_size` parameter | The Temporal-Fusion-Transformer (TFT) model in the PytorchForecasting package has several parameters (see: https://pytorch-forecasting.readthedocs.io/en/latest/_modules/pytorch_forecasting/models/temporal_fusion_transformer.html#TemporalFusionTransformer).
What does the hidden_size parameter exactly refer to? My best guess is that it refers to the number of neurons contained in the GRN component of the TFT. If so, in which layer are these neurons contained?
I found the documentation not really helpful in this case, since they describe the hidden_size parameter as: "hidden size of network which is its main hyperparameter and can range from 8 to 512"
Side note: part of my ignorance might be due to the fact that I am not fully familiar with the individual components of the TFT model.
| After a bit of research on the source code provided in the link, I was able to figure out how hidden_size is the main hyperparameter of the model. Here it is:
hidden_size is indeed the number of neurons in each dense layer of the GRN. You can check out the structure of the GRN at https://arxiv.org/pdf/1912.09363.pdf (page 6, Figure 2). Note that since the final layer of the GRN is just a normalization layer, the output of the GRN also has dimension hidden_size.
How is this the main hyperparameter of the model? By looking at the structure of the TFT model (on page 6 as well), the GRN unit appears in the Variable Selection process, in the Static Enrichment section and in the Position-wise Feed Forward section, so basically in every step of the learning process. Each one of these GRNs is built in the same way (only the input size varies).
| https://stackoverflow.com/questions/71555080/ |
How to correctly combine LSTM with Linear layer | I got and LSTM that gives me output (4,32,32) i pass it to the Linear Layer(hidden size of LSTM, num_classes=1) and it gives me an output shape (4,32,1). I am trying to solve a wake word model for my AI assistant.
I have 2 classes i want to predict from. 0 is not wake up and 1 is the wake up AI.
My batch size is 32. But the output is (4,32,1). Isnt it should be 32,1 or something like that so i will know that there is one prediction for 1 audio mfcc?
| Not quite. You need to reshape your data to (32, 1) or (1, 32) in order for your linear layer to work. You can achieve this by adding a dimension with torch.unsqueeze() or even directly with torch.view(). If you use the unsqueeze function, the new shape should be (32, 1). If you use the view function, the new shape should be (1, 32).
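One common way to end up with one prediction per audio clip is to feed only the last layer's hidden state to the linear layer. A minimal sketch, assuming the (4, 32, 32) tensor is the hidden state h_n of a 4-layer LSTM with hidden size 32 and batch size 32 (the input feature size 13 is just a placeholder):
import torch
import torch.nn as nn

lstm = nn.LSTM(input_size=13, hidden_size=32, num_layers=4, batch_first=True)
fc = nn.Linear(32, 1)

x = torch.randn(32, 100, 13)  # (batch, seq_len, n_features)
_, (h_n, _) = lstm(x)         # h_n: (num_layers, batch, hidden) = (4, 32, 32)
logits = fc(h_n[-1])          # last layer's hidden state -> (32, 1)
print(logits.shape)           # torch.Size([32, 1])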
| https://stackoverflow.com/questions/71566905/ |
Pytorch dist.all_gather_object hangs | I'm using dist.all_gather_object (PyTorch version 1.8) to collect sample ids from all GPUs:
for batch in dataloader:
video_sns = batch["video_ids"]
logits = model(batch)
group_gather_vdnames = [None for _ in range(envs['nGPU'])]
group_gather_logits = [torch.zeros_like(logits) for _ in range(envs['nGPU'])]
dist.all_gather(group_gather_logits, logits)
dist.all_gather_object(group_gather_vdnames, video_sns)
The line dist.all_gather(group_gather_logits, logits) works properly,
but the program hangs at the line dist.all_gather_object(group_gather_vdnames, video_sns).
I wonder why the program hangs at dist.all_gather_object(). How can I fix it?
EXTRA INFO:
I run my ddp code on a local machine with multiple GPUs. The start script is:
export NUM_NODES=1
export NUM_GPUS_PER_NODE=2
export NODE_RANK=0
export WORLD_SIZE=$(($NUM_NODES * $NUM_GPUS_PER_NODE))
python -m torch.distributed.launch \
--nproc_per_node=$NUM_GPUS_PER_NODE \
--nnodes=$NUM_NODES \
--node_rank $NODE_RANK \
main.py \
--my_args
| Turns out we need to set the device id manually as mentioned in the docstring of dist.all_gather_object() API.
Adding
torch.cuda.set_device(envs['LRANK']) # my local gpu_id
and the code works.
I always thought the GPU ID was set automatically by PyTorch dist; it turns out it's not.
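For reference, a sketch of where the call fits in a typical launcher setup (assuming the local rank is exposed via the LOCAL_RANK environment variable; with older torch.distributed.launch it may instead arrive as the --local_rank argument):
import os
import torch
import torch.distributed as dist

local_rank = int(os.environ.get("LOCAL_RANK", "0"))
torch.cuda.set_device(local_rank)  # must run before any collectives
dist.init_process_group(backend="nccl")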
| https://stackoverflow.com/questions/71568524/ |
Inspect signature of a python function without the __code__ attribute (e.g. PyTorch) | I am trying to determine the signature of a PyTorch function at runtime (e.g. torch.empty or torch.zeros). But something like inspect.signature(torch.empty) doesn't work here:
>>> import inspect
>>> import torch
>>> def add(a,b):
... return a+b
...
>>> inspect.signature(add)
<Signature (a, b)>
>>> inspect.signature(torch.empty)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/anaconda/envs/gc/lib/python3.8/inspect.py", line 3105, in signature
return Signature.from_callable(obj, follow_wrapped=follow_wrapped)
File "/home/anaconda/envs/gc/lib/python3.8/inspect.py", line 2854, in from_callable
return _signature_from_callable(obj, sigcls=cls,
File "/home/anaconda/envs/gc/lib/python3.8/inspect.py", line 2308, in _signature_from_callable
return _signature_from_builtin(sigcls, obj,
File "/home/anaconda/envs/gc/lib/python3.8/inspect.py", line 2119, in _signature_from_builtin
raise ValueError("no signature found for builtin {!r}".format(func))
ValueError: no signature found for builtin <built-in method empty of type object at 0x7f382e1321c0>
I am guessing the underlying reason to be the absence of the __code__ attribute
>>> add.__code__
<code object add at 0x7f382f3a9710, file "<stdin>", line 1>
>>> torch.empty.__code__
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
AttributeError: 'builtin_function_or_method' object has no attribute '__code__'
Is there any way to inspect the signatures of python functions in such cases?
| Maybe not the best option, but as a workaround you can parse torch.empty.__doc__:
print(torch.empty.__doc__)
empty(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, memory_format=torch.contiguous_format) -> Tensor
Returns a tensor filled with uninitialized data. The shape of the tensor is
defined by the variable argument :attr:`size`.
Args:
...
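A sketch of extracting just the signature line (the exact docstring layout may vary between PyTorch versions):
import torch

doc = torch.empty.__doc__
signature = doc.strip().splitlines()[0]
print(signature)
# empty(*size, *, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False, pin_memory=False, memory_format=torch.contiguous_format) -> Tensor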
| https://stackoverflow.com/questions/71572847/ |
Pytorch AttributeError: can't set attribute | I'm using pytorch lightining and I have this error but I'm non really understanding what is the problem. I create a Deep Learning pipeline to run with hyperparameters searching and I think that the problem is in.
I omitted some part of the code because I think they are irrelevant for this issue (due to stackoverflow restrictions). Thanks for the help!
class ProtBertBFDClassifier(pl.LightningModule):
def __init__(self,hparams) -> None:
super(ProtBertBFDClassifier, self).__init__()
self.hparams = hparams
self.batch_size = self.hparams.batch_size
self.model_name = pretrained_model_name
self.dataset = Loc_dataset()
self.metric_acc = Accuracy()
# build model
self.__build_model()
# Loss criterion initialization.
self.__build_loss()
if self.hparams.nr_frozen_epochs > 0:
self.freeze_encoder()
else:
self._frozen = False
self.nr_frozen_epochs = self.hparams.nr_frozen_epochs
def __build_model(self) -> None:
""" Init BERT model + tokenizer + classification head."""
self.ProtBertBFD = BertModel.from_pretrained(self.model_name,gradient_checkpointing=self.hparams.gradient_checkpointing)
self.encoder_features = 1024
# Tokenizer
self.tokenizer = BertTokenizer.from_pretrained(self.model_name, do_lower_case=False)
# Label Encoder
self.label_encoder = LabelEncoder(
self.hparams.label_set.split(","), reserved_labels=[]
)
self.label_encoder.unknown_index = None
# Classification head
self.classification_head = nn.Sequential(
nn.Linear(self.encoder_features*4, self.label_encoder.vocab_size),
nn.Tanh(),
)
.....
def predict(self, sample: dict) -> dict:
""" Predict function.
:param sample: dictionary with the text we want to classify.
Returns:
Dictionary with the input text and the predicted label.
"""
......
def pool_strategy(self, features,
pool_cls=True, pool_max=True, pool_mean=True,
pool_mean_sqrt=True):
token_embeddings = features['token_embeddings']
cls_token = features['cls_token_embeddings']
attention_mask = features['attention_mask']
## Pooling strategy
output_vectors = []
if pool_cls:
output_vectors.append(cls_token)
if pool_max:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
token_embeddings[input_mask_expanded == 0] = -1e9 # Set padding tokens to large negative value
max_over_time = torch.max(token_embeddings, 1)[0]
output_vectors.append(max_over_time)
if pool_mean or pool_mean_sqrt:
input_mask_expanded = attention_mask.unsqueeze(-1).expand(token_embeddings.size()).float()
sum_embeddings = torch.sum(token_embeddings * input_mask_expanded, 1)
#If tokens are weighted (by WordWeights layer), feature 'token_weights_sum' will be present
if 'token_weights_sum' in features:
sum_mask = features['token_weights_sum'].unsqueeze(-1).expand(sum_embeddings.size())
else:
sum_mask = input_mask_expanded.sum(1)
sum_mask = torch.clamp(sum_mask, min=1e-9)
if pool_mean:
output_vectors.append(sum_embeddings / sum_mask)
if pool_mean_sqrt:
output_vectors.append(sum_embeddings / torch.sqrt(sum_mask))
output_vector = torch.cat(output_vectors, 1)
return output_vector
........
inputs = self.tokenizer.batch_encode_plus(sample["seq"],
add_special_tokens=True,
padding=True,
truncation=True,
max_length=self.hparams.max_length)
if not prepare_target:
return inputs, {}
# Prepare target:
try:
targets = {"labels": self.label_encoder.batch_encode(sample["label"])}
return inputs, targets
except RuntimeError:
print(sample["label"])
raise Exception("Label encoder found an unknown label.")
......
def validation_step(self, batch: tuple, batch_nb: int, *args, **kwargs) -> dict:
""" Similar to the training step but with the model in eval mode.
Returns:
- dictionary passed to the validation_end function.
"""
inputs, targets = batch
model_out = self.forward(**inputs)
loss_val = self.loss(model_out, targets)
y = targets["labels"]
y_hat = model_out["logits"]
labels_hat = torch.argmax(y_hat, dim=1)
val_acc = self.metric_acc(labels_hat, y)
output = OrderedDict({"val_loss": loss_val, "val_acc": val_acc,})
return output
def validation_epoch_end(self, outputs: list) -> dict:
""" Function that takes as input a list of dictionaries returned by the validation_step
function and measures the model performance accross the entire validation set.
Returns:
- Dictionary with metrics to be added to the lightning logger.
"""
val_loss_mean = torch.stack([x['val_loss'] for x in outputs]).mean()
val_acc_mean = torch.stack([x['val_acc'] for x in outputs]).mean()
tqdm_dict = {"val_loss": val_loss_mean, "val_acc": val_acc_mean}
result = {
"progress_bar": tqdm_dict,
"log": tqdm_dict,
"val_loss": val_loss_mean,
}
return result
.......
def test_epoch_end(self, outputs: list) -> dict:
""" Function that takes as input a list of dictionaries returned by the validation_step
function and measures the model performance accross the entire validation set.
Returns:
- Dictionary with metrics to be added to the lightning logger.
"""
test_loss_mean = torch.stack([x['test_loss'] for x in outputs]).mean()
test_acc_mean = torch.stack([x['test_acc'] for x in outputs]).mean()
tqdm_dict = {"test_loss": test_loss_mean, "test_acc": test_acc_mean}
result = {
"progress_bar": tqdm_dict,
"log": tqdm_dict,
"test_loss": test_loss_mean,
}
return result
def configure_optimizers(self):
""" Sets different Learning rates for different parameter groups. """
parameters = [
{"params": self.classification_head.parameters()},
{
"params": self.ProtBertBFD.parameters(),
"lr": self.hparams.encoder_learning_rate,
},
]
optimizer = optim.Adam(parameters, lr=self.hparams.learning_rate)
return [optimizer], []
def __retrieve_dataset(self, train=True, val=True, test=True):
""" Retrieves task specific dataset """
if train:
return self.dataset.load_dataset(hparams.train_csv)
elif val:
return self.dataset.load_dataset(hparams.dev_csv)
elif test:
return self.dataset.load_dataset(hparams.test_csv)
else:
print('Incorrect dataset split')
def train_dataloader(self) -> DataLoader:
""" Function that loads the train set. """
self._train_dataset = self.__retrieve_dataset(val=False, test=False)
return DataLoader(
dataset=self._train_dataset,
sampler=RandomSampler(self._train_dataset),
batch_size=self.hparams.batch_size,
collate_fn=self.prepare_sample,
num_workers=self.hparams.loader_workers,
)
....
@classmethod
def add_model_specific_args(
cls, parser: HyperOptArgumentParser
) -> HyperOptArgumentParser:
""" Parser for Estimator specific arguments/hyperparameters.
:param parser: HyperOptArgumentParser obj
Returns:
- updated parser
"""
parser.opt_list(
"--max_length",
default=1536,
type=int,
help="Maximum sequence length.",
)
parser.add_argument(
"--encoder_learning_rate",
default=5e-06,
type=float,
help="Encoder specific learning rate.",
)
return parser
# these are project-wide arguments
parser = HyperOptArgumentParser(
strategy="random_search",
description="Minimalist ProtBERT Classifier",
add_help=True,
)
# Early Stopping
parser.add_argument(
"--monitor", default="val_acc", type=str, help="Quantity to monitor."
)
parser.add_argument(
"--metric_mode",
default="max",
type=str,
help="If we want to min/max the monitored quantity.",
choices=["auto", "min", "max"],
)
parser.add_argument(
"--patience",
default=5,
type=int,
help=(
"Number of epochs with no improvement "
"after which training will be stopped."
),
)
parser.add_argument(
"--accumulate_grad_batches",
default=32,
type=int,
help=(
"Accumulated gradients runs K small batches of size N before "
"doing a backwards pass."
),
)
# gpu/tpu args
parser.add_argument("--gpus", type=int, default=1, help="How many gpus")
parser.add_argument("--tpu_cores", type=int, default=None, help="How many tpus")
parser.add_argument(
"--val_percent_check",
default=1.0,
type=float,
help=(
"If you don't want to use the entire dev set (for debugging or "
"if it's huge), set how much of the dev set you want to use with this flag."
),
)
# each LightningModule defines arguments relevant to it
parser = ProtBertBFDClassifier.add_model_specific_args(parser)
hparams = parser.parse_known_args()[0]
"""
Main training routine specific for this project
:param hparams:
"""
seed_everything(hparams.seed)
# ------------------------
# 1 INIT LIGHTNING MODEL
# ------------------------
model = ProtBertBFDClassifier(hparams)
This is the error:
1 frames
<ipython-input-26-561494d91469> in __init__(self)
10 def __init__(self) -> None:
11 super(ProtBertBFDClassifier, self).__init__()
---> 12 self.hparams = parser.parse_known_args()[0]
13 self.batch_size = self.hparams.batch_size
14
/usr/local/lib/python3.7/dist-packages/torch/nn/modules/module.py in __setattr__(self, name, value)
1223 buffers[name] = value
1224 else:
-> 1225 object.__setattr__(self, name, value)
1226
1227 def __delattr__(self, name):
AttributeError: can't set attribute
| pip install pytorch-lightning==1.2.10
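The reason is that in newer Lightning versions self.hparams became a read-only property, so self.hparams = hparams raises this AttributeError. If you stay on a newer version, the supported pattern is self.save_hyperparameters(); a sketch of the change in the question's module:
class ProtBertBFDClassifier(pl.LightningModule):
    def __init__(self, hparams) -> None:
        super(ProtBertBFDClassifier, self).__init__()
        self.save_hyperparameters(hparams)  # populates self.hparams instead of assigning to it
        self.batch_size = self.hparams.batch_size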
| https://stackoverflow.com/questions/71584409/ |
Find index where a sub-tensor does not equal to a given tensor in Pytorch | I have a tensor, for example,
a = [[15,30,0,2], [-1,-1,-1,-1], [10, 20, 40, 60], [-1,-1,-1,-1]]
which has the shape (4,4).
How can I find the indices of the rows that are not equal to the sub-tensor
[-1,-1,-1,-1]
using PyTorch? The expected output I want to get is
[0,2]
| You can compare the elements for each row of the tensor using torch.any(), and then use .nonzero() and .flatten() to generate the indices:
torch.any(a != torch.Tensor([-1, -1, -1, -1]), axis=1).nonzero().flatten()
For example,
import torch
a = torch.Tensor([[15,30,0,2], [-1,-1,-1,-1], [10, 20, 40, 60], [-1,-1,-1,-1]])
result = torch.any(a != torch.Tensor([-1, -1, -1, -1]), axis=1).nonzero().flatten()
print(result)
outputs:
tensor([0, 2])
| https://stackoverflow.com/questions/71595684/ |
Reproducibility issue with PyTorch | I'm running a script with the same seed and I see results are reproduced on consecutive runs but somehow running the same script with the same seed changes the output after a few days. I'm only getting a short-term reproducibility which is weird. For reproducibility my script includes the following statements already:
torch.backends.cudnn.benchmark = False
torch.backends.cudnn.deterministic = True
torch.use_deterministic_algorithms(True)
random.seed(args.seed)
np.random.seed(args.seed)
torch.manual_seed(args.seed)
I also checked the sequence of instance ids created by the RandomSampler for the train DataLoader, which is maintained across runs. I also set num_workers=0 in the dataloader. What could be causing the output to change?
| PyTorch is actually not fully deterministic by default, meaning that with a set seed, some PyTorch operations will simply behave differently and diverge from previous runs, given enough time. This is due to algorithm, CUDA, and backprop optimizations.
This is a good read: https://pytorch.org/docs/stable/notes/randomness.html
The above page lists which operations are non-deterministic. It is generally discouraged that you disable their use, but it can be done with:
torch.use_deterministic_algorithms(True)
This may also restrict which operations can be used.
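Note that when deterministic algorithms are enforced on CUDA (version 10.2 or later), some operations additionally require the CUBLAS_WORKSPACE_CONFIG environment variable to be set before CUDA initializes; a sketch:
import os
os.environ["CUBLAS_WORKSPACE_CONFIG"] = ":4096:8"  # or ":16:8"

import torch
torch.use_deterministic_algorithms(True)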
| https://stackoverflow.com/questions/71600683/ |
ValueError: num_samples should be a positive integer value, but got num_samples=0 | I have data organized as follows: /dataset/train_or_validation/neg_or_pos_class/images.png
So, inside train or validation I have 2 folders, 1 for negative and 1 for positive.
I get the error from the title, ValueError: num_samples should be a positive integer value, but got num_samples=0, because basically I am inside /dataset/train_or_validation, but then I need to access the neg or pos folders. Images are named in this format: MCUCXR_0000_1.png for the positive class and MCUCXR_0000_0.png for the negative class. I was thinking of extracting all the images from the folders, in order to have /dataset/train_or_validation/images.png, but in this case how can I specify which is the class?
Or, how can I iterate through the positive/negative folders?
This is my code:
"""Montgomery Shard Descriptor."""
import logging
import os
from typing import List
from torchvision.datasets import ImageFolder
from torch.utils.data import DataLoader
from pathlib import Path
import numpy as np
import requests
from openfl.interface.interactive_api.shard_descriptor import ShardDataset
from openfl.interface.interactive_api.shard_descriptor import ShardDescriptor
from torchvision import transforms
# Compose transformations
train_transform = transforms.Compose([
transforms.RandomHorizontalFlip(),
transforms.RandomVerticalFlip(),
transforms.Resize((512, 512)),
transforms.ToTensor(),
])
test_transform = transforms.Compose([
transforms.Resize((512, 512)),
transforms.ToTensor(),
])
logger = logging.getLogger(__name__)
class MontgomeryShardDataset(ShardDataset):
"""Montgomery Shard dataset class."""
def __init__(self, dataset_dir: Path, dataset_type: str,):
"""Initialize MontgomeryDataset."""
self.data_type = dataset_type
self.dataset_dir = dataset_dir
print(self.dataset_dir)
self.imgs_path = list(dataset_dir.glob('*.png'))
def __getitem__(self, index: int):
"""Return an item by the index."""
img_path = self.imgs_path[index]
img = Image.open(img_path)
return img
def __len__(self):
"""Return the len of the dataset."""
return len(self.imgs_path)
class MontgomeryShardDescriptor(ShardDescriptor):
"""Montgomery Shard descriptor class."""
def __init__(
self,
data_folder: str = 'montgomery_data',
**kwargs
):
"""Initialize MontgomeryShardDescriptor."""
#print("Path at terminal when executing this file")
print(os.getcwd() + "\n")
#print(self.common_data_folder)
self.data_folder = data_folder
self.dataset_dir = Path.cwd() / data_folder
trainset, testset = self.get_data()
print("IO SONO" + "\n")
print(self.dataset_dir)
self.data_by_type = {
'train': self.dataset_dir / 'TRAIN',
'val': self.dataset_dir / 'TEST'
}
def get_shard_dataset_types(self) -> List[str]:
"""Get available shard dataset types."""
return list(self.data_by_type)
def get_dataset(self, dataset_type='train'):
"""Return a shard dataset by type."""
print("Path at terminal when executing this file")
print(os.getcwd() + "\n")
#os.chdir("/home/lmancuso/openfl/openfl-tutorials/interactive_api/OPENLAB/envoy")
if dataset_type not in self.data_by_type:
raise Exception(f'Wrong dataset type: {dataset_type}')
return MontgomeryShardDataset(
dataset_dir=self.data_by_type[dataset_type],
dataset_type=dataset_type,
)
@property
def sample_shape(self):
"""Return the sample shape info."""
return ['3', '512', '512']
@property
def target_shape(self):
"""Return the target shape info."""
return ['3', '512', '512']
@property
def dataset_description(self) -> str:
"""Return the dataset description."""
return (f'Montgomery dataset, shard number')
def get_data(self):
root_dir = "montgomery_data"
#train_set = ImageFolder(os.path.join(root_dir, "TRAIN"), transform=train_transform)
#test_set = ImageFolder(os.path.join(root_dir, "TEST"), transform=test_transform)
train_set = os.path.join(root_dir, "TRAIN")
test_set = os.path.join(root_dir, "TEST")
print('Montgomery data was loaded!')
return train_set, test_set
I am using the framework for Federated Learning developed by Intel, OpenFL.
As you can see I tried also to use ImageFolder because I think it can be useful in this case.
EDIT with the full traceback:
new_state[k] = pt.from_numpy(tensor_dict.pop(k)).to(device)
ERROR Collaborator failed with error: num_samples should be a positive integer value, but got num_samples=0: envoy.py:93
Traceback (most recent call last):
File "/home/lmancuso/openfl/openfl/component/envoy/envoy.py", line 91, in run
self._run_collaborator()
File "/home/lmancuso/openfl/openfl/component/envoy/envoy.py", line 164, in _run_collaborator
col.run()
File "/home/lmancuso/openfl/openfl/component/collaborator/collaborator.py", line 145, in run
self.do_task(task, round_number)
File "/home/lmancuso/openfl/openfl/component/collaborator/collaborator.py", line 259, in do_task
**kwargs)
File "/home/lmancuso/openfl/openfl/federated/task/task_runner.py", line 117, in collaborator_adapted_task
loader = self.data_loader.get_train_loader()
File "/tmp/ipykernel_8572/1777129341.py", line 35, in get_train_loader
File "/home/lmancuso/bruno/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 262, in __init__
sampler = RandomSampler(dataset, generator=generator) # type: ignore
File "/home/lmancuso/bruno/lib/python3.7/site-packages/torch/utils/data/sampler.py", line 104, in __init__
"value, but got num_samples={}".format(self.num_samples))
ValueError: num_samples should be a positive integer value, but got num_samples=0
INFO Send WaitExperiment request director_client.py:80
INFO WaitExperiment response has received director_client.py:82
| The problem is that the dataset is empty. The data path may be wrong, or preprocessing may be filtering everything out, ending up with no objects in the Dataset.
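In the shard dataset from the question there is also a likely concrete cause: glob('*.png') is non-recursive, so it finds nothing when the images live in the pos/neg class subfolders. A recursive glob (a sketch) would pick them up:
# non-recursive: misses montgomery_data/TRAIN/<class>/*.png
self.imgs_path = list(dataset_dir.glob('*.png'))

# recursive: also matches files inside the class subfolders
self.imgs_path = list(dataset_dir.rglob('*.png'))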
| https://stackoverflow.com/questions/71615089/ |
PytorchStreamReader failed reading zip archive: failed finding central directory | I am trying to learn PyTorch from a book, but it has not been a straight line for me.
I copied the code below and pasted it into my Jupyter notebook to run, but it gave me an error that I am not able to interpret at my level!
import torch
from torchvision import models
model = models.alexnet(pretrained=True)
# set the device
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(f'Device: {device}')
model.eval()
model.to(device)
y = model(batch.to(device))
print(y.shape)
The error is as below
---------------------------------------------------------------------------
RuntimeError Traceback (most recent call last)
<ipython-input-50-03488007067c> in <module>
1 from torchvision import models
----> 2 model = models.alexnet(pretrained=True)
3
4 # set the device
5 device = 'cuda' if torch.cuda.is_available() else 'cpu'
~\anaconda3\lib\site-packages\torchvision\models\alexnet.py in alexnet(pretrained, progress, **kwargs)
61 model = AlexNet(**kwargs)
62 if pretrained:
---> 63 state_dict = load_state_dict_from_url(model_urls['alexnet'],
64 progress=progress)
65 model.load_state_dict(state_dict)
~\anaconda3\lib\site-packages\torch\hub.py in load_state_dict_from_url(url, model_dir, map_location, progress, check_hash, file_name)
555 if _is_legacy_zip_format(cached_file):
556 return _legacy_zip_load(cached_file, model_dir, map_location)
--> 557 return torch.load(cached_file, map_location=map_location)
~\anaconda3\lib\site-packages\torch\serialization.py in load(f, map_location, pickle_module, **pickle_load_args)
598 # reset back to the original position.
599 orig_position = opened_file.tell()
--> 600 with _open_zipfile_reader(opened_file) as opened_zipfile:
601 if _is_torchscript_zip(opened_zipfile):
602 warnings.warn("'torch.load' received a zip file that looks like a TorchScript archive"
~\anaconda3\lib\site-packages\torch\serialization.py in __init__(self, name_or_buffer)
240 class _open_zipfile_reader(_opener):
241 def __init__(self, name_or_buffer) -> None:
--> 242 super(_open_zipfile_reader, self).__init__(torch._C.PyTorchFileReader(name_or_buffer))
243
244
RuntimeError: PytorchStreamReader failed reading zip archive: failed finding central directory
Can someone help me understand this, please?
Thank you.
| I think this issue happens when the file is not downloaded completely.
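A sketch of how to locate and clear the partially downloaded checkpoint so torchvision fetches it again:
import os
import torch

ckpt_dir = os.path.join(torch.hub.get_dir(), "checkpoints")
print(os.listdir(ckpt_dir))  # delete the alexnet-*.pth file here, then rerun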
| https://stackoverflow.com/questions/71617570/ |
Monai : RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 7 but got size 8 for tensor number 1 in the list | I am using Monai for the 3D Multilabel segmentation task. My input image size is 512x496x49 and my label size is 512x496x49. An Image can have 3 labels in one image. With transform, I have converted the image in size 1x512x512x49 and Label in 3x512x512x49
My Transform
# Setting tranform for train and test data
a_min=6732
a_max=18732
train_transform = Compose(
[
LoadImaged(keys=["image", "label"]),
EnsureChannelFirstd(keys="image"),
ConvertToMultiChannelBasedOnBratsClassesd(keys="label"),
ScaleIntensityRanged(keys='image', a_min=a_min, a_max=a_max, b_min=0.0, b_max=1.0, clip=False),
Orientationd(keys=["image", "label"], axcodes="RAS"),
# Spacingd(keys=["image", "label"], pixdim=(
# 1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
RandFlipd(keys=["image", "label"], prob=0.5, spatial_axis=0),
RandFlipd(keys=["image", "label"], prob=0.5, spatial_axis=1),
RandFlipd(keys=["image", "label"], prob=0.5, spatial_axis=2),
CropForegroundd(keys=["image", "label"], source_key="image"),
NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True),
SpatialPadd(keys=['image', 'label'], spatial_size= [512, 512, 49]),# it will result in 512x512x49
EnsureTyped(keys=["image", "label"]),
]
)
val_transform = Compose(
[
LoadImaged(keys=["image", "label"]),
EnsureChannelFirstd(keys="image"),
ConvertToMultiChannelBasedOnBratsClassesd(keys="label"),
ScaleIntensityRanged(keys='image', a_min=a_min, a_max=a_max, b_min=0.0, b_max=1.0, clip=False),
Orientationd(keys=["image", "label"], axcodes="RAS"),
# Spacingd(keys=["image", "label"], pixdim=(
# 1.5, 1.5, 2.0), mode=("bilinear", "nearest")),
CropForegroundd(keys=["image", "label"], source_key="image"),
NormalizeIntensityd(keys="image", nonzero=True, channel_wise=True),
SpatialPadd(keys=['image', 'label'], spatial_size= [512, 512, 49]),# it will result in 512x512x49
EnsureTyped(keys=["image", "label"]),
]
)
Dataloader for training and val
train_ds = CacheDataset(data=train_files, transform=train_transform,cache_rate=1.0, num_workers=4)
train_loader = DataLoader(train_ds, batch_size=2, shuffle=True, num_workers=4,collate_fn=pad_list_data_collate)
val_ds = CacheDataset(data=val_files, transform=val_transform, cache_rate=1.0, num_workers=4)
val_loader = DataLoader(val_ds, batch_size=1, num_workers=4)
3D U-Net Network from Monai
# standard PyTorch program style: create UNet, DiceLoss and Adam optimizer
device = torch.device("cuda:0")
model = UNet(
spatial_dims=3,
in_channels=1,
out_channels=4,
channels=(16, 32, 64, 128, 256),
strides=(2, 2, 2, 2),
num_res_units=2,
norm=Norm.BATCH,
).to(device)
loss_function = DiceLoss(to_onehot_y=True, sigmoid=True)
optimizer = torch.optim.Adam(model.parameters(), 1e-4)
dice_metric = DiceMetric(include_background=True, reduction="mean")
Training
max_epochs = 5
val_interval = 2
best_metric = -1
best_metric_epoch = -1
epoch_loss_values = []
metric_values = []
post_pred = Compose([EnsureType(), AsDiscrete(argmax=True, to_onehot=4)])
post_label = Compose([EnsureType(), AsDiscrete(to_onehot=4)])
for epoch in range(max_epochs):
print("-" * 10)
print(f"epoch {epoch + 1}/{max_epochs}")
model.train()
epoch_loss = 0
step = 0
for batch_data in train_loader:
step += 1
inputs, labels = (
batch_data["image"].to(device),
batch_data["label"].to(device),
)
optimizer.zero_grad()
print("Size of inputs :", inputs.shape)
print("Size of inputs[0] :", inputs[0].shape)
# print("Size of inputs[1] :", inputs[1].shape)
# print("printing of inputs :", inputs)
outputs = model(inputs)
loss = loss_function(outputs, labels)
loss.backward()
optimizer.step()
epoch_loss += loss.item()
print(
f"{step}/{len(train_ds) // train_loader.batch_size}, "
f"train_loss: {loss.item():.4f}")
epoch_loss /= step
epoch_loss_values.append(epoch_loss)
print(f"epoch {epoch + 1} average loss: {epoch_loss:.4f}")
if (epoch + 1) % val_interval == 0:
model.eval()
with torch.no_grad():
for val_data in val_loader:
val_inputs, val_labels = (
val_data["image"].to(device),
val_data["label"].to(device),
)
roi_size = (160, 160, 160)
sw_batch_size = 4
val_outputs = sliding_window_inference(
val_inputs, roi_size, sw_batch_size, model)
val_outputs = [post_pred(i) for i in decollate_batch(val_outputs)]
val_labels = [post_label(i) for i in decollate_batch(val_labels)]
# compute metric for current iteration
dice_metric(y_pred=val_outputs, y=val_labels)
# aggregate the final mean dice result
metric = dice_metric.aggregate().item()
# reset the status for next validation round
dice_metric.reset()
metric_values.append(metric)
if metric > best_metric:
best_metric = metric
best_metric_epoch = epoch + 1
torch.save(model.state_dict(), os.path.join(
root_dir, "best_metric_model.pth"))
print("saved new best metric model")
print(
f"current epoch: {epoch + 1} current mean dice: {metric:.4f}"
f"\nbest mean dice: {best_metric:.4f} "
f"at epoch: {best_metric_epoch}"
)
While training I am getting this error
RuntimeError: Sizes of tensors must match except in dimension 1. Expected size 7 but got size 8 for tensor number 1 in the list.
I followed the MONAI 3D segmentation tutorial, but it only covers 2 classes (including background), so I followed the discussion at https://github.com/Project-MONAI/MONAI/issues/415. However, even though I changed what was recommended in that discussion, I am still getting errors while training.
| Your images have a depth of 49, but due to the 4 downsampling steps, each with stride 2, your images need to be divisible by a factor of 2**4=16. Adding in DivisiblePadd(["image", "label"], 16) should solve it.
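A sketch of the relevant tail of the transform pipeline (add DivisiblePadd to both train_transform and val_transform; with k=16 the depth of 49 is padded up to 64):
from monai.transforms import Compose, SpatialPadd, DivisiblePadd, EnsureTyped

tail = Compose([
    SpatialPadd(keys=["image", "label"], spatial_size=[512, 512, 49]),
    DivisiblePadd(keys=["image", "label"], k=16),  # pads 49 -> 64 along depth
    EnsureTyped(keys=["image", "label"]),
])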
| https://stackoverflow.com/questions/71618942/ |
Deep Smote error : RuntimeError: mat1 and mat2 shapes cannot be multiplied (51200x1 and 512x300) | I am trying to run DeepSMOTE on CIFAR-10 and I don't have much experience with PyTorch, as I code in TensorFlow. It works fine when I run it on MNIST and FMNIST, keeping channels = 1 there.
However, the moment I try it on CIFAR-10, it doesn't behave well.
The code given in the paper says that it works for CIFAR-10 too.
All the help is appreciated
Here is the link to the source code of the paper
https://github.com/dd1github/DeepSMOTE
The source code is in TensorFlow; can someone please help me here?
RuntimeError Traceback (most recent call last)
C:\Users\RESEAR~1\AppData\Local\Temp/ipykernel_24844/1514724550.py in <module>
93
94 # run images
---> 95 z_hat = encoder(images)
96
97 x_hat = decoder(z_hat) #decoder outputs tanh
~\.conda\envs\pytorch\lib\site-packages\torch\nn\modules\module.py in _call_impl(self, *input, **kwargs)
1100 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1101 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1102 return forward_call(*input, **kwargs)
1103 # Do not call functions when jit is used
1104 full_backward_hooks, non_full_backward_hooks = [], []
~\.conda\envs\pytorch\lib\site-packages\torch\nn\functional.py in linear(input, weight, bias)
1846 if has_torch_function_variadic(input, weight, bias):
1847 return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias)
-> 1848 return torch._C._nn.linear(input, weight, bias)
1849
1850
RuntimeError: mat1 and mat2 shapes cannot be multiplied (51200x1 and 512x300)
Here's the code :
## create encoder model and decoder model
class Encoder(nn.Module):
def __init__(self, args):
super(Encoder, self).__init__()
self.n_channel = args['n_channel']
self.dim_h = args['dim_h']
self.n_z = args['n_z']
# convolutional filters, work excellent with image data
self.conv = nn.Sequential(
nn.Conv2d(self.n_channel, self.dim_h, 4, 2, 1, bias=False),
#nn.ReLU(True),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(self.dim_h, self.dim_h * 2, 4, 2, 1, bias=False),
nn.BatchNorm2d(self.dim_h * 2),
#nn.ReLU(True),
nn.LeakyReLU(0.2, inplace=True),
nn.Conv2d(self.dim_h * 2, self.dim_h * 4, 4, 2, 1, bias=False),
nn.BatchNorm2d(self.dim_h * 4),
#nn.ReLU(True),
nn.LeakyReLU(0.2, inplace=True),
# nn.Conv2d(self.dim_h * 4, self.dim_h * 8, 4, 2, 1, bias=False),
#3d and 32 by 32
nn.Conv2d(self.dim_h * 4, self.dim_h * 8, 4, 1, 0, bias=False),
nn.BatchNorm2d(self.dim_h * 8), # 40 X 8 = 320
#nn.ReLU(True),
nn.LeakyReLU(0.2, inplace=True) )#,
#nn.Conv2d(self.dim_h * 8, 1, 2, 1, 0, bias=False))
#nn.Conv2d(self.dim_h * 8, 1, 4, 1, 0, bias=False))
# final layer is fully connected
print("linearer >>>>>>>> ",self.dim_h * (2 ** 3), self.n_z)
self.fc = nn.Linear(self.dim_h * (2 ** 3), self.n_z)
print("leeeeeeeeeee ")
def forward(self, x):
#print('enc')
#print('input ',x.size()) #torch.Size([100, 3,32,32])
x = self.conv(x)
# x = x.squeeze()
# print('aft squeeze ',x.size()) #torch.Size([128, 320])
#aft squeeze torch.Size([100, 320])
x = self.fc(x)
#print('out ',x.size()) #torch.Size([128, 20])
#out torch.Size([100, 300])
return x
class Decoder(nn.Module):
def __init__(self, args):
super(Decoder, self).__init__()
self.n_channel = args['n_channel']
self.dim_h = args['dim_h']
self.n_z = args['n_z']
# first layer is fully connected
self.fc = nn.Sequential(
nn.Linear(self.n_z, self.dim_h * 8 * 7 * 7),
nn.ReLU())
# deconvolutional filters, essentially inverse of convolutional filters
self.deconv = nn.Sequential(
nn.ConvTranspose2d(self.dim_h * 8, self.dim_h * 4, 4),
nn.BatchNorm2d(self.dim_h * 4),
nn.ReLU(True),
nn.ConvTranspose2d(self.dim_h * 4, self.dim_h * 2, 4),
nn.BatchNorm2d(self.dim_h * 2),
nn.ReLU(True),
nn.ConvTranspose2d(self.dim_h * 2, 1, 4, stride=2),
#nn.Sigmoid())
nn.Tanh())
def forward(self, x):
#print('dec')
#print('input ',x.size())
x = self.fc(x)
x = x.view(-1, self.dim_h * 8, 7, 7)
x = self.deconv(x)
return x
..........
#NOTE: Download the training ('.../0_trn_img.txt') and label files
# ('.../0_trn_lab.txt'). Place the files in directories (e.g., ../MNIST/trn_img/
# and /MNIST/trn_lab/). Originally, when the code was written, it was for 5 fold
#cross validation and hence there were 5 files in each of the
#directories. Here, for illustration, we use only 1 training and 1 label
#file (e.g., '.../0_trn_img.txt' and '.../0_trn_lab.txt').
path = "C:/Users/antpc/Documents/saqib_smote/fmnist/"
path = "C:/Users/Research6/Desktop/smote experimentation/mnist/"
dtrnimg = (path+'/CBL_images')
dtrnlab = (path+'/CBL_labels')
ids = os.listdir(dtrnimg)
idtri_f = [os.path.join(dtrnimg, image_id) for image_id in ids]
print(idtri_f)
ids = os.listdir(dtrnlab)
idtrl_f = [os.path.join(dtrnlab, image_id) for image_id in ids]
print(idtrl_f)
#for i in range(5):
for i in range(len(ids)):
print()
print(i)
encoder = Encoder(args)
decoder = Decoder(args)
device = 'cuda' if torch.cuda.is_available() else 'cpu'
print(device)
decoder = decoder.to(device)
encoder = encoder.to(device)
train_on_gpu = torch.cuda.is_available()
#decoder loss function
criterion = nn.MSELoss()
criterion = criterion.to(device)
trnimgfile = idtri_f[i]
trnlabfile = idtrl_f[i]
print(trnimgfile)
print(trnlabfile)
dec_x = np.loadtxt(trnimgfile)
dec_y = np.loadtxt(trnlabfile)
print('train imgs before reshape ',dec_x.shape)
print('train labels ',dec_y.shape)
print(collections.Counter(dec_y))
# dec_x = dec_x.reshape(shape)
# dec_x = dec_x.permute(0, 4 1, 2, 3)
# dec_x = dec_x.reshape(shape[0],shape[3],shape[1],shape[2])
print("shape >>>>>>>>>>>>>> ",)
dec_x = dec_x.reshape(shape[0],3,32,32)
print('train imgs after reshape ', dec_x.shape)

batch_size = 100
num_workers = 0

# torch.Tensor returns float; to get a long tensor, use torch.tensor
tensor_x = torch.Tensor(dec_x)
tensor_y = torch.tensor(dec_y, dtype=torch.long)
mnist_bal = TensorDataset(tensor_x, tensor_y)
train_loader = torch.utils.data.DataLoader(
    mnist_bal, batch_size=batch_size, shuffle=True, num_workers=num_workers)

best_loss = np.inf
t0 = time.time()

if args['train']:
    enc_optim = torch.optim.Adam(encoder.parameters(), lr=args['lr'])
    dec_optim = torch.optim.Adam(decoder.parameters(), lr=args['lr'])

    for epoch in range(args['epochs']):
        train_loss = 0.0
        tmse_loss = 0.0
        tdiscr_loss = 0.0

        # train for one epoch -- set nets to train mode
        encoder.train()
        decoder.train()

        for images, labs in train_loader:
            # zero gradients for each batch
            encoder.zero_grad()
            decoder.zero_grad()

            images, labs = images.to(device), labs.to(device)
            labsn = labs.detach().cpu().numpy()

            # run images through the autoencoder
            z_hat = encoder(images)
            x_hat = decoder(z_hat)  # decoder outputs tanh
            mse = criterion(x_hat, images)

            resx = []
            resy = []

            # pick a random class and sample up to 100 of its images
            tc = np.random.choice(10, 1)
            xbeg = dec_x[dec_y == tc]
            ybeg = dec_y[dec_y == tc]
            xlen = len(xbeg)
            nsamp = min(xlen, 100)
            ind = np.random.choice(list(range(len(xbeg))), nsamp, replace=False)
            xclass = xbeg[ind]
            yclass = ybeg[ind]

            # pair each sampled image with its cyclic neighbour
            xclen = len(xclass)
            xcminus = np.arange(1, xclen)
            xcplus = np.append(xcminus, 0)
            xcnew = xclass[[xcplus], :]
            xcnew = xcnew.reshape(xcnew.shape[1], xcnew.shape[2], xcnew.shape[3], xcnew.shape[4])
            xcnew = torch.Tensor(xcnew)
            xcnew = xcnew.to(device)

            # encode xclass to feature space
            xclass = torch.Tensor(xclass)
            xclass = xclass.to(device)
            xclass = encoder(xclass)
            xclass = xclass.detach().cpu().numpy()

            xc_enc = xclass[[xcplus], :]
            xc_enc = np.squeeze(xc_enc)
            xc_enc = torch.Tensor(xc_enc)
            xc_enc = xc_enc.to(device)

            ximg = decoder(xc_enc)
            mse2 = criterion(ximg, xcnew)

            comb_loss = mse2 + mse
            comb_loss.backward()
            enc_optim.step()
            dec_optim.step()

            train_loss += comb_loss.item() * images.size(0)
            tmse_loss += mse.item() * images.size(0)
            tdiscr_loss += mse2.item() * images.size(0)

        # print avg training statistics; the sums above are weighted by batch
        # size, so normalise by the number of samples, not the number of batches
        train_loss = train_loss / len(train_loader.dataset)
        tmse_loss = tmse_loss / len(train_loader.dataset)
        tdiscr_loss = tdiscr_loss / len(train_loader.dataset)
        print('Epoch: {} \tTrain Loss: {:.6f} \tmse loss: {:.6f} \tmse2 loss: {:.6f}'.format(
            epoch, train_loss, tmse_loss, tdiscr_loss))

        # store the best encoder and decoder models
        # (here, /crs5 refers to 5-way cross validation, but is not
        # necessary for illustration purposes)
        if train_loss < best_loss:
            print('Saving..')
            path_enc = path + '\\bst_enc.pth'
            path_dec = path + '\\bst_dec.pth'
            torch.save(encoder.state_dict(), path_enc)
            torch.save(decoder.state_dict(), path_dec)
            best_loss = train_loss

    # in addition, store the final model (may not be the best) for
    # informational purposes
    path_enc = path + '\\f_enc.pth'
    path_dec = path + '\\f_dec.pth'
    print(path_enc)
    print(path_dec)
    torch.save(encoder.state_dict(), path_enc)
    torch.save(decoder.state_dict(), path_dec)
    print()

t1 = time.time()
print('total time(min): {:.2f}'.format((t1 - t0) / 60))

t4 = time.time()
print('final time(min): {:.2f}'.format((t4 - t3) / 60))
| I had the same issue in PyTorch. Can you try uncommenting the following line inside the decoder's nn.Sequential()?
# 3d and 32 by 32
nn.Conv2d(self.dim_h * 4, self.dim_h * 8, 4, 1, 0, bias=False)
| https://stackoverflow.com/questions/71626362/ |
Convert list of PNGImageFile to array of array | I have a dataset organized in this way: /dataset/train/class/images.png (the same structure for test), and I have 2 classes, positive and negative.
I want to obtain x_train, y_train, x_test and y_test, so I am using this Python script:
x_train = []
y_train = []
x_test = []
y_test = []

base_dir_train = 'Montgomery_real_splitted/TRAIN/'
base_dir_test = 'Montgomery_real_splitted/TEST/'

for f in sorted(os.listdir(base_dir_train)):
    if os.path.isdir(base_dir_train + f):
        print(f"{f} is a target class")
        for i in sorted(os.listdir(base_dir_train + f)):
            y_train.append(f)
            im = Image.open(base_dir_train + f + '/' + i)
            x_train.append(im)

for f in sorted(os.listdir(base_dir_test)):
    if os.path.isdir(base_dir_test + f):
        print(f"{f} is a target class")
        for i in sorted(os.listdir(base_dir_test + f)):
            y_test.append(f)
            imt = Image.open(base_dir_test + f + '/' + i)
            x_test.append(imt)

y_train = np.array(y_train)
y_test = np.array(y_test)
Basically I obtain what I want; for example, x_train is this:
[<PIL.PngImagePlugin.PngImageFile image mode=L size=4892x4020 at 0x10A98B280>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=4020x4892 at 0x10A98B040>,
...
<PIL.PngImagePlugin.PngImageFile image mode=L size=4020x4892 at 0x11BA5D940>,
<PIL.PngImagePlugin.PngImageFile image mode=L size=4020x4892 at 0x11BA5D9A0>]
And y_train is:
array(['neg', 'neg', 'neg', 'neg', 'neg', 'neg', 'neg', 'neg', 'neg',
...
'pos', 'pos', 'pos', 'pos', 'pos', 'pos', 'pos', 'pos', 'pos'],
dtype='<U3')
However, I want x_train to be in this format:
array([[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]],
...
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]], dtype=uint8)
How can I convert it?
EDIT for @Pyaive Oleg: if I do im = np.array(im), the result is the following, which is different from what I want. The same happens with the tensor:
[array([[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
...,
[ 5, 6, 7, ..., 14, 9, 5],
[ 4, 5, 6, ..., 12, 8, 4],
[ 0, 1, 2, ..., 3, 2, 0]], dtype=uint8),
array([[ 1, 1, 1, ..., 8, 246, 0],
[ 1, 1, 1, ..., 0, 7, 11],
[ 1, 1, 1, ..., 0, 0, 6],
...,
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
...
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
array([[ 0, 0, 0, ..., 0, 255, 1],
[ 0, 0, 0, ..., 0, 3, 11],
[ 0, 0, 0, ..., 2, 0, 7],
...,
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
array([[ 1, 1, 1, ..., 19, 246, 0],
[ 1, 1, 1, ..., 0, 16, 0],
[ 1, 1, 1, ..., 2, 0, 12],
...,
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0],
[ 0, 0, 0, ..., 0, 0, 0]], dtype=uint8),
| Before appending imt to x_train, convert it to a NumPy array:
imt = np.array(imt)
The following can also help:
from torchvision import transforms
imt = transforms.ToTensor()(imt)
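Note that calling np.array on a list of differently-sized arrays will not give you the single 3-D uint8 array shown in the question; for that, every image must share one shape. A minimal sketch (the 256x256 target size is an assumption -- pick whatever resolution suits your model):
import numpy as np

x_train_resized = []
for im in x_train:
    im = im.resize((256, 256))            # all images must share one shape
    x_train_resized.append(np.array(im))
x_train = np.stack(x_train_resized)       # shape (N, 256, 256), dtype=uint8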
| https://stackoverflow.com/questions/71626985/ |
ResNet doesn't train because of differences in images' sizes | So I have 30 folders with images inside them, and I wanted to train ResNet50 on them. I created a CustomDataset, and inside it I put a Resize(224, 224) so that every image has the same size.
Here's what I did:
class CustomImageDataset(Dataset):
    def __init__(self, annotations_file, img_dir, transform=None, target_transform=None):
        self.img_labels = pd.read_csv(annotations_file, sep=';')
        self.img_dir = img_dir
        self.transform = transform
        self.target_transform = target_transform

    def __len__(self):
        return len(self.img_labels)

    def __getitem__(self, idx):
        img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0][0:9] + '.tar', self.img_labels.iloc[idx, 0])
        image = read_image(img_path)
        transf = transforms.Resize((224, 224))
        image = transf(image)
        label = self.img_labels.iloc[idx, 1]
        if self.transform:
            image = self.transform(image)
        if self.target_transform:
            label = self.target_transform(label)
        return image, label
The dataset works. However, when the network tries to create a batch, at entry 271 (which I don't know how to plot in order to see the image) it raises this error:
Traceback (most recent call last):
File "final.py", line 235, in <module>
num_epochs = epochs, is_inception=(model_name== 'inception'))
File "final.py", line 111, in train_model
for input, labels in dataloaders[phase]:
File "/home/fdalligna/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 530, in __next__
data = self._next_data()
File "/home/fdalligna/.local/lib/python3.7/site-packages/torch/utils/data/dataloader.py", line 570, in _next_data
data = self._dataset_fetcher.fetch(index) # may raise StopIteration
File "/home/fdalligna/.local/lib/python3.7/site-packages/torch/utils/data/_utils/fetch.py", line 52, in fetch
return self.collate_fn(data)
File "/home/fdalligna/.local/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 172, in default_collate
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "/home/fdalligna/.local/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 172, in <listcomp>
return [default_collate(samples) for samples in transposed] # Backwards compatibility.
File "/home/fdalligna/.local/lib/python3.7/site-packages/torch/utils/data/_utils/collate.py", line 138, in default_collate
return torch.stack(batch, 0, out=out)
RuntimeError: stack expects each tensor to be equal size, but got [3, 224, 224] at entry 0 and [1, 224, 224] at entry 271
Does anyone by chance know how I can make all images of size [3, 224, 224]?
Thank you :)
| As noted in the comments, the error suggests that your dataset contains both gray scale and RGB (color) images. Although all images have indeed been resized to 224 pixels, color images have 3 channels, whereas gray scale images only have a single channel, so a batch cannot be created.
If you insist on training a network on this mixed dataset, you can either
1. turn color images into gray scale, or
2. modify gray scale images to have 3 channels to mimic RGB.
From the point of view of training a neural network, the first option makes more sense. It can be achieved by averaging a color image across the RGB channels.
def __getitem__(self, idx):
    # copied
    img_path = os.path.join(self.img_dir, self.img_labels.iloc[idx, 0][0:9] + '.tar', self.img_labels.iloc[idx, 0])
    image = read_image(img_path)
    transf = transforms.Resize((224, 224))
    image = transf(image)
    label = self.img_labels.iloc[idx, 1]
    if self.transform:
        image = self.transform(image)
    if self.target_transform:
        label = self.target_transform(label)

    # check if color image
    if image.size(0) == 3:
        # average across channels
        image = image.mean(dim=0).unsqueeze(0)

    return image, label
You should make sure that the input layer of the network expects mono-channel images.
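As a minimal sketch of that adjustment (assuming the stock torchvision ResNet-50; note that the replaced stem loses its pretrained weights):
import torch.nn as nn
from torchvision import models

model = models.resnet50(pretrained=True)
# swap the 3-channel stem for a 1-channel one; all other layers stay intact
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)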
If you want to choose option 2 instead, you can do
# check if gray scale image
if image.size(0) == 1:
    # repeat the single channel three times
    image = image.repeat(3, 1, 1)
return image, label
In this case, the model would expect three input channels, i.e., nn.Conv2d(in_channels=3, out_channels, kernel_size, ...).
| https://stackoverflow.com/questions/71628697/ |
How to simplify function for standardising images? | I have a function to calculate the mean and standard deviation of my dataset.
Is there a simpler way to do this? It takes a while to compute.
def get_mean_std(loader):
    sum = 0
    sum_sq_err = 0
    for data, _ in loader:
        sum += torch.mean(data, dim=[0, 2, 3])
        sum_sq_err += torch.mean(data ** 2, dim=[0, 2, 3])
    mean = sum / len(loader)
    std = (sum_sq_err / len(loader) - mean ** 2) ** 0.5
    return mean, std
| Note that this approach is not even correct in general: the mean of a set is not the mean of the means of its subsets unless all subsets have the same length, which may or may not be the case here (the last batch can be smaller).
Provided that every batch is of the same size, what I would do is call torch.sum inside the loop, but rather than accumulating the results immediately, append them to a list and reduce them afterwards via a single torch.sum plus a division. Note that torch.sum implements a highly non-trivial algorithm that is in general more precise than the naïve iterative sum. A sketch of this idea follows.
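A minimal sketch (tracking the exact pixel count is my addition, so the result stays correct even if the last batch is smaller):
import torch

def get_mean_std(loader):
    channel_sums, channel_sq_sums = [], []
    n_pixels = 0
    for data, _ in loader:  # data has shape (B, C, H, W)
        channel_sums.append(torch.sum(data, dim=[0, 2, 3]))
        channel_sq_sums.append(torch.sum(data ** 2, dim=[0, 2, 3]))
        n_pixels += data.size(0) * data.size(2) * data.size(3)
    # reduce once at the end for better numerical precision
    mean = torch.sum(torch.stack(channel_sums), dim=0) / n_pixels
    sq_mean = torch.sum(torch.stack(channel_sq_sums), dim=0) / n_pixels
    std = (sq_mean - mean ** 2) ** 0.5
    return mean, std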
| https://stackoverflow.com/questions/71631292/ |
how to package my python pytorch program for android? | My code is Python/PyTorch.
I built it into an .exe, and it works on Windows.
How can I package the code for Android?
I would like it to run on Android.
Thank you.
| Sorry, I don't know how to call an .exe program on Android, but I can give you some other advice. Based on your needs, you may want to look into Chaquopy.
Chaquopy provides everything you need to include Python components in an Android app, including:
(1)Full integration with Android Studio’s standard Gradle build system.
(2)Simple APIs for calling Python code from Java/Kotlin, and vice versa.
(3)A wide range of third-party Python packages, including SciPy, OpenCV, TensorFlow and many more.
With Chaquopy, you can run Python programs directly on Android. I have previously run sklearn programs on Android this way. If you have any questions, feel free to ask.
Chaquopy official website: https://chaquo.com/chaquopy/
| https://stackoverflow.com/questions/71634616/ |
Sagemaker inference : how to load model | I have trained a BERT model on SageMaker and now I want to get it ready for making predictions, i.e., inference.
I used PyTorch to train the model, and the model is saved to an S3 bucket after training.
Here is the structure inside the model.tar.gz file that is present in the S3 bucket.
Now, I do not understand how I can make predictions with it. I have tried to follow many guides but still could not understand.
Here is something I have tried:
inference_image_uri = sagemaker.image_uris.retrieve(
    framework='pytorch',
    version='1.7.1',
    instance_type=inference_instance_type,
    region=aws_region,
    py_version='py3',
    image_scope='inference'
)

sm.create_model(
    ModelName=model_name,
    ExecutionRoleArn=role,
    PrimaryContainer={
        'ModelDataUrl': model_s3_dir,
        'Image': inference_image_uri
    }
)

sm.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[
        {
            "VariantName": "variant1",  # The name of the production variant.
            "ModelName": model_name,
            "InstanceType": inference_instance_type,  # Specify the compute instance type.
            "InitialInstanceCount": 1  # Number of instances to launch initially.
        }
    ]
)

sm.create_endpoint(
    EndpointName=endpoint_name,
    EndpointConfigName=endpoint_config_name
)

from sagemaker.predictor import Predictor
from sagemaker.serializers import JSONLinesSerializer
from sagemaker.deserializers import JSONLinesDeserializer

inputs = [
    {"inputs": ["I have a question [EOT] Hey Manish Mittal ! I'm OneAssist bot. I'm here to answer your queries. [SEP] thanks"]},
    # {"features": ["OK, but not great."]},
    # {"features": ["This is not the right product."]},
]

predictor = Predictor(
    endpoint_name=endpoint_name,
    serializer=JSONLinesSerializer(),
    deserializer=JSONLinesDeserializer(),
    sagemaker_session=sess
)

predicted_classes = predictor.predict(inputs)

for predicted_class in predicted_classes:
    print("Predicted class {} with probability {}".format(predicted_class['predicted_label'], predicted_class['probability']))
I can see the endpoint created but while predicting, its giving me error:
ModelError: An error occurred (ModelError) when calling the
InvokeEndpoint operation: Received server error (0) from primary with
message "Your invocation timed out while waiting for a response from
container primary. Review the latency metrics for each container in
Amazon CloudWatch, resolve the issue, and try again."
I do not understand how to make this work. Also, do I need to provide an entry script for inference, and if so, where?
| Here's detailed documentation on deploying PyTorch models - https://sagemaker.readthedocs.io/en/stable/frameworks/pytorch/using_pytorch.html#deploy-pytorch-models
If you're using the default model_fn provided by the estimator, you'll need to have the model saved as model.pt.
To write your own inference script and deploy the model, see the section on Bring your own model. The pytorch_model.deploy function will deploy it to a real-time endpoint, and then you can use the predictor.predict function on the resulting endpoint variable.
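For illustration, here is a minimal sketch of that flow. The file name model.pt inside the archive and the inference.py entry point are assumptions -- adjust them to whatever your model.tar.gz actually contains:
# inference.py -- loaded by the SageMaker PyTorch serving container
import os
import torch

def model_fn(model_dir):
    # called once at startup to deserialize the model
    model = torch.load(os.path.join(model_dir, 'model.pt'), map_location='cpu')
    model.eval()
    return model
The model can then be deployed with the Python SDK instead of the low-level boto3 calls:
from sagemaker.pytorch import PyTorchModel

pytorch_model = PyTorchModel(
    model_data=model_s3_dir,        # s3://.../model.tar.gz
    role=role,
    framework_version='1.7.1',
    py_version='py3',
    entry_point='inference.py',
)
predictor = pytorch_model.deploy(
    initial_instance_count=1,
    instance_type=inference_instance_type,
)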
| https://stackoverflow.com/questions/71637112/ |