Training with my own data #48
Often we require that the image width and height be divisible by 64. Here 1024/64 = 16 is fine, but 542/64 ≈ 8.5. So I suggest that you crop the top border of the image to make the height 8*64 = 512. Also, do not forget to change the intrinsic parameters (c_y = c_y - offset_y).
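The crop-plus-intrinsics step above can be sketched in plain Python; `crop_top` is a hypothetical helper (not from this repo), with `image_rows` standing in for the image's rows and `cy` for the intrinsic principal-point y:

```python
def crop_top(image_rows, cy, target_height):
    # Remove rows from the top so the height becomes target_height,
    # then shift the principal point by the same offset (c_y = c_y - offset_y).
    offset_y = len(image_rows) - target_height
    return image_rows[offset_y:], cy - offset_y
```

For a 542-row image cropped to 512, offset_y is 30, so a principal point of 271.0 becomes 241.0.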
Thank you, it works! But there is a new issue now, a problem with nn.DataParallel: RuntimeError: Expected tensor for argument #1 'input' to have the same device as tensor for argument #2 'weight'; but device 1 does not equal 0 (while checking arguments for cudnn_convolution). When I use only one card everything is fine, but it's not feasible to train on a single 2070 Super.
This issue occurs at the decoder stage, in every network (depth and pose nets).
I suggest that you train the model on a single GPU, because the batch size of 4 is small. You can also downsample your images to 1/2 resolution, i.e., 256x512. If you want to try multi-GPU training, I suggest replacing the DepthDecoder with the following parallel version: class DepthDecoder_parallel(nn.Module):
and replacing the PoseDecoder with: class PoseDecoder_Parallel(nn.Module):
I think it is an OrderedDict issue.
I have trouble preparing my own data. Can you show me your code for preparing your own data? Thanks!
The image resolution should be divisible by 32, so you can change the resolution to 512x1024.
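The divisibility constraint can be enforced with a one-liner; `round_down_to_multiple` is a hypothetical helper shown only to illustrate the arithmetic:

```python
def round_down_to_multiple(x, base=32):
    # Largest multiple of `base` not exceeding x, e.g. 542 -> 512 for base=32.
    return (x // base) * base
```

Applying it to this thread's 542x1024 images gives 512x1024, matching the suggestion above.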
Hello, thanks for this awesome project. I have a strange issue. I prepared my own dataset with 542x1024 images, and when training starts I always get:
N/A% (0 of 200) | | Elapsed Time: 0:00:00 ETA: --:--:--
N/A% (0 of 946) | | Elapsed Time: 0:00:00 ETA: --:--:--
[torch.Size([2, 256, 34, 64]), torch.Size([2, 256, 34, 64])]
[torch.Size([2, 128, 68, 128]), torch.Size([2, 128, 68, 128])]
[torch.Size([2, 64, 136, 256]), torch.Size([2, 64, 136, 256])]
[torch.Size([2, 32, 272, 512]), torch.Size([2, 64, 271, 512])]
Dimension error at torch.cat(x, 1).
Maybe it's a stride or padding issue; please help me.
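The off-by-one in the printed shapes (271 vs 272) follows directly from the 542-pixel height. Assuming a ResNet-style encoder (stride-2, 3x3, padding-1 convolutions at each downsampling stage, which matches the printed feature sizes), the heights become 271, 136, 68, 34; the decoder's x2 upsampling path then produces 34 -> 68 -> 136 -> 272, which cannot be concatenated with the 271-high skip feature. A small arithmetic check:

```python
def conv_out(h, k=3, s=2, p=1):
    # Output size of one stride-2 conv: floor((h + 2p - k) / s) + 1.
    return (h + 2 * p - k) // s + 1

def encoder_heights(h, levels=4):
    # Feature heights after each downsampling stage of the encoder.
    sizes = []
    for _ in range(levels):
        h = conv_out(h)
        sizes.append(h)
    return sizes

# 542 -> [271, 136, 68, 34]; upsampling 34 by x2 three times gives 272, not 271,
# hence the torch.cat failure. 512 -> [256, 128, 64, 32] round-trips cleanly,
# which is why cropping the height to 512 fixes the error.
```

This is why the "divisible by 32 (or 64)" rule in the earlier comments matters: only then does every upsampled feature map exactly match its skip connection.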