Hello,

Thank you for your excellent open-source work!
I’ve encountered a potential issue with the PSNR calculation under newer versions of PyTorch, which I believe is caused by applying the in-place resize_ to non-contiguous tensors.
Specifically, slice operations are performed before criterion_psnr is called, so the tensors it receives are non-contiguous views.
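For illustration only (the variable names, shapes, and crop size below are hypothetical, not the exact code in this repo), a spatial crop like the following produces non-contiguous views:

import torch

# Hypothetical stand-ins for the network output and ground truth.
output = torch.rand(1, 31, 512, 512)
target = torch.rand(1, 31, 512, 512)

# Cropping the spatial borders keeps the strides of the full-size tensors,
# so the resulting views are non-contiguous.
out_crop = output[:, :, 128:-128, 128:-128]
tgt_crop = target[:, :, 128:-128, 128:-128]
assert not out_crop.is_contiguous() and not tgt_crop.is_contiguous()
# loss_psnr = criterion_psnr(out_crop, tgt_crop)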
Later, in the Loss_PSNR class, we use in-place operations with resize_:
Itrue = im_true.clamp(0., 1.).mul_(data_range).resize_(N, C * H * W)
Ifake = im_fake.clamp(0., 1.).mul_(data_range).resize_(N, C * H * W)
However, torch.Tensor.resize_ ignores the tensor's strides and reinterprets its storage as C-contiguous, so on a non-contiguous tensor it can read unintended elements. Please see the PyTorch 1.7.0 documentation for resize_:
Warning
This is a low-level method. The storage is reinterpreted as C-contiguous, ignoring the current strides (unless the target size equals the current size, in which case the tensor is left unchanged). For most purposes, you will instead want to use view(), which checks for contiguity, or reshape(), which copies data if needed. To change the size in-place with custom strides, see set_().
Therefore, the current implementation implicitly relies on the clamp operation returning a contiguous output, which does not appear to be guaranteed by PyTorch's documentation.
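In current releases an element-wise op such as clamp happens to allocate a fresh, dense output for a sliced input, which is why the code still works today, but that is layout propagation behaviour rather than a documented contract. A quick check (illustrative shapes only):

import torch

crop = torch.rand(1, 3, 10, 10)[:, :, 1:-1, 1:-1]   # non-contiguous view
print(crop.is_contiguous())                          # False
print(crop.clamp(0., 1.).is_contiguous())            # True with current layout propagation, but not documented as a guarantee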
A quick evaluation can be made like this:
assert (im_true.reshape(N, C * H * W) != im_true.clone().resize_(N, C * H * W)).any()
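Expanded into a self-contained snippet (toy values, not tensors from this repo), the divergence between reshape and resize_ on a non-contiguous tensor looks like this:

import torch

a = torch.arange(6, dtype=torch.float32).reshape(2, 3)
b = a.t().clone()            # clone() keeps the transposed strides, so b is non-contiguous
assert not b.is_contiguous()

good = b.reshape(6)          # follows b's logical order
bad = b.clone().resize_(6)   # reinterprets the storage as C-contiguous

print(good)                  # tensor([0., 3., 1., 4., 2., 5.])
print(bad)                   # tensor([0., 1., 2., 3., 4., 5.])
assert (good != bad).any()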
Maybe we could change resize_ to reshape for robustness?
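Concretely, the suggestion would amount to something like this for the two affected lines in Loss_PSNR.forward (a sketch only):

# reshape respects the input's strides and copies only when needed.
Itrue = im_true.clamp(0., 1.).mul_(data_range).reshape(N, C * H * W)
Ifake = im_fake.clamp(0., 1.).mul_(data_range).reshape(N, C * H * W)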
Possibly related issues: #4, #8, #29, #45.
Thanks!