
PyTorch warnings about .grad access - safe to ignore? #8

Closed · classner opened this issue Mar 18, 2024 · 2 comments

Comments

@classner

Thanks for the amazing package! I just played around with it and found that PyTorch emits two warnings when using the package:

/path/to/python/lib/python3.10/site-packages/taichi/lang/kernel_impl.py:763: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)
  if v.requires_grad and v.grad is None:
/path/to/python/lib/python3.10/site-packages/taichi_splatting/misc/autograd.py:8: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                                              
  grads = [tensor.grad if tensor.grad is not None else None

It seems like gradients are still propagated, but those warnings are a little concerning - are they safe to ignore? Can you provide more context on what's happening here? The first one is related to a Taichi kernel launch and, judging from the source, is probably okay. The second one refers to taichi_splatting.misc.autograd, and I'm not sure what the restore_grad function is trying to do (I can't find an active reference).
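For context, here is a minimal standalone sketch (plain PyTorch, not taichi_splatting code) of what triggers this warning: reading .grad on a non-leaf tensor warns but does not affect gradient flow to leaf tensors, and retain_grad() is the documented way to keep .grad on an intermediate if it is actually needed:

```python
import torch

x = torch.ones(3, requires_grad=True)  # leaf tensor
y = x * 2                              # non-leaf tensor (result of an op)

y.sum().backward()

print(x.grad)  # tensor([2., 2., 2.]) - leaf gradient populated as usual
print(y.grad)  # None, and emits the same "non-leaf Tensor" UserWarning

# If the intermediate gradient is really wanted, call retain_grad() first:
x2 = torch.ones(3, requires_grad=True)
y2 = x2 * 2
y2.retain_grad()
y2.sum().backward()
print(y2.grad)  # tensor([1., 1., 1.]), no warning
```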

@oliver-batchelor
Contributor

oliver-batchelor commented Mar 18, 2024 via email

@classner
Author

Thanks a lot for the quick response - the linked issue explains this in much more detail. Fingers crossed this will be fixed upstream in Taichi. :)
