PyTorch warnings about .grad access - safe to ignore? #8
These are annoying warnings that arise from Taichi tinkering with the `.grad` value of a torch Tensor for its autograd integration. There should be a better way for them to handle the gradient integration. I think they shouldn't touch the `.grad` value at all (for example, call it something else like `.taichi_grad`, or allow tuples of `(value, grad)` as input to their kernel.grad function)!
I have raised this with them here:
taichi-dev/taichi#8339
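For context, a minimal sketch (plain PyTorch, not taken from taichi or taichi_splatting) of why the warning fires: `.grad` is only retained on leaf tensors, and accessing it on a non-leaf tensor emits exactly this UserWarning unless `.retain_grad()` was called first.

```python
import torch

# Leaf tensor: created directly by the user with requires_grad=True.
x = torch.ones(3, requires_grad=True)

# Non-leaf tensor: produced by an operation, so autograd does not keep its .grad.
y = x * 2
y.sum().backward()

print(x.grad)   # tensor([2., 2., 2.]) - leaf, so .grad is populated
print(y.grad)   # None, and this access emits the UserWarning quoted below

# Calling .retain_grad() before backward() tells autograd to keep the
# gradient on a non-leaf tensor, and the warning goes away for it.
z = x * 2
z.retain_grad()
z.sum().backward()
print(z.grad)   # tensor([1., 1., 1.]) - populated, no warning
```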
On Mon, Mar 18, 2024 at 4:08 PM Christoph Lassner wrote:
Thanks for the amazing package! I just played around with it and found
that PyTorch emits two warnings when using the package:
/path/to/python/lib/python3.10/site-packages/taichi/lang/kernel_impl.py:763: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)
if v.requires_grad and v.grad is None:
/path/to/python/lib/python3.10/site-packages/taichi_splatting/misc/autograd.py:8: UserWarning: The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won't be populated during autograd.backward(). If you indeed want the .grad field to be populated for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations. (Triggered internally at aten/src/ATen/core/TensorBody.h:489.)
grads = [tensor.grad if tensor.grad is not None else None
It seems like gradients are still propagated, but those warnings are a little concerning - are they safe to ignore? Can you provide more context on what's happening here? The first one is related to a Taichi kernel launch and, looking at the source, is *probably* okay. The second one refers to taichi_splatting.misc.autograd and I'm not sure what the restore_grad function is trying to do (I can't find an active reference).
Thanks a lot for the quick response - the linked issue explains this in much more detail. Fingers crossed this will be fixed upstream in Taichi. :)
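In the meantime, readers who decide the warnings are safe to ignore can silence just these messages rather than all UserWarnings. A minimal sketch using Python's standard warnings module; the message pattern is an assumption based on the warning text quoted above:

```python
import warnings

# Suppress only PyTorch's non-leaf .grad access warning; other
# UserWarnings will still be shown.
warnings.filterwarnings(
    "ignore",
    message=r"The \.grad attribute of a Tensor that is not a leaf Tensor",
    category=UserWarning,
)
```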