OOM problem in RGATConv
#9716 · Unanswered
songsong0425 asked this question in Q&A
Replies: 1 comment
I'd suggest checking the number of parameters in your model and calculating the model size, or you could also try profiling your GPU memory with the PyTorch profiler. The posts https://pytorch.org/blog/understanding-gpu-memory-1/ and https://pytorch.org/blog/understanding-gpu-memory-2/ might be helpful :)
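A minimal sketch of the two suggestions above, assuming plain PyTorch (the `nn.Linear` model is a stand-in for the actual GNN, and the helper names are made up for illustration):

```python
import torch
import torch.nn as nn

def param_count(model: nn.Module) -> int:
    """Total number of learnable parameters."""
    return sum(p.numel() for p in model.parameters())

def model_size_mb(model: nn.Module) -> float:
    """Rough in-memory size of the parameters (fp32 = 4 bytes each)."""
    return sum(p.numel() * p.element_size() for p in model.parameters()) / 1024**2

model = nn.Linear(10, 5)            # stand-in for your GNN
print(param_count(model))           # 10*5 weights + 5 biases = 55
print(f"{model_size_mb(model):.6f} MB")

# GPU-memory profiling as described in the linked blog posts (requires CUDA):
if torch.cuda.is_available():
    torch.cuda.memory._record_memory_history()
    # ... run a few training steps here ...
    torch.cuda.memory._dump_snapshot("snapshot.pickle")  # inspect at pytorch.org/memory_viz
```

Note that parameter size is only a lower bound on peak GPU memory; activations and optimizer state usually dominate, which is what the memory snapshot reveals.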
Hi, thank you as always for your effort in maintaining the library.

I have a question about an OOM problem in `RGATConv`. When I converted the original model using `GATConv`, it didn't cause any problems and memory usage could be controlled through the hyperparameters, but `RGATConv` returned the OOM message below. I guess that `RGATConv` stores larger weights or more parameters, and that this caused the problem. However, it's hard to solve because the memory is allocated automatically during training. Can anyone suggest any tips for this situation?
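One way to see why `RGATConv` can run out of memory where `GATConv` does not: `RGATConv` keeps a separate projection weight per relation type, so its weight footprint grows roughly linearly with `num_relations`. The back-of-envelope estimate below is a rough sketch (it ignores attention vectors and biases and is not PyG's exact internals), with made-up dimensions:

```python
# Rough parameter estimates (a sketch, not PyG's exact internals):
# GATConv projects with one weight of shape [in_channels, heads * out_channels];
# RGATConv keeps one such projection per relation type.

def gat_proj_params(in_channels: int, out_channels: int, heads: int) -> int:
    return in_channels * heads * out_channels

def rgat_proj_params(in_channels: int, out_channels: int,
                     heads: int, num_relations: int) -> int:
    return num_relations * gat_proj_params(in_channels, out_channels, heads)

gat = gat_proj_params(128, 64, heads=4)                      # 32768
rgat = rgat_proj_params(128, 64, heads=4, num_relations=50)  # 50x the GATConv projection
print(gat, rgat, rgat // gat)
```

If this is indeed the bottleneck, shrinking `out_channels`, `heads`, or the number of relation types reduces the weight footprint; `RGATConv` also exposes basis/block decomposition (`num_bases` / `num_blocks`) to share weights across relations.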