-
Hello, we are working on using your models to compare performance on a domain-specific dataset of ours:

`Data(x=[26695, 16], edge_index=[2, 174347], y=[26695], train_mask=[26695], val_mask=[26695], test_mask=[26695])`

We wanted to compare the following models with each other:
We tried different batch sizes, and training seems to work for sizes under 2400, but the AUC scores stay around 0.5-0.60. Do you have any ideas on what might be going wrong? Previously the reconstructions `x_` and `s_` could not be computed because the `Data` object contained isolated nodes, but we have removed those, and `data.validate()` now returns True. We greatly appreciate your library - thank you!
-
Thank you for the post! Since the loss value in epoch 0 is quite large, I suspect the NaN is caused by a large reconstruction loss, either for the features or for the structure (the adjacency matrix). If the features are the cause, normalizing them can help. Otherwise, you may need to use a smaller batch size to reduce the structural reconstruction loss.
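In case it helps, here is a minimal sketch of the feature-normalization suggestion using plain PyTorch Geometric. It assumes `data` is the `Data` object quoted above, and the per-feature standardization (zero mean, unit variance) is just one common choice, not something specific to this library:

```python
import torch_geometric.transforms as T

# Drop isolated nodes so the reconstructions x_ and s_ are defined
# for every node (mirrors the cleanup described in the question).
data = T.RemoveIsolatedNodes()(data)

# Standardize each feature column so no single large-magnitude
# feature dominates the feature reconstruction loss.
mean = data.x.mean(dim=0, keepdim=True)
std = data.x.std(dim=0, keepdim=True).clamp(min=1e-6)
data.x = (data.x - mean) / std

# Sanity-check the cleaned graph, as in the question.
assert data.validate()
```

`T.NormalizeFeatures()` is an alternative that row-normalizes the features instead. If the structural loss is the culprit, try passing a smaller `batch_size` to the detector; the exact constructor argument depends on the detector and library version, so please check your version's signature.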