Fix/normalized hypergraph laplacian #648
base: main
Conversation
Fixes the implementation of `normalized_hypergraph_laplacian` to prevent negative eigenvalues. Rewrites the core matrix calculations to follow the full definition.
Adds a property test for the sign of the eigenvalues, and adds error tests for the new `weights` parameter.
Updated the 'Raises' section of the function docstring to cover the type and length errors raised for the `weights` parameter.
Codecov Report
Attention: Patch coverage is

@@ Coverage Diff @@
##             main     #648      +/-   ##
==========================================
- Coverage   93.45%   93.35%   -0.11%
==========================================
  Files          64       64
  Lines        4993     5008      +15
==========================================
+ Hits         4666     4675       +9
- Misses        327      333       +6
L3, d = xgi.normalized_hypergraph_laplacian(H, index=True)
evals = eigh(L2, eigvals_only=True)
If you don't need the eigenvectors, you can use `eigvalsh`, which outputs the eigenvalues in ascending order. I think it's supposed to be faster.
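A minimal sketch of the swap; the small symmetric matrix here is only a stand-in for the Laplacian used in the test:

```python
import numpy as np
from scipy.linalg import eigh, eigvalsh

L2 = np.array([[1.0, -0.5], [-0.5, 1.0]])  # stand-in for the test Laplacian

# Current form: eigh with eigvals_only=True discards the eigenvectors.
evals_a = eigh(L2, eigvals_only=True)

# Suggested form: eigvalsh computes only the eigenvalues, in ascending order.
evals_b = eigvalsh(L2)

assert np.allclose(evals_a, evals_b)
```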
D = degree_matrix(H)
A, rowdict = clique_motif_matrix(H, sparse=sparse, index=True)
if weights is not None:
    if weights is not list:
I don't think this works? `weights is not list` compares the object against the `list` type itself, so it is True even when `weights` actually is a list. Better to use `isinstance` to check the type.
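A minimal sketch of the suggested check, wrapped in a hypothetical helper; the exact exception type and message used in the PR may differ:

```python
def _validate_weights(weights):
    # isinstance checks the object's type; `weights is not list` is an identity
    # comparison against the `list` type object and passes even for real lists.
    if weights is not None and not isinstance(weights, list):
        raise TypeError("weights must be a list of edge weights")
```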
De_inv = np.diag(list(
    map(
        lambda x: 1/x,
        np.sum(H_, axis=0)
    )
))
Suggested change:
De = np.sum(H_, axis=0)
De_inv = np.diag(1 / De)
A bit more legible and hopefully a bit faster
Dv = degree_matrix(H)
(
    H_,
I was confusing `H_` with a hypergraph; maybe rename it to `incidence` or something similar (even if they call it H in the paper).
# PERF: There is a faster way to do this calculation if unweighted.
# W can be ignored entirely if unweighted, but it adds a couple conditionals and complicates the code.
# It is untested, but I suspect the performance change is negligible.
L = (eye - (Dv_invsqrt @ H_ @ W @ De_inv @ np.transpose(H_) @ Dv_invsqrt))
Does this `np.transpose` call work when `H_` is a sparse array?
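As far as I can tell, `np.transpose` dispatches to the object's own `.transpose()` method, so the result stays sparse; a quick sketch with a toy incidence matrix (illustrative only), where `H_.T` is the more common idiom anyway:

```python
import numpy as np
from scipy.sparse import csr_array

# Toy incidence matrix standing in for H_ (nodes x edges).
H_ = csr_array(np.array([[1, 0], [1, 1], [0, 1]]))

Ht = np.transpose(H_)  # dispatches to H_.transpose(), stays sparse
print(type(Ht), Ht.shape)

Ht = H_.T              # equivalent and more idiomatic for sparse arrays
```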
The performance comment should be moved to an issue or solved with this PR.
L3, d = xgi.normalized_hypergraph_laplacian(H, index=True)
The values, or at least the eigenvalues, need to be tested in this case (the sparse case only).
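Something along these lines could work; a hypothetical test sketch (the hypergraph here is a stand-in for the existing test fixtures, and the non-negativity assertion reflects what this PR is meant to guarantee):

```python
import numpy as np
import xgi
from scipy.linalg import eigvalsh

def test_sparse_normalized_laplacian_eigenvalues():
    H = xgi.Hypergraph([[1, 2, 3], [2, 3, 4], [4, 5]])
    L, d = xgi.normalized_hypergraph_laplacian(H, index=True)
    # Densify the sparse Laplacian before the full eigendecomposition.
    evals = eigvalsh(L.toarray())
    # The fix should guarantee non-negative eigenvalues (up to floating-point noise).
    assert np.all(evals >= -1e-12)
```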
De_inv = diags(list(
    map(
        lambda x: 1/x,
        np.sum(H_, axis=0)
    )
))
Maybe try to simplify this similarly to the non-sparse case. Also, is `np.sum(H_, axis=0)` still sparse?
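For what it's worth, summing a sparse matrix along an axis already returns a dense result (a 1-D ndarray for the newer sparse arrays, an `np.matrix` for the older spmatrix classes), so a sketch of the simplified sparse branch might look like this (the toy incidence matrix is illustrative only):

```python
import numpy as np
from scipy.sparse import csr_array, diags

# Toy incidence matrix standing in for H_ (nodes x edges).
H_ = csr_array(np.array([[1, 0], [1, 1], [0, 1], [0, 1]]))

# Edge degrees come back dense; ravel() also flattens the np.matrix case.
De = np.asarray(H_.sum(axis=0)).ravel()

# Sparse diagonal matrix of inverse edge degrees, mirroring the dense-case suggestion.
De_inv = diags(1 / De)
```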
Thanks for the PR! It looks good; I only left a few comments to check that everything is okay in the sparse case and to simplify a few lines. About the weights, I'm thinking maybe we want the argument to be a boolean instead: that's how we have it for the adjacency matrix now.
Summary
Rewrote the `normalized_hypergraph_laplacian` implementation to match the reference implementation. Fixes #647.
Description
Rewrote the matrix calculations in `normalized_hypergraph_laplacian` to match the implementation referenced here: #647 (comment). Also adds a `weights` parameter to allow for differentially weighting edges in the Laplacian calculation.
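For reference, the rewritten calculation boils down to the form below; this is only a sketch of the dense branch using the names from the diff, with a toy incidence matrix for illustration:

```python
import numpy as np

# Toy incidence matrix (nodes x edges) and identity edge weights, for illustration only.
H_ = np.array([[1, 0], [1, 1], [0, 1], [0, 1]], dtype=float)
W = np.eye(H_.shape[1])

Dv_invsqrt = np.diag(1 / np.sqrt(H_.sum(axis=1)))   # node degrees, D_v^{-1/2}
De_inv = np.diag(1 / H_.sum(axis=0))                # edge degrees, D_e^{-1}
eye = np.eye(H_.shape[0])

# L = I - D_v^{-1/2} H W D_e^{-1} H^T D_v^{-1/2}
L = eye - Dv_invsqrt @ H_ @ W @ De_inv @ H_.T @ Dv_invsqrt
```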
Concerns
I have not added any unit tests for the effect of edge weights (only for their acceptable definition); I am unsure what the expected behavior would be.
Other
Currently this only supports strictly positive weights - a weight of zero would cause some issues with rank. That said, an edge weight of 0 could be interpreted as the subgraph with that edge removed, which could be a nice way to handle zero weights in the future.