Describe the bug
It seems that relevances are not conserved across layers when using the LRP-A1B0, LRP-W2, and LRP-Bounded rules. I can write code manually that does conserve these relevances if needed, but I don't fully understand why the library does not. I do have some intuition, as the minimal working example below shows: a manual LRP-A1B0 with no bias term in the numerator and only a single bias term in the denominator reproduces the library's A1B0 output exactly. While this confirms what the library computes, because the denominator does not carry a multiple of the bias term, relevance is then not conserved across layers. Using the library's LRP-A1B0-IB rule improves things somewhat, but still not up to machine precision.
In mathematical terms, the library computes the following (for a 2x2 weight matrix, a 2x1 bias, and the LRP-A1B0 rule):
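Writing (in my notation) $a_k$ for the layer inputs, $w_{kj}$ for the weights, $b_j$ for the biases, $R_j$ for the relevance of output neuron $j$, and $(\cdot)^{+}$ for the positive part, this is roughly

$$
R_i \;=\; \sum_{j=1}^{2} \frac{(a_i\, w_{ij})^{+}}{\sum_{k=1}^{2} (a_k\, w_{kj})^{+} \;+\; b_j^{+}}\; R_j ,
\qquad i = 1, 2 ,
$$

so in general $\sum_i R_i \neq \sum_j R_j$ whenever a positive bias enters one of the denominators, because the bias share of each $R_j$ is never redistributed to the inputs.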
What should be expected for relevance conservation is this:
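Roughly, a form that does conserve puts the bias into every numerator, so the denominator carries a multiple of the bias (here twice, for the two inputs):

$$
R_i \;=\; \sum_{j=1}^{2} \frac{(a_i\, w_{ij})^{+} \;+\; b_j^{+}}{\sum_{k=1}^{2} (a_k\, w_{kj})^{+} \;+\; 2\, b_j^{+}}\; R_j ,
\qquad i = 1, 2 ,
$$

which gives $\sum_i R_i = \sum_j R_j$ by construction (equivalently, the bias can be assigned its own relevance share, so that input relevance plus bias relevance equals the output relevance).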
An (almost) minimal working example can be found here: https://github.com/Shreyas911/XAIRT/blob/main/examples_TomsQoI/LRP_manual_MWE.ipynb
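For reference, here is a condensed NumPy sketch of the same comparison; the arrays and variable names are illustrative, not the library's API:

```python
import numpy as np

# Toy single Dense layer: 2x2 weights, 2x1 bias, 2x1 input (illustrative values).
W = np.array([[0.6, -0.3],
              [0.2,  0.8]])          # W[i, j]: input i -> output j
b = np.array([0.1, -0.2])
a = np.array([1.0, 0.5])

z = a @ W + b                        # pre-activations
R_out = z.copy()                     # relevance to redistribute from the output layer

pos = np.maximum(a[:, None] * W, 0.0)   # (a_i w_ij)^+
b_pos = np.maximum(b, 0.0)              # b_j^+

# Variant 1: no bias in the numerator, one bias in the denominator
# (this reproduces the library's A1B0 output in my tests).
R_in_1 = (pos / (pos.sum(axis=0) + b_pos)) @ R_out

# Variant 2: bias in every numerator, hence a multiple of the bias in the
# denominator -- conserving by construction.
num2 = pos + b_pos
R_in_2 = (num2 / num2.sum(axis=0)) @ R_out

print("variant 1 gap:", R_in_1.sum() - R_out.sum())   # exactly the bias relevance share
print("variant 2 gap:", R_in_2.sum() - R_out.sum())   # ~1e-16 (machine precision)
```

The gap for variant 1 is exactly the relevance share absorbed by the bias terms in the denominators, which is where I suspect the non-conservation comes from.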
Expected behavior
I expected the relevance to be conserved for all of these rules, especially since a manual implementation for a single-layer network achieves relevance conservation across layers up to machine precision.
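Concretely, by "conserved up to machine precision" I mean a check along these lines (an illustrative helper, not part of the library):

```python
import numpy as np

def conservation_gap(R_in: np.ndarray, R_out: np.ndarray) -> float:
    """Absolute difference between total input-layer and output-layer relevance.

    For a conserving rule this should sit at float64 round-off level (~1e-15);
    with the library's A1B0 / W2 / Bounded rules I see a visibly larger gap.
    """
    return float(abs(R_in.sum() - R_out.sum()))
```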
Any help/insights are appreciated. Thanks a lot!