How does LRP work when used in Autoencoders? #272
Comments
When you wrap a Keras model in an explainer, it is up to you whether you analyse the latent space after your encoder or the reconstructions themselves: do you want to explain single dimensions of your latent code, or single dimensions of your reconstruction? Most other applications have a classic supervised setup in which the predicted class (logit/score) is explained.
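This choice of explanation target can be sketched in plain NumPy (a toy illustration with made-up layer sizes and random stand-in weights, not the toolbox's actual API): the quantity you explain is simply the output neuron at which you initialise the relevance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense autoencoder: 8 inputs -> 3 latent units -> 8 reconstructed outputs.
# Weights are random stand-ins; biases and activations are omitted for clarity.
W_enc = rng.normal(size=(8, 3))
W_dec = rng.normal(size=(3, 8))

x = rng.normal(size=8)
z = x @ W_enc        # latent code
x_hat = z @ W_dec    # reconstruction

# Explaining a latent dimension: initialise relevance at the encoder output.
R_latent = np.zeros(3)
R_latent[1] = z[1]          # explain latent dimension 1

# Explaining a reconstruction: initialise relevance at the decoder output.
R_recon = np.zeros(8)
R_recon[4] = x_hat[4]       # explain reconstructed output dimension 4
```

In the first case the LRP backward pass only runs through the encoder; in the second it runs through the full autoencoder.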
Thanks for your reply @enryH! If I want to explain the reconstructions of my inputs, do I need to change anything in the normal configuration of my LRP analyzer?
Normally it is advised (in the papers) to look at the logits, so you need to remove the output activation. What loss are you using?
Yes, I already removed the activation function! For the loss I'm using MSE @enryH
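With the output activation removed, the LRP backward pass starts directly from the pre-activation reconstruction. A minimal NumPy sketch of the LRP epsilon rule through a toy dense autoencoder (random stand-in weights and sizes, not the actual model from this thread) shows the mechanics, including the approximate conservation of relevance back to the input:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dense autoencoder (random stand-in weights, biases omitted for clarity).
W_enc = rng.normal(size=(8, 3))
W_dec = rng.normal(size=(3, 8))

x = rng.normal(size=8)
z = x @ W_enc        # latent code
x_hat = z @ W_dec    # reconstruction (linear output: activation removed)

def lrp_eps(a, W, R_out, eps=1e-6):
    """LRP epsilon rule for one dense layer: redistribute R_out onto inputs a."""
    pre = a @ W                          # pre-activations of this layer
    denom = pre + eps * np.sign(pre)     # stabilised denominator
    s = R_out / denom
    return a * (W @ s)                   # relevance of the layer's inputs

# Explain a single reconstruction dimension j: one-hot relevance at the output.
j = 4
R_out = np.zeros(8)
R_out[j] = x_hat[j]

R_latent = lrp_eps(z, W_dec, R_out)      # backward through the decoder
R_input = lrp_eps(x, W_enc, R_latent)    # backward through the encoder

# R_input.sum() is approximately x_hat[j]: relevance is (nearly) conserved.
```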
@HugoTex98 If you like, you can also check out our paper https://arxiv.org/abs/1910.13140, which goes beyond explaining single dimensions in the latent space.
Hello everyone!
I decided to use this amazing toolbox in my Autoencoder model, but I'm having doubts about how it works in this type of model...
How will the relevance scores be calculated in this case? Are they computed in my encoder, or when the input is reconstructed?
Can anyone help me with this question?
My model is the following:
`
`