Hi,
I am having trouble with the heatmaps when I train a ResNet model and use LRP to visualize explanations on a cats-and-dogs image dataset. Attached below are the heatmap of the LRP relevances (analyzer output, left), the LRP mask overlaid on the original image (right), and a distribution plot of the relevance values. For many images I get negative relevances even though I use the alpha_1_beta_0 rule, which should propagate only positive relevances. I do not see this issue with a simple CNN or with VGG16. What could be the reason for negative relevances in ResNet under the alpha_1_beta_0 rule, and how can I fix it? Kindly help me fix it.
Thanks in advance
# ResNet tiny model
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, BatchNormalization, ReLU, Add, AveragePooling2D, Flatten, Dense
from tensorflow.keras.models import Model
from tensorflow.keras.utils import plot_model

def residual_block(x, filters, stride=1):
    shortcut = x
    x = Conv2D(filters, kernel_size=(1, 1), strides=stride, padding='same', data_format='channels_last')(x)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    x = Conv2D(filters, kernel_size=(3, 3), padding='same')(x)
    x = BatchNormalization()(x)
    if stride != 1:
        # Project the shortcut so its shape matches the main branch before adding
        shortcut = Conv2D(filters, kernel_size=(1, 1), strides=stride)(shortcut)
        shortcut = BatchNormalization()(shortcut)
    x = Add()([x, shortcut])
    x = ReLU()(x)
    return x

def tiny_resnet(input_shape, num_classes, **modelParams):
    print('input_shape: ', input_shape, input_shape[1:])  # input_shape: (1800, 124, 124, 3)
    inputs = Input(shape=input_shape[1:])
    x = Conv2D(64, kernel_size=(5, 5), strides=2, padding='same')(inputs)
    x = BatchNormalization()(x)
    x = ReLU()(x)
    x = AveragePooling2D(pool_size=(3, 3), strides=2, padding='same', data_format='channels_last')(x)
    # Only one residual block
    x = residual_block(x, 64)
    x = AveragePooling2D(pool_size=(3, 3))(x)
    x = Flatten()(x)
    x = Dense(num_classes, activation='softmax')(x)
    model = Model(inputs=inputs, outputs=x)
    plot_model(model, to_file='tiny_resnet_diagram.png', show_shapes=True, show_layer_names=True)
    return model
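For completeness, here is a minimal sketch of how I wire the model and the analyzer together. The training/weight loading and the input batch below are placeholders rather than my actual script, num_classes=2 is assumed for cat vs. dog, and the softmax-stripping helper may live in a different module depending on the iNNvestigate version:

import numpy as np
import innvestigate
import innvestigate.utils as iutils

model = tiny_resnet((1800, 124, 124, 3), num_classes=2)
# ... training / model.load_weights(...) happens here ...

# LRP should be applied to the pre-softmax scores
# (in v2 the helper is exposed as innvestigate.model_wo_softmax)
model_wo_softmax = iutils.model_wo_softmax(model)
analyzer = innvestigate.create_analyzer("lrp.alpha_1_beta_0", model_wo_softmax)

x_batch = np.random.rand(1, 124, 124, 3).astype("float32")  # placeholder for a preprocessed test image
relevance = analyzer.analyze(x_batch)
print(relevance.min(), relevance.max())  # min() comes out negative for many real images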
Expected behavior
I should get only positive relevance values, since I am using the lrp.alpha_1_beta_0 rule.
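For reference, a toy NumPy check of what the alpha_1_beta_0 propagation does for a single dense layer (the shapes and values below are made up purely for illustration): with alpha = 1 and beta = 0, only the positive contributions a_j * w_jk are redistributed, so the relevance passed backwards cannot become negative as long as the incoming relevance is non-negative.

import numpy as np

rng = np.random.default_rng(0)
a = rng.random(4)              # non-negative input activations (e.g. after ReLU)
W = rng.normal(size=(4, 3))    # weights with mixed signs
R_out = rng.random(3)          # non-negative relevance arriving from the layer above

z_plus = np.maximum(a[:, None] * W, 0.0)                # keep only positive contributions
R_in = z_plus @ (R_out / (z_plus.sum(axis=0) + 1e-9))   # redistribute them proportionally
print(R_in.min() >= 0)         # True: alpha_1_beta_0 cannot create negative relevance here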
Screenshots
Original test Image:
Platform information
OS: Windows 11
Python version: 3.8
iNNvestigate version: v1, and also v2.1.2
Attached is the model trained on the cat and dog dataset: model.zip