[BUG] Problem with Add Layer in 1D ResNet network #287

Closed
mg97tud opened this issue Sep 6, 2022 · 1 comment
mg97tud commented Sep 6, 2022

I am using iNNvestigate 2.0.0 and want to explain ECG classifications with guided backpropagation.
My model has the form:

#Pre (import added for completeness; iNNvestigate 2.0 works with tf.keras)
from tensorflow import keras

nb_classes = 3
n_feature_maps = 64
input_layer = keras.layers.Input(shape=(2700, 1))
conv_1 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(input_layer)
conv_1 = keras.layers.BatchNormalization()(conv_1)
conv_1 = keras.layers.Activation('relu')(conv_1)
conv_2 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(conv_1)
conv_2 = keras.layers.BatchNormalization()(conv_2)
conv_2 = keras.layers.Activation('relu')(conv_2)
conv_2 = keras.layers.Dropout(.1)(conv_2)
conv_3 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(conv_2)
shortcut_1 = keras.layers.Add()([conv_3, conv_1])
#Block 1
res_1 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(shortcut_1)
res_1 = keras.layers.BatchNormalization()(res_1)
res_1 = keras.layers.Activation('relu')(res_1)
res_1 = keras.layers.Dropout(.1)(res_1)
res_2 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(res_1)
res_2 = keras.layers.BatchNormalization()(res_2)
res_2 = keras.layers.Activation('relu')(res_2)
res_2 = keras.layers.Dropout(.1)(res_2)
res_3 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(res_2)
shortcut_2 = keras.layers.Add()([res_3, res_1])
#Block 2
res_4 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(shortcut_2)
res_4 = keras.layers.BatchNormalization()(res_4)
res_4 = keras.layers.Activation('relu')(res_4)
res_4 = keras.layers.Dropout(.1)(res_4)
res_5 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(res_4)
res_5 = keras.layers.BatchNormalization()(res_5)
res_5 = keras.layers.Activation('relu')(res_5)
res_5 = keras.layers.Dropout(.1)(res_5)
res_6 = keras.layers.Conv1D(filters=n_feature_maps, kernel_size=16, padding='same')(res_5)
shortcut_3 = keras.layers.Add()([res_6, res_4])
#Final
fin_1 = keras.layers.BatchNormalization()(shortcut_3)
fin_1 = keras.layers.Activation('relu')(fin_1)
fin_1 = keras.layers.GlobalAveragePooling1D()(fin_1)
output_layer = keras.layers.Dense(nb_classes, activation='softmax')(fin_1)

The problem arises when I execute:
analysis_guidedback_resnet = analyzer_guidedback_resnet.analyze(X)
It fails in the "apply(layer: Layer, inputs: OptionalList[Tensor]) -> list[Tensor]" function in "__init__.py" in the "backend" folder, when it tries to execute "ret = layer(inputs[0])" in line 162.

The concrete error is:

Message = 2 root error(s) found.
(0) Invalid argument: You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,2700,1]
[[{{node input_1}}]]
[[gradients_4/batch_normalization_6/moments/SquaredDifference_grad/scalar/_715]]
(1) Invalid argument: You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,2700,1]
[[{{node input_1}}]]
0 successful operations.
0 derived errors ignored.
Source = C:\LayerRelevanceDA\innvestigate\backend\__init__.py
Stack trace:
File "C:\LayerRelevanceDA\innvestigate\backend\__init__.py", line 162, in apply
ret = layer(inputs[0])
File "C:\LayerRelevanceDA\innvestigate\analyzer\gradient_based.py", line 223, in guided_backprop_reverse_relu_layer
reversed_Ys = ibackend.apply(activation, reversed_Ys)
File "C:\LayerRelevanceDA\innvestigate\backend\graph.py", line 1245, in reverse_model
"stop_mapping_at_ids": local_stop_mapping_at_ids,
File "C:\LayerRelevanceDA\innvestigate\analyzer\reverse_base.py", line 251, in _reverse_model
return_all_reversed_tensors=return_all_reversed_tensors,
File "C:\LayerRelevanceDA\innvestigate\analyzer\reverse_base.py", line 272, in _create_analysis
return_all_reversed_tensors=return_all_reversed_tensors,
File "C:\LayerRelevanceDA\innvestigate\analyzer\gradient_based.py", line 257, in _create_analysis
return super()._create_analysis(*args, **kwargs)
File "C:\LayerRelevanceDA\innvestigate\analyzer\network_base.py", line 166, in create_analyzer_model
model, stop_analysis_at_tensors=stop_analysis_at_tensors
File "C:\LayerRelevanceDA\innvestigate\analyzer\network_base.py", line 251, in analyze
self.create_analyzer_model()
File "C:\LayerRelevanceDA\visualize_importance.py", line 175, in (Current frame)
analysis_guidedback_resnet = analyzer_guidedback_resnet.analyze(X)

When the error arises, the inputs[0] tensor from ret = layer(inputs[0]) has the name gradients_7/AddN:0, so I guess there is some problem with the Add layer.

With my current setup I have already analyzed other models without problems; only now, when I use a ResNet structure with the Add layer, does the problem occur.

I also tried to explain my ResNet predictions with DeepTaylor and got the same error:

Message = 2 root error(s) found.
(0) Invalid argument: You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,2700,1]
[[{{node input_1}}]]
[[gradients_6/batch_normalization_6/moments/SquaredDifference_grad/scalar/_759]]
(1) Invalid argument: You must feed a value for placeholder tensor 'input_1' with dtype float and shape [?,2700,1]
[[{{node input_1}}]]
0 successful operations.
0 derived errors ignored.
Source = C:\LayerRelevanceDA\innvestigate\analyzer\relevance_based\relevance_rule.py
Stack trace:
File "C:\Users\marcg\Desktop\DA_Code\LayerRelevanceDA\innvestigate\analyzer\relevance_based\relevance_rule.py", line 264, in
tmp1 = [klayers.Multiply()([a, b]) for a, b in zip(X1, grads1)]
File "C:\LayerRelevanceDA\innvestigate\analyzer\relevance_based\relevance_rule.py", line 264, in fn_tmp
tmp1 = [klayers.Multiply()([a, b]) for a, b in zip(X1, grads1)]
File "C:\LayerRelevanceDA\innvestigate\analyzer\relevance_based\relevance_rule.py", line 274, in apply
self._layer_wo_act_positive, self._layer_wo_act_negative, Xs_pos, Xs_neg
File "C:\LayerRelevanceDA\innvestigate\backend\graph.py", line 1245, in reverse_model
"stop_mapping_at_ids": local_stop_mapping_at_ids,
File "C:\LayerRelevanceDA\innvestigate\analyzer\reverse_base.py", line 251, in _reverse_model
return_all_reversed_tensors=return_all_reversed_tensors,
File "C:\LayerRelevanceDA\innvestigate\analyzer\reverse_base.py", line 272, in _create_analysis
return_all_reversed_tensors=return_all_reversed_tensors,
File "C:\LayerRelevanceDA\innvestigate\analyzer\deeptaylor.py", line 134, in _create_analysis
return super()._create_analysis(*args, **kwargs)
File "C:\LayerRelevanceDA\innvestigate\analyzer\deeptaylor.py", line 193, in _create_analysis
return super()._create_analysis(*args, **kwargs)
File "C:\LayerRelevanceDA\innvestigate\analyzer\network_base.py", line 166, in create_analyzer_model
model, stop_analysis_at_tensors=stop_analysis_at_tensors
File "C:\LayerRelevanceDA\innvestigate\analyzer\network_base.py", line 251, in analyze
self.create_analyzer_model()
File "C:\LayerRelevanceDA\visualize_importance.py", line 175, in (Current frame)
analysis_dt_resnet = analyzer_dt_resnet.analyze(X)

Maybe someone can point me to an extra option I am not aware of.
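For background on where the first traceback fails: guided backpropagation modifies the ReLU backward pass so that the signal is masked by both the forward activation and the sign of the incoming gradient (this is what guided_backprop_reverse_relu_layer implements). A minimal numpy sketch of that rule, as my own illustration rather than innvestigate code:

```python
import numpy as np

def guided_relu_backward(x, upstream_grad):
    """Guided backprop ReLU rule: pass the gradient only where the
    forward input was positive AND the upstream gradient is positive."""
    return upstream_grad * (x > 0) * (upstream_grad > 0)

x = np.array([-1.0, 2.0, 3.0])
g = np.array([0.5, -0.5, 1.0])
guided_relu_backward(x, g)  # only the last element survives both masks
```

This rule itself is elementwise and unaffected by Add layers, which is consistent with the error pointing at graph construction (an unfed input_1 placeholder) rather than at the rule's arithmetic.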

@mg97tud mg97tud added the bug label Sep 6, 2022
adrhill commented Oct 11, 2022

Hi @mg97tud,

this looks related to #292.

As a temporary workaround until we've fixed this issue, you could try to convert your BatchNormalization layers to Dense layers after training the model.
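The suggested workaround is feasible because, after training, a BatchNormalization layer in inference mode is just a per-channel affine map y = gamma * (x - mean) / sqrt(var + eps) + beta, which can be baked into a layer with fixed weights. A numpy sketch of the parameter folding (my own illustration of the idea, not an innvestigate or Keras API):

```python
import numpy as np

def bn_to_affine(gamma, beta, moving_mean, moving_var, eps=1e-3):
    """Collapse inference-mode BatchNorm parameters into a per-channel
    (scale, bias) pair, usable as fixed Dense/1x1-Conv weights."""
    scale = gamma / np.sqrt(moving_var + eps)
    bias = beta - moving_mean * scale
    return scale, bias

# Toy parameters for a 2-channel layer (eps=0 to keep the numbers exact).
gamma = np.array([1.0, 2.0])
beta = np.array([0.5, -0.5])
mean = np.array([0.0, 1.0])
var = np.array([1.0, 4.0])

scale, bias = bn_to_affine(gamma, beta, mean, var, eps=0.0)
x = np.array([[1.0, 3.0]])
y_affine = x * scale + bias                          # folded layer
y_bn = gamma * (x - mean) / np.sqrt(var) + beta      # original BatchNorm
assert np.allclose(y_affine, y_bn)
```

Note that Keras's BatchNormalization uses epsilon=1e-3 by default, so the same eps should be used when folding a trained layer.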

@adrhill adrhill closed this as completed Oct 11, 2022