For a regression task, I am using a mid-size CNN: Conv and MaxPool layers at the front, Dense layers at the end.
This is how I integrate the evidential loss (before this, I used an MSE loss):
```python
optimizer = tf.keras.optimizers.Adam(learning_rate=7e-7)

def EvidentialRegressionLoss(true, pred):
    return edl.losses.EvidentialRegression(true, pred, coeff=CONFIG.EDL_COEFF)

model.compile(
    optimizer=optimizer,
    loss=EvidentialRegressionLoss,
    metrics=["mae"]
)
```
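As a first check, the loss can be probed in isolation on synthetic, well-behaved outputs before any training. This is my own sketch: the four-way concatenation and the positivity constraints below are assumptions based on how DenseNormalGamma lays out its (mu, v, alpha, beta) outputs, with one regression target.

```python
import tensorflow as tf

# Probe the loss on fake NIG outputs, mirroring the constraints
# DenseNormalGamma applies (softplus positivity, alpha shifted above 1).
y_true = tf.random.normal((8, 1))
fake_pred = tf.concat([
    tf.random.normal((8, 1)),                        # mu: unconstrained
    tf.nn.softplus(tf.random.normal((8, 1))),        # v > 0
    1.0 + tf.nn.softplus(tf.random.normal((8, 1))),  # alpha > 1
    tf.nn.softplus(tf.random.normal((8, 1))),        # beta > 0
], axis=-1)
print(EvidentialRegressionLoss(y_true, fake_pred))   # should print a finite value
```

If this already produces NaN, the problem is in the loss itself rather than in the training dynamics.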
This is how I integrated the DenseNormalGamma layer:
```python
# lots of Conv layers
model.add(layers.Conv2D(filters=256, kernel_size=(3, 3), padding="same", activation="relu"))
model.add(layers.Conv2D(filters=256, kernel_size=(3, 3), padding="same", activation="relu"))
model.add(layers.MaxPooling2D(pool_size=(2, 2)))
model.add(layers.Flatten())
model.add(layers.Dense(1024, activation="relu"))
model.add(layers.Dense(128, activation="relu"))
model.add(edl.layers.DenseNormalGamma(1))  # instead of Dense(1)
return model
```
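For anyone reproducing this: at inference time the four NIG parameters come back concatenated along the last axis, so they can be split apart as below. This is a sketch, not part of the original model code; `x_batch` is a hypothetical batch matching the CNN's input shape, and the uncertainty formulas follow the evidential regression paper.

```python
import tensorflow as tf

# Split the DenseNormalGamma output into its four NIG parameters.
y_pred = model(x_batch)                  # shape (batch, 4) for one target
mu, v, alpha, beta = tf.split(y_pred, 4, axis=-1)

prediction = mu                          # point estimate
aleatoric = beta / (alpha - 1)           # expected data noise, E[sigma^2]
epistemic = beta / (v * (alpha - 1))     # model uncertainty, Var[mu]
```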
Here is the issue I am facing:
- 0.0007 = 7e-4 was a learning rate that worked well (with the earlier MSE loss).
- With 7e-7 I get loss=NaN, mostly already in the very first epoch of training.
- With 7e-9 I don't get NaN, but of course the network is not learning fast enough.

Is there any obvious mistake I am making? Any thoughts and help are appreciated.
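For what it's worth, two stock tf.keras knobs can at least localize the blow-up: a callback that stops training the moment any batch loss becomes NaN, and gradient-norm clipping on the optimizer. This is a sketch; the specific values are assumptions, not recommendations from the edl repo.

```python
import tensorflow as tf

# Stop as soon as any batch loss becomes NaN, so the failing step is visible.
nan_guard = tf.keras.callbacks.TerminateOnNaN()

# Clip the global gradient norm; an exploding gradient is a common NaN source.
optimizer = tf.keras.optimizers.Adam(learning_rate=7e-7, clipnorm=1.0)

# model.fit(x_train, y_train, callbacks=[nan_guard], ...)
```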
This is maybe because of `evidential-deep-learning/evidential_deep_learning/losses/continuous.py`, line 35 in 7a22a2c:

```python
- alpha*tf.math.log(twoBlambda) \
```

where the log is not safe.
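If that line is the culprit, one common way to make such a log numerically safe is to clamp its argument away from zero. A minimal sketch, not a patch from the repo; `eps` is an assumed floor:

```python
import tensorflow as tf

def safe_log(x, eps=1e-7):
    # Clamp the argument away from zero so tf.math.log cannot return -inf/NaN.
    # eps is an assumed constant; the repo itself does not define one here.
    return tf.math.log(tf.maximum(x, eps))

# Line 35 would then read something like:
#   - alpha * safe_log(twoBlambda) \
```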
I have run into the same problem. Could you tell me how to solve it?