I'm having a similar issue to @SaumilShah66's: I'm getting infs in adf.Softmax and subsequently NaNs in the heteroscedastic softmax loss function. In the paper you seem to suggest that you do not use a heteroscedastic loss, since it is intended for regression problems. Is there a reason you're using it in the training code for the classification problem?
As I replied to @SaumilShah66, it is a known problem that training with the heteroscedastic loss can be difficult because of numerical instability. As you noticed, we mention in the paper that it wasn't possible to train the heteroscedastic neural network from Kendall et al. because of numerical instability, which the SoftMax layer amplifies. To address this when training the ADF network with the heteroscedastic loss (which we needed for the sake of completeness), we initialized the network weights from the best pretrained ResNet-18 checkpoint, which is available both with and without dropout. You can try it yourself: no modification to the code is needed, you only need to load one of the two available checkpoints before starting to train.
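For anyone who wants to follow this workaround, here is a minimal sketch of loading a pretrained checkpoint before training. The model constructor, checkpoint path, and state-dict key below are assumptions (a plain torchvision ResNet-18 stands in for the repo's ADF model); substitute the actual model class and checkpoint file from this repo's training script.

```python
import torch
from torchvision.models import resnet18

# Stand-in for the repo's ADF ResNet-18; instantiate the actual
# model class from the training script here instead.
model = resnet18(num_classes=10)  # num_classes is an assumption

# Hypothetical path -- point this at whichever of the two pretrained
# ResNet-18 checkpoints (with or without dropout) you want to start from.
ckpt = torch.load("pretrained/resnet18.pth", map_location="cpu")

# Some scripts save the raw state dict, others nest it under a key.
state_dict = ckpt.get("state_dict", ckpt)

# strict=False tolerates naming differences between the deterministic
# network and its ADF counterpart, if any.
model.load_state_dict(state_dict, strict=False)

# ...then launch training with the heteroscedastic loss as usual.
```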